
Load Balancing for CompTIA Network+ N10-009

Load balancing distributes incoming network traffic across multiple servers to prevent any single server from becoming a bottleneck. CompTIA Network+ N10-009 tests load balancing concepts in implementation and high-availability contexts. You must understand load balancing algorithms, health checks, and session persistence, and how load balancers fit into network design for scalability and availability.

7 min
3 sections · 7 exam key points
1 practice question

Load Balancing Fundamentals

A load balancer sits in front of a server pool and distributes client requests across the pool members. From the client's perspective, all communication is with a single virtual IP (VIP); the load balancer transparently forwards requests to back-end servers. When a server fails, the load balancer detects the failure via health checks and removes it from rotation; the remaining servers absorb its traffic.

Layer 4 load balancing: distributes traffic based on TCP/UDP headers (IP address and port) without inspecting application content. Fast and efficient. Cannot make content-based decisions. Layer 7 load balancing: inspects HTTP/HTTPS content — can route requests based on URL path, cookies, headers, or host name. Enables content switching: /images/* to image servers, /api/* to API servers.
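The content switching described above can be sketched in Python (a minimal illustration; the pool names and URL paths are assumptions, not any real product's configuration):

```python
def route_request(path: str) -> str:
    """Layer 7 content switching: choose a back-end pool by URL path."""
    if path.startswith("/images/"):
        return "image-servers"   # static content pool (illustrative name)
    if path.startswith("/api/"):
        return "api-servers"     # API pool (illustrative name)
    return "default-pool"

print(route_request("/api/v1/users"))     # api-servers
print(route_request("/images/logo.png"))  # image-servers
```

A Layer 4 balancer cannot make this decision at all, because the URL path lives inside the application payload, not the TCP/UDP header.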

Load Balancing Algorithms

Round-robin: requests distributed sequentially across servers. Simple, equal distribution assuming servers have equal capacity and requests take equal time. Weighted round-robin: same as round-robin but servers with higher weight receive proportionally more requests — accommodates servers with different capacities.
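The rotation logic above can be sketched in Python (server names and weights are illustrative; production balancers typically use a smoother interleaving than this expanded list):

```python
import itertools

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs.
    Yields server names in proportion to their weights.
    Plain round-robin is the special case where every weight is 1."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

# web1 has triple the capacity of web2, so it receives 3 of every 4 requests
rotation = weighted_round_robin([("web1", 3), ("web2", 1)])
print([next(rotation) for _ in range(8)])
# ['web1', 'web1', 'web1', 'web2', 'web1', 'web1', 'web1', 'web2']
```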

Least connections: new requests sent to the server with the fewest active connections. Better for sessions with variable duration. Weighted least connections accounts for server capacity differences. IP hash: the client's source IP determines which server receives the request — the same client always goes to the same server (provides simple persistence without tracking state). Random: requests assigned randomly — simple but potentially uneven.
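Least connections and IP hash can be sketched as follows (a hedged illustration; the server names and the choice of hash function are assumptions, and real implementations also factor in server weights):

```python
import hashlib

def least_connections(active):
    """active: dict mapping server name -> current connection count.
    The new request goes to the least-loaded server."""
    return min(active, key=active.get)

def ip_hash(client_ip, servers):
    """Hash the client's source IP so the same client always lands on
    the same server: simple persistence without tracking any state."""
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

print(least_connections({"web1": 12, "web2": 3, "web3": 7}))  # web2

pool = ["web1", "web2", "web3"]
# Deterministic: repeated requests from one IP always hit one server
assert ip_hash("203.0.113.10", pool) == ip_hash("203.0.113.10", pool)
```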

Session Persistence and Health Checks

Session persistence (sticky sessions): ensures a client's requests always reach the same back-end server during a session. Important for stateful applications that store session data locally on the server. Methods: source IP affinity, cookie-based persistence (load balancer inserts a cookie identifying the server). Without persistence, a user could be redirected to a different server mid-session and lose their state.
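Cookie-based persistence can be sketched like this (the SERVERID cookie name and the helper function are hypothetical, not a specific product's behavior):

```python
def pick_server(cookies, servers, choose):
    """Honor an existing persistence cookie; otherwise let any
    balancing algorithm choose, then insert the cookie so later
    requests from this client stick to the same server."""
    sticky = cookies.get("SERVERID")
    if sticky in servers:
        return sticky, cookies          # returning client: same server
    chosen = choose(servers)            # new client: the algorithm decides
    cookies["SERVERID"] = chosen        # load balancer inserts the cookie
    return chosen, cookies

pool = ["web1", "web2"]
first, jar = pick_server({}, pool, lambda s: s[0])
# Later request: even if the algorithm would now pick web2,
# the cookie keeps the client on web1
later, jar = pick_server(jar, pool, lambda s: s[1])
print(first, later)  # web1 web1
```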

Health checks: the load balancer periodically tests each server's availability. Types: ICMP ping (basic — is the server alive?), TCP connection check (is the port open?), HTTP/HTTPS GET request (is the application responding correctly?). Servers failing health checks are removed from rotation. Servers recovering are added back. Active-passive load balancing: one server is primary, standby activates only on failure — provides failover, not load distribution.
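A TCP check and an HTTP check can be sketched with the standard library (timeouts and the health-check URL are assumptions; real load balancers also require several consecutive passes or failures before changing a server's state):

```python
import socket
import urllib.request

def tcp_health_check(host, port, timeout=2.0):
    """Layer 4 check: is the service port accepting connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_health_check(url, timeout=2.0):
    """Layer 7 check: does the application itself answer 200 OK?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# A closed port fails the L4 check, so the server would be pulled from rotation
print(tcp_health_check("127.0.0.1", 1))  # False
```

Note the layering: a server can pass the TCP check (port open) while failing the HTTP check (application hung), which is why application-level checks are preferred for web workloads.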

Load Balancing Algorithms

Algorithm            | Method                      | Best for
Round-robin          | Sequential                  | Equal servers, equal request duration
Weighted round-robin | Sequential by weight        | Servers with different capacities
Least connections    | Fewest active sessions      | Variable session duration
IP hash              | Source IP determines server | Simple persistence without cookies
Random               | Random assignment           | Simple, roughly equal servers

Key exam facts — Network+

  • Load balancer distributes traffic across server pool behind a single virtual IP (VIP)
  • L4 LB: IP/port based; L7 LB: content-aware (URL, cookies, headers)
  • Health checks remove failed servers from rotation automatically
  • Session persistence (sticky sessions) ensures same client → same server
  • Round-robin: equal distribution; least connections: load-aware distribution
  • Active-passive: primary + standby failover; active-active: both serve traffic
  • Weighted algorithms accommodate servers with different capacities

Common exam traps

Load balancers eliminate the need for high-availability design

Load balancers improve availability and performance, but the back-end servers still need to be designed for redundancy. A load balancer itself can be a single point of failure, so load balancers are typically deployed in HA pairs.

Round-robin is always the best algorithm

Round-robin assumes equal server capacity and equal request processing time, so it performs poorly when requests vary significantly in resource consumption. Least connections or weighted algorithms are better suited to real-world workloads.

Practice questions — Load Balancing

These questions are representative of what you will see on the Network+ exam. The correct answer and explanation are shown immediately below each question.

Q1. A web application stores user session data in server memory. Which load balancing feature must be configured to prevent users from losing their session when requests are distributed?

A. Round-robin
B. Health checks
C. Session persistence (sticky sessions)
D. Least connections

Correct answer: C. Session persistence (sticky sessions) ensures all requests from the same client are directed to the same back-end server. This is essential for stateful applications that store session data locally — if a user is sent to a different server mid-session, the session data is missing and the user may be logged out or lose work.

Frequently asked questions — Load Balancing

What is the difference between a load balancer and a reverse proxy?

A reverse proxy sits in front of servers and forwards client requests to back-end servers — it can provide caching, SSL termination, compression, and security filtering. A load balancer is a specific type of reverse proxy focused on distributing traffic across multiple servers. Modern load balancers combine both functions: load distribution, SSL termination, health checking, and Layer 7 content routing.
