Deep dive into Network Load Balancing and Proxying

22 March, 2024 | Technical

Load balancing is crucial for building reliable distributed systems, optimizing workload allocation across various computing resources like computers, clusters, and network links. Its aim is to enhance resource utilization, maximize throughput, minimize response time, and prevent overload of any single resource. Utilizing multiple components with load balancing increases reliability and availability through redundancy. Typically, load balancing involves specialized software or hardware, such as a multilayer switch or a Domain Name System server process.

At its core, a load balancer sits between clients and backends, performing key functions:

Service Discovery

Identifying available backends and their addresses for communication.

Health Checking

Evaluating the health and readiness of backends to accept requests.

Load Balancing

Distributing individual requests across healthy backends using a balancing algorithm such as round robin, least connections, or consistent hashing (a minimal sketch of these three functions follows the list of advantages below).

Leveraging load balancing in distributed systems offers several advantages:

Name Abstraction

Clients can address the load balancer instead of knowing every backend, delegating name resolution.

Fault Tolerance

Through health checks and algorithms, a load balancer can route around issues with malfunctioning or overloaded backends, allowing operators to address issues at their convenience.

Performance and Cost Benefits

Load balancing can localize request traffic within zones, reducing latency and minimizing overall system costs by optimizing bandwidth consumption.
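To make these three functions concrete, here is a minimal sketch in Go, assuming a static backend list, hypothetical addresses, and a plain /healthz HTTP probe (none of which come from a particular product): service discovery is reduced to a hard-coded slice, health checking to a periodic probe loop, and load balancing to round robin over the healthy set. A production balancer would integrate with real service discovery and far more robust health checking.

    package main

    import (
        "net/http"
        "sync"
        "sync/atomic"
        "time"
    )

    // Backend is one candidate server found via service discovery.
    // Here the list is static; a real balancer would query DNS, a registry, etc.
    type Backend struct {
        Addr    string
        healthy atomic.Bool
    }

    // Balancer holds the discovered backends and a round-robin counter.
    type Balancer struct {
        mu       sync.RWMutex
        backends []*Backend
        next     uint64
    }

    // healthCheck periodically probes a hypothetical /healthz endpoint and
    // marks each backend as ready (or not) to receive traffic.
    func (b *Balancer) healthCheck(interval time.Duration) {
        for {
            b.mu.RLock()
            backends := b.backends
            b.mu.RUnlock()
            for _, be := range backends {
                resp, err := http.Get("http://" + be.Addr + "/healthz")
                ok := err == nil && resp.StatusCode == http.StatusOK
                if resp != nil {
                    resp.Body.Close()
                }
                be.healthy.Store(ok)
            }
            time.Sleep(interval)
        }
    }

    // Pick performs the load balancing step: round robin over the
    // backends that currently pass their health checks.
    func (b *Balancer) Pick() *Backend {
        b.mu.RLock()
        defer b.mu.RUnlock()
        for i := 0; i < len(b.backends); i++ {
            n := atomic.AddUint64(&b.next, 1)
            be := b.backends[n%uint64(len(b.backends))]
            if be.healthy.Load() {
                return be
            }
        }
        return nil // no healthy backend available
    }

    func main() {
        b := &Balancer{backends: []*Backend{
            {Addr: "10.0.0.1:8080"}, // hypothetical backend addresses
            {Addr: "10.0.0.2:8080"},
        }}
        go b.healthCheck(5 * time.Second)
        time.Sleep(6 * time.Second) // let the first health check pass run
        if be := b.Pick(); be != nil {
            // A request would now be forwarded to be.Addr.
            println("picked", be.Addr)
        }
    }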

Load balancer versus proxy

In discussions about network load balancers, the terms “load balancer” and “proxy” are often used interchangeably in the industry. This post treats the terms as generally equivalent: not every proxy is a load balancer, but the vast majority of commonly deployed proxies perform load balancing as a primary function.

L4 Load Balancing

In today’s load balancing discourse within the industry, solutions are commonly divided into two main categories: L4 and L7. These refer to layers 4 and 7 of the OSI model, respectively. However, while this model provides a framework, it doesn’t fully capture the complexity of modern load balancing solutions.

L4 load balancers typically operate at the TCP/UDP connection/session level, essentially shuffling bytes between client and backend so that they reach the correct destination. They do not interpret the application-level contents of those bytes, which might belong to any protocol, be it HTTP, Redis, MongoDB, or something else entirely.
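To illustrate how little an L4 load balancer needs to understand about the traffic it carries, here is a bare-bones sketch in Go of a TCP proxy with a single hard-coded, hypothetical backend. It simply copies bytes in both directions; whether the payload is HTTP, Redis, MongoDB, or anything else is irrelevant to it. A real L4 balancer would, of course, choose among many backends and handle far more edge cases.

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        // Hypothetical addresses: listen on :9000 and forward everything to a
        // single backend. A real L4 balancer would pick a backend per connection.
        const backendAddr = "10.0.0.1:6379"

        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go func(client net.Conn) {
                defer client.Close()
                backend, err := net.Dial("tcp", backendAddr)
                if err != nil {
                    log.Print(err)
                    return
                }
                defer backend.Close()
                // Shuffle bytes in both directions without ever inspecting them.
                go io.Copy(backend, client)
                io.Copy(client, backend)
            }(client)
        }
    }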

L7 Load Balancing

Modern protocols are moving towards multiplexing and long-lived persistent connections for efficiency, not least because of the overhead of establishing encrypted TLS connections. This exposes a growing mismatch with L4 load balancers: because they balance whole connections rather than individual requests, a single long-lived multiplexed connection can funnel a disproportionate share of requests onto one backend. L7 load balancers address this by inspecting application traffic and balancing at the request level, which brings significant additional benefits.

Even within L7, load balancing spans several sublayers of abstraction, another sign that the OSI model is only a rough guide. For HTTP traffic, for example, these sublayers include:
Optional Transport Layer Security (TLS), which we’ll consider L7 for this discussion despite ongoing debates among networking experts.
The physical HTTP protocol (HTTP/1 or HTTP/2).
The logical HTTP protocol, including headers, body data, and trailers.
Messaging protocols like gRPC, REST, etc.

Sophisticated L7 load balancers may offer features for each of these sublayers, while others may focus on a smaller subset, still placing them within the L7 category. Compared to the L4 category, the landscape of L7 load balancers is much more intricate from a feature perspective. And it’s important to note that this discussion has focused solely on HTTP; other L7 application protocols such as Redis, Kafka, MongoDB, etc., also benefit from L7 load balancing.
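To contrast with L4, here is a minimal L7 sketch in Go that terminates HTTP and routes each request to a different backend pool based on the URL path; the pool addresses and the path-based rule are assumptions for illustration only. This kind of per-request, application-aware routing is exactly what an L4 balancer cannot do, and it is the simplest of the sublayer features listed above.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // newProxy builds a reverse proxy to a single hypothetical backend pool.
    func newProxy(rawURL string) *httputil.ReverseProxy {
        target, err := url.Parse(rawURL)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(target)
    }

    func main() {
        // Hypothetical backend pools: one for the API, one for static assets.
        api := newProxy("http://10.0.1.10:8080")
        static := newProxy("http://10.0.2.10:8080")

        mux := http.NewServeMux()
        // Route on the logical HTTP layer (the request path), which is only
        // visible because the balancer parses the HTTP protocol itself.
        mux.Handle("/api/", api)
        mux.Handle("/", static)

        log.Fatal(http.ListenAndServe(":8080", mux))
    }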

Relevance of L4 Load Balancers

Despite the anticipation that L7 load balancers will eventually replace L4 load balancers for service-to-service communication, L4 load balancers remain highly relevant, especially at the edge of large distributed architectures.

Placing dedicated L4 load balancers before L7 load balancers at the edge offers several advantages:
Because L7 load balancers perform sophisticated analysis, transformation, and routing of application traffic, they can absorb substantially less raw traffic (measured in packets and bytes per second) than an optimized L4 load balancer. This makes L4 load balancers better placed to mitigate certain types of DoS attacks, such as SYN floods and generic packet flood attacks.
L7 load balancers tend to be under more active development, are deployed more frequently, and consequently exhibit more bugs than L4 load balancers. Having an L4 load balancer in front that can health check and drain traffic makes rolling out new L7 load balancer deployments much simpler.

Due to the complexity of L7 load balancers’ functionality, they are more prone to bugs. Having an L4 load balancer that can route around failures and anomalies contributes to a more stable overall system.

Load Balancer Topologies

The middle proxy topology is often the most straightforward to implement. However, it is a potential single point of failure, a scaling bottleneck, and a black box that clients cannot easily introspect.
The edge proxy topology shares the same drawbacks as the middle proxy, but it is usually unavoidable: Internet-facing clients such as browsers and mobile apps cannot be trusted to run load balancing logic themselves, so some exposed entry point into the system is required.
The embedded client library topology boasts superior performance and scalability. Nevertheless, it requires an implementation in every programming language used by the organization and necessitates library upgrades across all services (a rough client-side sketch follows this list).
The sidecar proxy topology may not match the performance of the embedded client library topology but is free from its limitations.
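As a rough illustration of the embedded client library topology, the sketch below builds per-request backend selection directly into the calling process, so no proxy hop is involved. The backend list and the round-robin policy are assumptions for illustration; real client libraries (gRPC's client-side load balancing, Finagle, and similar) add service discovery, health checking, and richer policies, which is precisely why they must be reimplemented in every language and upgraded across all services.

    package main

    import (
        "fmt"
        "net/http"
        "sync/atomic"
    )

    // LBClient embeds load balancing directly in the calling service:
    // there is no proxy hop, the client itself picks a backend per request.
    type LBClient struct {
        backends []string // hypothetical backend addresses
        next     uint64
        client   http.Client
    }

    // Get issues an HTTP GET against the next backend in round-robin order.
    func (c *LBClient) Get(path string) (*http.Response, error) {
        n := atomic.AddUint64(&c.next, 1)
        backend := c.backends[n%uint64(len(c.backends))]
        return c.client.Get("http://" + backend + path)
    }

    func main() {
        c := &LBClient{backends: []string{"10.0.0.1:8080", "10.0.0.2:8080"}}
        resp, err := c.Get("/api/v1/status")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("backend responded with status:", resp.Status)
    }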

Global Load Balancing

The evolution of load balancing will see a shift towards treating individual load balancers as standardized commodities. The true innovation and commercial potential will be concentrated in the control plane. Global load balancers will increasingly possess capabilities beyond those of any single load balancer. For instance:
Automatically identifying and rerouting traffic around failures in specific zones.
Implementing global security protocols and routing policies.
Identifying and mitigating irregular traffic patterns, such as DDoS attacks, using machine learning and neural networks.
Offering centralized user interfaces and visualizations for comprehensive understanding and management of the entire distributed system.

To enable global load balancing, the load balancers functioning as the data plane must exhibit advanced dynamic configuration capabilities.
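As a minimal sketch of what dynamic configuration can mean in practice, the Go program below lets a hypothetical control plane push weighted backend sets to the data plane over a simple JSON endpoint, and the proxy applies the new routing state atomically without a restart. The endpoint, port, and config shape are invented for illustration; production systems use richer APIs (Envoy's xDS, for example), but the interaction follows the same pattern.

    package main

    import (
        "encoding/json"
        "log"
        "math/rand"
        "net/http"
        "sync/atomic"
    )

    // RoutingConfig is the state the control plane owns and the data plane applies.
    type RoutingConfig struct {
        Backends []struct {
            Addr   string `json:"addr"`
            Weight int    `json:"weight"`
        } `json:"backends"`
    }

    var current atomic.Pointer[RoutingConfig]

    // updateHandler lets the (hypothetical) control plane push a new config;
    // the data plane swaps it in atomically, with no restart or reload.
    func updateHandler(w http.ResponseWriter, r *http.Request) {
        var cfg RoutingConfig
        if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        current.Store(&cfg)
        w.WriteHeader(http.StatusNoContent)
    }

    // pick chooses a backend in proportion to its configured weight.
    func pick() string {
        cfg := current.Load()
        if cfg == nil || len(cfg.Backends) == 0 {
            return ""
        }
        total := 0
        for _, b := range cfg.Backends {
            total += b.Weight
        }
        if total <= 0 {
            return cfg.Backends[0].Addr
        }
        n := rand.Intn(total)
        for _, b := range cfg.Backends {
            if n < b.Weight {
                return b.Addr
            }
            n -= b.Weight
        }
        return cfg.Backends[0].Addr
    }

    func main() {
        http.HandleFunc("/config", updateHandler)
        http.HandleFunc("/route", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(pick()))
        })
        log.Fatal(http.ListenAndServe(":9901", nil))
    }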

Some conclusions

Load balancers play a crucial role in contemporary distributed systems. They are typically categorized into two classes: L4 and L7. Both L4 and L7 load balancers hold significance in modern architectures. L4 load balancers are evolving towards horizontally scalable distributed consistent hashing solutions, while L7 load balancers are currently experiencing substantial investment, driven by the widespread adoption of dynamic microservice architectures.
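As a brief illustration of the consistent hashing direction mentioned above, here is a compact Go sketch of a hash ring with virtual nodes; the FNV hash, the virtual node count, and the sample addresses are arbitrary choices. The property that matters is that adding or removing one balancer or backend remaps only a small fraction of flows, which is what makes horizontally scaled L4 fleets practical.

    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
        "strconv"
    )

    // Ring is a minimal consistent hash ring with virtual nodes.
    type Ring struct {
        points []uint32          // sorted hash points on the ring
        owner  map[uint32]string // hash point -> backend address
    }

    func hashKey(s string) uint32 {
        h := fnv.New32a()
        h.Write([]byte(s))
        return h.Sum32()
    }

    // NewRing places vnodes points on the ring for each backend.
    func NewRing(backends []string, vnodes int) *Ring {
        r := &Ring{owner: make(map[uint32]string)}
        for _, b := range backends {
            for i := 0; i < vnodes; i++ {
                p := hashKey(b + "#" + strconv.Itoa(i))
                r.points = append(r.points, p)
                r.owner[p] = b
            }
        }
        sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
        return r
    }

    // Pick maps a flow key (e.g. client IP and port) to the first point
    // clockwise on the ring, so most keys keep their backend when the
    // backend set changes slightly.
    func (r *Ring) Pick(key string) string {
        h := hashKey(key)
        i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
        if i == len(r.points) {
            i = 0
        }
        return r.owner[r.points[i]]
    }

    func main() {
        ring := NewRing([]string{"10.0.0.1", "10.0.0.2", "10.0.0.3"}, 100)
        fmt.Println(ring.Pick("203.0.113.7:41000")) // same key -> same backend
    }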

The future of load balancing lies in global load balancing and the division between the control plane and the data plane. This is where most of the forthcoming innovation and commercial prospects will emerge.

The industry is rapidly transitioning towards utilizing open-source software (OSS) and commodity hardware for networking solutions.

Conventional load balancing vendors will be among the first to be displaced by OSS solutions and cloud providers.

RELIANOID offers a multilayered Application Delivery Controller suited to a wide range of needs: a complete ADC solution that can act as a load balancer and highly available service at different layers (L2, L3, L4, and L7), delivered on-premises, as a virtual appliance, or in the cloud.
