RELIANOID Load Balancer Enterprise Edition architecture internals in user and kernel space


Overview #

The aim of this article is to provide an architectural overview of the RELIANOID Load Balancer software internals, targeted at system administrators and software developers who want to know more about how the RELIANOID ADC software works. This information can also help with the configuration of production systems or with troubleshooting.

RELIANOID architecture #

RELIANOID manages processes in both user and kernel space, combining high performance with the flexibility needed to perform all the tasks delegated to an application delivery controller, such as load balancing, security, and high availability.

The diagram below gives a global view of the components that make up the RELIANOID system internally. Minor pieces have been omitted to offer a simpler, clearer view.

The following sections describe the different pieces and how they are interconnected.

RELIANOID Load Balancer in User Space #

The subsystems used in User Space are:

Web GUI: the web graphical user interface used to manage the configuration and administration of the whole system. It is served by an HTTPS web server and consumes the RELIANOID API for every action performed on the load balancer.

RELIANOID API: the RELIANOID Application Programming Interface, designed around REST and JSON and consumed over HTTPS. It is used by the different user-facing interfaces such as the web GUI and ZCLI (the RELIANOID command-line interface). The API checks every action against the RBAC subsystem and, if the action is allowed, applies it to the RELIANOID appliance. The API is able to connect to and manage any other userspace subsystem described in the diagram.
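
For illustration, the API can be consumed directly with any HTTPS client. Below is a minimal sketch in Python; the host, port, path, and key header are hypothetical placeholders, so check the RELIANOID API documentation for the actual endpoints and authentication scheme:

    import requests  # third-party HTTPS client

    # Hypothetical values: adjust host, port, key and path to your deployment.
    API_URL = "https://10.0.0.10:444/api/v4/farms"
    API_KEY = "changeme"

    # The web GUI and ZCLI consume this same REST/JSON interface over HTTPS.
    response = requests.get(
        API_URL,
        headers={"Key": API_KEY},   # assumed authentication header
        verify=False,               # lab only: self-signed certificate
    )
    print(response.json())          # farm definitions as JSON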

RBAC: Role-Based Access Control, an access control mechanism defined around users, groups, and roles. This module defines which actions a user is allowed to perform, with a high level of configurability across groups, users, and roles. It is fully integrated into the web GUI, which loads the web views according to the user's role. Additionally, this subsystem is consumed through the API and, therefore, by any other tool that uses the API.
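
Conceptually, every API call is resolved to an allow/deny decision derived from the user's role. The sketch below is purely illustrative of that idea; the names and data model are invented, not RELIANOID's internals:

    # Illustrative RBAC lookup: roles map to the actions they may perform.
    ROLE_PERMISSIONS = {
        "admin":    {"farm-create", "farm-delete", "farm-view"},
        "operator": {"farm-view"},
    }

    USER_ROLE = {"alice": "admin", "bob": "operator"}

    def is_allowed(user: str, action: str) -> bool:
        """Return True if the user's role grants the requested action."""
        role = USER_ROLE.get(user)
        return action in ROLE_PERMISSIONS.get(role, set())

    assert is_allowed("alice", "farm-delete")
    assert not is_allowed("bob", "farm-delete")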

LSLB – HTTP(S): the HTTP(S) profile of the LSLB module (Local Service Load Balancer) is executed in user space by a reverse proxy called Zproxy, which is able to manage high-throughput applications very efficiently. This subsystem is configured through the API and can be protected by the IPDS subsystem (using BlackLists, DoS rules, RBL, and WAF rulesets).
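
To picture what an HTTP(S) profile does, here is a toy reverse proxy in Python: it accepts a client request and replays it against a single hypothetical backend. This is only a conceptual sketch; Zproxy additionally handles backend pools, TLS offloading, persistence, and far higher concurrency:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import http.client

    BACKEND = ("192.168.1.10", 8080)  # hypothetical backend address

    class ToyProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Replay the client's request against the backend...
            conn = http.client.HTTPConnection(*BACKEND, timeout=5)
            conn.request("GET", self.path, headers=dict(self.headers))
            upstream = conn.getresponse()
            body = upstream.read()
            # ...and relay the backend's answer to the client.
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8000), ToyProxy).serve_forever()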

GSLB: the GSLB module (Global Service Load Balancer), implemented as a GSLB profile instance, is executed in user space by a DNS server process called Gdnsd, which works as an advanced DNS nameserver with load balancing features. This subsystem is configured through the API and can be protected by the IPDS subsystem (using BlackLists, DoS, and RBL).
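
The behaviour of the GSLB service can be observed by querying it directly as a nameserver. A sketch using the third-party dnspython package, with a hypothetical service record and GSLB address:

    import dns.resolver  # third-party package: dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["10.0.0.20"]  # hypothetical GSLB service IP

    # Repeated queries may return different answers depending on the
    # load balancing policy and on the health of each backend.
    answer = resolver.resolve("app.example.com", "A")
    for record in answer:
        print(record.address)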

Health Checks: this subsystem is configured through the API and used by all the load balancer modules (LSLB, GSLB, and DSLB) to check the health of the backends. Simple and advanced checks are executed against each backend; if a check fails, the backend is marked as down for the given farm and no more traffic is forwarded to it until the check succeeds again. Farm Guardian is responsible for these checks and is designed with a high level of flexibility and configurability.
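
A simple check of this kind can be reproduced with a plain TCP connect probe. The sketch below is a minimal illustration of the idea, not Farm Guardian itself, and the backend addresses are hypothetical:

    import socket

    BACKENDS = [("192.168.1.10", 80), ("192.168.1.11", 80)]  # hypothetical pool

    def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to the backend succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in BACKENDS:
        status = "up" if tcp_check(host, port) else "down"  # down => no traffic
        print(f"{host}:{port} is {status}")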

Configuration File System: this directory stores the system configuration; any change in it is replicated to the cluster, if such a service is enabled.

Nftlb: this userspace process is managed by the API subsystem and used for two main purposes: LSLB – L4xNAT management and configuration of the IPDS subsystem module.
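
Nftlb exposes a small JSON interface of its own, which is how the API subsystem drives it. A purely illustrative sketch follows; the local port, path, and key header are assumptions, so verify them against the nftlb documentation for your build:

    import requests

    # Assumed defaults: nftlb listens locally on an HTTP JSON interface
    # protected by a key header.
    NFTLB_URL = "http://127.0.0.1:5555/farms"
    NFTLB_KEY = "changeme"

    farms = requests.get(NFTLB_URL, headers={"Key": NFTLB_KEY}).json()
    print(farms)  # current L4 farm definitions as JSON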

RELIANOID Load Balancer in Kernel Space #

The subsystems used in Kernel Space are:

Netfilter System LSLB L4xNAT: the Netfilter subsystem is used by Nftlb for load balancing purposes. Nftlb loads Netfilter rules into the kernel to build a high-performance L4 load balancer, managing traffic packets as efficiently as possible. Additionally, Nftlb loads Netfilter rules for intrusion prevention and protection (BlackLists, RBL, and DoS).
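
The kind of rule Nftlb maintains can be approximated by hand with the nft tool. The following sketch drives nft from Python to build a naive round-robin DNAT rule; the table name, chain, and backend IPs are illustrative, and real Nftlb rulesets are considerably more elaborate:

    import subprocess

    def nft(*args: str) -> None:
        """Run an nft command, raising if it fails (requires root)."""
        subprocess.run(["nft", *args], check=True)

    # Illustrative table/chain names and backend IPs.
    nft("add", "table", "ip", "demo-lb")
    nft("add", "chain", "ip", "demo-lb", "prerouting",
        "{ type nat hook prerouting priority -100 ; }")
    # Spread connections across two backends with a numgen round robin.
    nft("add", "rule", "ip", "demo-lb", "prerouting",
        "tcp", "dport", "80",
        "dnat", "to", "numgen", "inc", "mod", "2",
        "map", "{ 0 : 192.168.1.10, 1 : 192.168.1.11 }")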

IPDS BlackLists: this subsystem is integrated into the Netfilter system and managed by Nftlb. It is composed of a group of rules, placed before the load balancer rules, that drop connections from the given origin IPs. Internally it creates a set of rules ordered by category, country, type of attacker, etc., updated on a daily basis.
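
In nftables terms, a blacklist maps naturally onto a named set with a drop rule evaluated before the load balancing rules. A hand-rolled sketch of the concept, reusing the hypothetical demo-lb table from the previous sketch (the real IPDS categorized lists are generated and refreshed automatically):

    import subprocess

    def nft(*args: str) -> None:
        subprocess.run(["nft", *args], check=True)  # requires root

    # A named set holds the banned origin IPs; intervals allow whole ranges.
    nft("add", "set", "ip", "demo-lb", "blacklist",
        "{ type ipv4_addr ; flags interval ; }")
    nft("add", "element", "ip", "demo-lb", "blacklist",
        "{ 203.0.113.0/24 }")
    # Drop matching sources before any load balancing rule sees them.
    nft("insert", "rule", "ip", "demo-lb", "prerouting",
        "ip", "saddr", "@blacklist", "drop")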

IPDS RBL: like the previous subsystem, it is integrated into Netfilter and managed by Nftlb. The origin IP is captured before the connection is established and validated against an external DNS service. If the IP resolves, it is marked as malicious and the connection is dropped.
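
This DNS-based check is the classic DNSBL scheme: the client IP is reversed, prepended to a blacklist zone, and resolved; a successful resolution means the IP is listed. A standalone sketch with a hypothetical RBL zone:

    import socket

    def is_listed(client_ip: str, rbl_zone: str = "rbl.example.com") -> bool:
        """Return True if the client IP resolves inside the RBL zone."""
        reversed_ip = ".".join(reversed(client_ip.split(".")))
        query = f"{reversed_ip}.{rbl_zone}"
        try:
            socket.gethostbyname(query)   # an answer means "listed"
            return True
        except socket.gaierror:           # NXDOMAIN means "clean"
            return False

    if is_listed("198.51.100.7"):
        print("listed: the connection would be dropped")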

IPDS DoS: this module uses the same configuration system as the two previous ones, integrated into Netfilter and managed by Nftlb. It is a set of rules, placed before the load balancing rules, that check whether packets are part of a Denial of Service attack. Rules are applied to the packet flow to intercept the attack before it succeeds.
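
A typical rule of this family is a rate limit on new connections. A sketch using nftables' ct state and limit expressions, again reusing the hypothetical demo-lb table and with an invented threshold:

    import subprocess

    def nft(*args: str) -> None:
        subprocess.run(["nft", *args], check=True)  # requires root

    # Drop flows opening new connections faster than the threshold,
    # evaluated before the load balancing rules.
    nft("insert", "rule", "ip", "demo-lb", "prerouting",
        "tcp", "dport", "80", "ct", "state", "new",
        "limit", "rate", "over", "10/second", "drop")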

Connection tracking system: this system is used by the Netfilter subsystem for connection management, network address translation, and the statistics module, as well as by the health check subsystem to force connection actions as soon as an issue is detected in a backend. The connection tracking system is also used by the clustering service to forward the connection state to the second node of the cluster: if the master node fails, the second node can manage the traffic with the same connection state as the previous master.
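
The tracked connections can be inspected from userspace with the conntrack tool (from the conntrack-tools package); the entries listed are comparable to the state that the cluster service synchronizes to the backup node. A sketch that lists the current entries (requires root):

    import subprocess

    # List the kernel's tracked connections; each line carries the state
    # (e.g. ESTABLISHED), addresses, ports and counters per flow.
    result = subprocess.run(
        ["conntrack", "-L"], capture_output=True, text=True, check=True
    )
    for line in result.stdout.splitlines()[:10]:  # first few entries
        print(line)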

Routing System and DSLB: these subsystems are managed through the API and configured in kernel space. The routing subsystem is built with iproute2, which allows multiple routing tables to be managed and avoids maintaining a complex ruleset for static routing. In addition, iproute2 is the basis of the DSLB (Datalink Service Load Balancer) module, which provides load balancing of uplinks with several gateways.
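
The uplink balancing provided by DSLB rests on standard iproute2 primitives such as multipath routes and policy rules. A sketch of the underlying commands driven from Python, with hypothetical gateways, weights, and table number:

    import subprocess

    def ip(*args: str) -> None:
        subprocess.run(["ip", *args], check=True)  # requires root

    # A dedicated routing table keeps uplink rules apart from the main table.
    ip("route", "add", "default", "table", "100",
       "nexthop", "via", "10.0.0.1", "weight", "1",   # hypothetical uplink A
       "nexthop", "via", "10.0.1.1", "weight", "2")   # hypothetical uplink B
    # Steer traffic from an internal network through that table.
    ip("rule", "add", "from", "192.168.0.0/24", "table", "100")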

At the time of writing, RELIANOID 6 is in production, so these subsystems could evolve in future versions to offer better performance or more features.

Additional documentation #

RELIANOID zproxy benchmarks, LSLB – HTTP(S) profile
RELIANOID nftlb benchmarks, LSLB – L4xNAT profile
