System | Cluster

This section allows you to manage the clustering service, which ensures high availability for load balancing through two collaborative nodes in active-passive mode.

A cluster consists of 2 nodes working together to maintain service availability and avoid downtime from the client’s perspective. Typically, these nodes assume master and backup roles in active-passive mode. The master node manages traffic to the backends and handles all client connections. The backup node continuously updates its configuration in real-time and can take over if the master node becomes unresponsive.

Requirements for Creating a Cluster #

Both nodes must run the same Relianoid version (i.e., same appliance model).
Each node should have a unique hostname.
Both nodes should have identical NIC names (network interfaces).
Configuration changes must be made only on the master node, never on the backup node.
Intermediate switching and routing devices may need configuration to prevent conflicts with cluster switching.
Setting a floating IP is recommended to avoid service downtime during a cluster switch.
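
As a quick sanity check before pairing two appliances, a small script along the lines of the sketch below can compare hostnames and NIC names over SSH. It is only an illustration: the IP addresses are placeholders, root SSH key access is assumed, and the RELIANOID version still has to be verified separately (for example, from the web GUI).

```python
#!/usr/bin/env python3
"""Pre-flight check before creating a cluster: compare hostnames and NIC
names of the two candidate nodes over SSH (assumes root SSH key access)."""

import subprocess

NODES = ["192.168.100.241", "192.168.100.242"]  # placeholder management IPs


def ssh(host: str, command: str) -> str:
    """Run a command on a remote node via SSH and return its output."""
    result = subprocess.run(
        ["ssh", f"root@{host}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def main() -> None:
    hostnames = {host: ssh(host, "hostname") for host in NODES}
    nics = {host: sorted(ssh(host, "ls /sys/class/net").split()) for host in NODES}

    # Hostnames must be unique across the cluster.
    if hostnames[NODES[0]] == hostnames[NODES[1]]:
        print("FAIL: both nodes share the same hostname:", hostnames[NODES[0]])
    else:
        print("OK: hostnames are unique:", hostnames)

    # NIC names must match so the replicated network configuration applies cleanly.
    if nics[NODES[0]] == nics[NODES[1]]:
        print("OK: NIC names match:", nics[NODES[0]])
    else:
        print("FAIL: NIC names differ:", nics)

    # Also confirm manually that both appliances run the same RELIANOID version.


if __name__ == "__main__":
    main()
```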

When load balancing services switch from one node to another, the backup node manages current connections and service status to prevent client interruptions.

Cluster Service Components #

This is the main page for configuring the Cluster. The clustering service includes several components:

Synchronization. Automatically synchronizes configuration from the master to the backup node using inotify and rsync over SSH. Any change in the master node’s filesystem that affects the virtual services, or the configuration shared by them, is automatically replicated to the backup node.
Heartbeat. Monitors the health of the cluster nodes using the VRRP protocol over multicast, handled by keepalived.
Connection Tracking. Replicates connection states in real-time to allow seamless failover without disrupting client or backend connections, using the conntrackd service.
Command Replication. Sends and activates configurations from the master to the backup node through zclustermanager via SSH.
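
To make the synchronization component more concrete, the sketch below mimics what a single replication pass looks like: pushing a configuration directory from the master to the backup with rsync over SSH. The configuration path and the backup address are placeholders, and the actual cluster service reacts to filesystem events through inotify and keeps its own file list, so treat this purely as an illustration of the mechanism.

```python
#!/usr/bin/env python3
"""Illustration of one configuration replication pass from master to backup,
in the spirit of the cluster's inotify + rsync over SSH synchronization.
The path and the backup IP below are placeholders, not the product's own."""

import subprocess

BACKUP_NODE = "192.168.100.242"              # placeholder backup node IP
CONFIG_DIR = "/usr/local/relianoid/config/"  # assumed config path, adjust to your setup


def replicate_config() -> None:
    """Mirror the local configuration directory to the backup node."""
    subprocess.run(
        [
            "rsync",
            "-az",        # archive mode, compressed transfer
            "--delete",   # drop files on the backup that no longer exist on the master
            "-e", "ssh",  # tunnel the transfer through SSH
            CONFIG_DIR,
            f"root@{BACKUP_NODE}:{CONFIG_DIR}",
        ],
        check=True,
    )


if __name__ == "__main__":
    replicate_config()
    print("Configuration pushed to", BACKUP_NODE)
```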

The node where the Cluster is configured becomes the master node. Warning: Any previous configuration on the backup node will be erased, resulting in the loss of any Farms (including certificates), Virtual Interfaces, IPDS rules, etc.

Data Replicated in a Cluster #

In a clustering environment, data replication ensures that the configuration and state of the services are consistent between the master and backup nodes. Below are the specific elements that are replicated across the nodes:

  • LSLB (Local Server Load Balancer), GSLB (Global Server Load Balancer), DSLB (Data Server Load Balancer) Services. These services are critical for load balancing and are replicated to ensure that failover does not disrupt traffic distribution, including configuration, sessions and traffic states.
  • Static Routing. Routes defined statically are replicated to maintain consistent network paths.
  • RBAC (Role-Based Access Control) Settings. User roles, permissions, and related settings are replicated to ensure security policies are consistent across nodes.
  • Users, Groups, and Permissions. User accounts, group memberships, and associated permissions are synchronized.
  • SSL Certificates and Let’s Encrypt Configuration. SSL certificates, including those managed by Let’s Encrypt, are replicated to secure communications and services.
  • Virtual, VLANs, Bonding, and Floating Interfaces. Network interface configurations such as virtual interfaces, VLANs, bonded interfaces, and floating IPs are replicated.
  • VPN Services. VPN configurations and states are synchronized to maintain secure connections.
  • IPDS (Intrusion Prevention and Detection System) Rules and Files. IPDS rules and associated files are replicated for consistent security monitoring.
  • Farmguardians Configuration. Monitoring and health check configurations for farms (groups of servers) are replicated.
  • Notification Settings. Settings for system notifications are synchronized to ensure alerts are consistent.

Non-Replicated Data in a Cluster #

  • Physical NIC Interfaces. Physical network interface card (NIC) configurations are not replicated because they are hardware-specific.
  • Default Gateways. Default gateway settings are local to each node and are not replicated.
  • Local and Remote Services Configuration. Services such as DNS, NTP (Network Time Protocol), and SNMP (Simple Network Management Protocol) are configured locally and are not replicated.
  • API Keys. API keys used for accessing services are not replicated for security reasons.
  • Activation Certificates. Activation certificates for software licensing are not replicated.
  • Installed Packages. Software packages installed on the nodes are not synchronized, as they can vary between nodes.
  • Backups. Generated backups are specific to each node and, therefore, are not replicated across nodes.

Configure Cluster Service #

[Figure: RELIANOID load balancer v8 cluster configuration]

Local IP. Select from available network interfaces for cluster management (no virtual interfaces allowed).
Remote IP. IP address of the future backup node.
Remote node Password. Root password for the remote (future backup) node.
Confirm remote node Password. Re-enter the root password of the remote (future backup) node to confirm it.

After entering the necessary parameters, click the Apply button.
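
If you prefer to automate this step rather than using the web GUI, and your appliance exposes an HTTPS management API, a request along the lines of the sketch below could create the cluster. The port, endpoint path, header name, and field names are assumptions modeled on a typical ZAPI-style layout, not the documented interface; check them against the API reference of your RELIANOID version before use.

```python
#!/usr/bin/env python3
"""Hypothetical example: create the cluster through the appliance's HTTPS API
instead of the web GUI. Port, endpoint path, header, and field names are
assumptions -- verify them against your version's API documentation."""

import requests

MASTER = "https://192.168.100.241:444"       # assumed API address of the future master
API_KEY = "<your-api-key>"                   # API keys are per node and not replicated

payload = {
    "local_ip": "192.168.100.241",           # Local IP: NIC used for cluster management
    "remote_ip": "192.168.100.242",          # Remote IP: the future backup node
    "remote_password": "<remote-root-pass>"  # root password of the future backup node
}

response = requests.post(
    f"{MASTER}/zapi/v4.0/zapi.cgi/system/cluster",  # hypothetical endpoint path
    headers={"ZAPI_KEY": API_KEY, "Content-Type": "application/json"},
    json=payload,
    verify=False,  # only if the appliance still uses a self-signed certificate
    timeout=30,
)
response.raise_for_status()
print(response.json())
```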

Show Cluster Service #

If the cluster service is configured and active, it displays the following information about services, backends, and actions:

[Figure: RELIANOID load balancer v8 cluster list]

Interface. Network interface where cluster services are configured.
Failback. Option to return load balancing services to the master when it becomes available again.
Check Interval. Time interval for heartbeat checks between nodes.
Tracked interfaces. Active network interfaces being monitored in real-time.

Cluster Actions #

Show Nodes. Displays the status of nodes.
Edit. Modify configuration settings.
Destroy. Remove the cluster configuration from the nodes.

The Show Nodes action displays a table with:

[Figure: RELIANOID load balancer v8 cluster show nodes, maintenance]

Node. Indicates whether the node is local or remote, relative to the node whose web GUI you are connected to: local is the node you are currently connected to, and remote is the other one.
Role. Shows whether the node is master (currently serving the load balancing services), backup, or maintenance (temporarily disabled).
IP. IP address of each node.
Hostname. Hostname of each node.
Status. Node status, indicated by:

  • Red. Failure.
  • Grey. Unreachable.
  • Orange. Maintenance mode.
  • Green. Operational.

Message. Debug messages from each node.
Actions. Options available for each node:

  • Enable Maintenance. Temporarily disables a node for maintenance to avoid failover.
  • Disable Maintenance. Enables the node again.

Cluster Settings #

Global setting options available:

[Figure: RELIANOID load balancer cluster edit settings]

Failback. Select which load balancer is preferred as the master.
Track Interfaces to monitor. Monitor specific interfaces (e.g., LAN or VLAN).
Check Interval. Time between health checks from the backup node to the master.

[Figure: RELIANOID load balancer cluster edit options: failover, interface, timeout]

Click the Apply button to save the changes.
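
To see how Check Interval and Failback interact, the toy simulation below models the backup node’s decision loop: it promotes itself after missing a number of consecutive heartbeats and, when failback is enabled, hands the services back once the preferred master reappears. This is a conceptual sketch only; in the real cluster the role election is handled by VRRP via keepalived, and the miss threshold used here is an assumption for illustration.

```python
"""Toy model of the backup node's failover/failback logic.
Conceptual only: the real cluster relies on VRRP (keepalived) for election."""

from dataclasses import dataclass

CHECK_INTERVAL = 2  # seconds between heartbeat checks (the "Check Interval" setting)
MISSED_LIMIT = 3    # assumed number of missed heartbeats before taking over


@dataclass
class BackupNode:
    failback: bool      # return services to the preferred master when it recovers
    role: str = "backup"
    missed: int = 0

    def on_check(self, master_alive: bool) -> None:
        """Run once every CHECK_INTERVAL seconds."""
        if master_alive:
            self.missed = 0
            if self.role == "master" and self.failback:
                # Preferred master is back: hand the services over again.
                self.role = "backup"
        else:
            self.missed += 1
            if self.missed >= MISSED_LIMIT and self.role == "backup":
                # Master unresponsive for MISSED_LIMIT checks: take over the services.
                self.role = "master"


if __name__ == "__main__":
    node = BackupNode(failback=True)
    # Master healthy, then down for three checks, then back again.
    for alive in [True, False, False, False, True]:
        node.on_check(alive)
        print(f"master_alive={alive} -> backup node role: {node.role}")
```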

Cluster Maintenance and Updates #

To perform maintenance or update a Relianoid cluster with minimal downtime, please refer to the detailed guide Update RELIANOID Cluster with minimum downtime.
