Cluster configuration in RELIANOID Community Edition v7


Important note: noid-cluster-notify can also be found as zeninotify. Please refer to: https://www.relianoid.com/resources/knowledge-base/enterprise-edition-v8-administration-guide/whats-new-in-relianoid-ee-v8/

What is RELIANOID Load Balancer Cluster Community Edition? #

The high availability service provided by RELIANOID Load Balancer Community Edition is a stateless cluster, automatically included in the default Community Edition version. This service adeptly replicates configuration files across nodes, employing the VRRP protocol to monitor node health, all within a straightforward design. For those seeking a more advanced, feature-rich stateful cluster service, the Enterprise Edition is available.

The following steps outline the installation and configuration process for the RELIANOID Cluster when high availability is essential for your Load Balancer.

Concepts #

Cluster Node #

A cluster node is a single computing device within a cluster, which is a group of interconnected computers or servers that work together as if they were a single system. Each node in a cluster typically has its own processing power, memory, and storage, and the nodes communicate with each other over a network to share resources and coordinate their activities. In RELIANOID, the cluster nodes are load balancer instances configured to operate within the cluster service.

Floating interfaces #

A floating IP address is an IP address that can be rapidly reassigned from one node in a cluster to another. This is commonly used in high-availability setups where multiple servers or nodes are running identical services, and if one fails, the IP address “floats” to another node so that service can continue uninterrupted.

Heartbeat #

Heartbeat refers to a mechanism used by nodes within the cluster to communicate their status and health to each other. This heartbeat signal indicates that a node is operational and functioning properly. The absence of a heartbeat from a node may indicate a failure or an issue with that node.

Synchronization #

Synchronization of configuration refers to the process of ensuring that the configuration settings across all nodes in the cluster are consistent and up to date. This is essential for maintaining the integrity and proper functioning of the cluster, especially when multiple nodes are involved in providing a service or application.

Failover #

Failover refers to the process of automatically rerouting or redirecting workloads, services, or resources from a failed or unavailable node to a healthy and available node within the cluster.

Cluster Setup #

Requirements #

To begin, install two instances of RELIANOID CE, ensuring that both are running the same version.
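For instance, you can confirm that the installed versions match on both nodes; the command below assumes a Debian-based system where the software is packaged as relianoid:

root@noid-ce-01:~# dpkg -l relianoid
root@noid-ce-02:~# dpkg -l relianoid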

Next, verify that the NTP service is correctly configured on both nodes and that the NTP servers are reachable from the load balancers. Synchronizing the system time is essential for the proper functioning of the VRRP protocol.
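As a quick check on each node, timedatectl reports whether the system clock is synchronized (assuming systemd-timesyncd or a compatible NTP client, as found on Debian-based systems):

root@noid-ce-01:~# timedatectl | grep -i synchronized
System clock synchronized: yes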

Additionally, to automate synchronization, it is essential to configure passwordless SSH login between the load balancers. Set up remote access keys, for instance using the ssh-copy-id command:

root@noid-ce-01:~# ssh-keygen -t rsa # without passphrase, just press Enter
root@noid-ce-01:~# ssh-copy-id root@noid-ce-02

Then, in the secondary node:

root@noid-ce-02:~# ssh-keygen -t rsa # without passphrase, just press Enter
root@noid-ce-02:~# ssh-copy-id root@noid-ce-01
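To verify that passwordless access works in both directions, run a remote command from each node; it should complete without prompting for a password:

root@noid-ce-01:~# ssh root@noid-ce-02 hostname
noid-ce-02
root@noid-ce-02:~# ssh root@noid-ce-01 hostname
noid-ce-01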

Configuration #

Upon completing the installation, proceed to configure the cluster service using the following steps.

Edit the configuration file located at /usr/local/relianoid/app/ucarp/etc/cluster.conf. The essential parameters are outlined below:

#interface used for the cluster, where local_ip and remote_ip are configured
$interface="eth0";

#local IP to be monitored, e.g. 192.168.0.101
$local_ip="192.168.101.242";

#remote IP to be monitored, e.g. 192.168.0.102
$remote_ip="192.168.101.243";

#password used for VRRP protocol communication
$password="secret";

#unique value for the VRRP cluster in the network
$cluster_id="1";

#virtual IP used in the cluster; this IP always runs on the master node
$cluster_ip="192.168.101.244";

#if the NIC used for the cluster is different from eth0, change the exclude line below accordingly
$exclude="--exclude if_eth0_conf";

Note that only virtual interfaces are replicated. If your load balancing services involve multiple NICs or VLANs, those interfaces must be excluded in the cluster configuration file. For instance, if eth0 is designated for cluster purposes and vlan100 (eth0.100) for load balancing, adjust the configuration as follows:

$exclude="--exclude if_eth0_conf --exclude if_eth0.100_conf";
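To determine the exact file names to exclude, list the interface configuration files on the node. The path below assumes the default RELIANOID configuration directory; adjust it if your installation differs:

root@noid-ce-01:~# ls /usr/local/relianoid/config/ | grep '^if_'
if_eth0_conf
if_eth0.100_conf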

Please be aware that the RELIANOID CE cluster is managed by the root user and employs rsync over SSH to replicate the configuration from the master node to the backup. This is why passwordless SSH access between the nodes is crucial.
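Conceptually, the replication step behaves like the rsync call below (an illustrative sketch only; the actual command, configuration path, and excludes are built by the cluster service from cluster.conf):

root@noid-ce-01:~# rsync -avz --delete --exclude if_eth0_conf /usr/local/relianoid/config/ root@noid-ce-02:/usr/local/relianoid/config/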

Ensure that the specified $cluster_ip is configured and active on one RELIANOID virtual load balancer, which will be the future master. Once the service is initiated on this node, the configuration file for $cluster_ip will automatically replicate to the backup server.
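Before starting the service, you can verify that the virtual IP is already active on the intended master:

root@noid-ce-01:~# ip addr show eth0 | grep 192.168.101.244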

Start and Stop a Cluster Node #

To activate the cluster service, follow these steps:

1. This step is necessary only for RELIANOID Community Edition 7.1 or earlier versions: You need to set the variable $enable_cluster in the file /etc/init.d/relianoid-ce-cluster to the value:

$enable_cluster="true";

2. The relianoid-ce-cluster service is disabled by default at boot. Execute the following command to enable it for automatic activation after a reboot:

[] root@noid-ce-01:~# systemctl enable relianoid-ce-cluster

Keep in mind that any modification made to the configuration file /usr/local/relianoid/app/ucarp/etc/cluster.conf requires a restart of the cluster service. Therefore, after finalizing the configuration parameters, restart the cluster on both nodes using the following steps:

[] root@noid-ce-01:~# /etc/init.d/relianoid-ce-cluster stop
[] root@noid-ce-01:~# /etc/init.d/relianoid-ce-cluster start

Once the cluster service starts, the prompt on each load balancer changes to display the node's current cluster status:

Master:

[master] root@noid-ce-01:~# 

Backup:

[backup] root@noid-ce-02:~# 

Update Configuration #

After configuring the clustering service, all configuration settings pertaining to virtual services and virtual/floating IPs are automatically replicated across the cluster nodes.

In a stateless cluster, user sessions and connections are not synchronized; that requires a stateful cluster, which is included in our Enterprise Load Balancer.

Upgrading Cluster Nodes #

Upgrading cluster nodes involves updating each node in the cluster with the latest RELIANOID load balancer software or firmware. It is advisable to upgrade the backup node first, and only then upgrade the master node, as shown in the sketch below.
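A conservative sequence, assuming a Debian-based installation upgraded through apt, is to upgrade the backup first, then stop the cluster service on the master to force a failover while it is upgraded:

[backup] root@noid-ce-02:~# apt update && apt upgrade
[master] root@noid-ce-01:~# /etc/init.d/relianoid-ce-cluster stop
root@noid-ce-01:~# apt update && apt upgrade
root@noid-ce-01:~# /etc/init.d/relianoid-ce-cluster start

Stopping the cluster on the master promotes the backup, so the virtual IP keeps serving traffic while the former master is upgraded.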

Logs and Troubleshooting the Cluster Service #

1. Passwordless SSH is a prerequisite between both cluster nodes.
2. Both cluster nodes need to have NTP configured.
3. The noid-cluster-notify service will exclusively operate on the master node. Confirm that noid-cluster-notify is running by executing the following command. On the master node, you should receive output similar to this:

[master] root@noid-ce-01:~# ps -ef | grep noid-cluster-notify
root 16912 1 0 03:20 ? 00:00:00 /usr/bin/perl /usr/local/relianoid/bin/noid-cluster-notify.pl

On the backup node, there should be no output related to noid-cluster-notify when running the command.

[backup] root@noid-ce-02:~# ps -ef | grep noid-cluster-notify
[backup] root@noid-ce-02:~#

4. The logs for the ucarp service are directed to the syslog at /var/log/syslog.
5. Logs for the noid-cluster-notify replication service are written to /var/log/noid-cluster-notify.log.
6. The cluster status is visible in the prompt, which updates dynamically after each command execution. The cluster status is also recorded in the status file /etc/relianoid-ce-cluster.status. If this file is absent, the cluster service will be halted.
7. When the cluster node is promoted to MASTER, the following script is executed: /usr/local/relianoid/app/ucarp/sbin/relianoid-ce-cluster-start.
8. When the cluster node transitions to BACKUP, the following script is executed: /usr/local/relianoid/app/ucarp/sbin/relianoid-ce-cluster-stop.
9. When the cluster node needs to send VRRP advertisements, the following script is executed: /usr/local/relianoid/app/ucarp/sbin/relianoid-ce-cluster-advertisement.
10. If there is a need to modify any parameter of the ucarp execution, you can make adjustments within the run_cluster() subroutine in the script /etc/init.d/relianoid-ce-cluster.
11. The cluster service relies on a VRRP implementation, so multicast packets must be allowed on the switches; see the capture example below.
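To confirm that VRRP advertisements are actually flowing between the nodes, you can capture IP protocol 112 traffic (used by VRRP/CARP, multicast address 224.0.0.18) on the cluster interface, and follow the replication log mentioned above:

[master] root@noid-ce-01:~# tcpdump -n -i eth0 ip proto 112
[master] root@noid-ce-01:~# tail -f /var/log/noid-cluster-notify.log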
