How to configure a cluster in RELIANOID Community Edition v5.0 and v5.9

Important note: noid-cluster-notify can also be found as zeninotify. Please refer to: https://www.relianoid.com/resources/knowledge-base/enterprise-edition-v8-administration-guide/whats-new-in-relianoid-ee-v8/

The RELIANOID Cluster Service can be configured as an independent piece of software outside of the RELIANOID CE core package. This cluster service has been developed to be easily managed and modified by sysadmins, so it can be adapted to the needs of any network architecture.

The following procedure describes how to install and configure the RELIANOID Cluster in case High Availability is required for your Load Balancer.

Configure our official APT repository as follows:

https://www.relianoid.com/knowledge-base/howtos/configure-apt-repository-relianoid-community-edition/
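For reference, a minimal sketch of what the resulting APT configuration may look like is shown below. The file name /etc/apt/sources.list.d/relianoid.list is an assumption, and the repository line is inferred from the installation output further down, so always follow the guide above:

root@lb1 > cat /etc/apt/sources.list.d/relianoid.list
deb http://repo.relianoid.com/ce/v5 stretch main

root@lb1 > apt-get update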

Install RELIANOID CE cluster package #

Once the local package database is updated, please search for the cluster package relianoid-ce-cluster as follows:

root@lb1 > apt-cache search relianoid-ce-cluster
relianoid-ce-cluster - RELIANOID Load Balancer Community Edition Cluster Service

root@lb1 > apt-cache show relianoid-ce-cluster
Package: relianoid-ce-cluster
Version: 1.2
Maintainer: RELIANOID <admin@relianoid.com>
Architecture: i386
Depends: relianoid (>=5.0), liblinux-inotify2-perl, ntp
Priority: optional
Section: admin
Filename: pool/main/z/relianoid-ce-cluster/relianoid-ce-cluster_1.0_i386.deb
Size: 43350
SHA256: e39bb9b8283904db2873287147c885637178e179be5dee67b2c7044039899f35
SHA1: 425d742cde523c93a55b25e96447a8088663a028
MD5sum: 123abcf0eab334a18054802962287dc7
Description: RELIANOID Load Balancer Community Edition Cluster Service
Cluster service for RELIANOID CE, based in ucarp for vrrp implementation and noid-cluster-notify for configuration replication. VRRP through UDP is supported in this version.
Description-md5: 5b668a78c0d00cdf89ac66c47b44ba28

root@lb1 > apt-get install relianoid-ce-cluster
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  liblinux-inotify2-perl
Suggested packages:
  iwatch
The following NEW packages will be installed:
  liblinux-inotify2-perl relianoid-ce-cluster
0 upgraded, 2 newly installed, 0 to remove and 37 not upgraded.
Need to get 43.4 kB/61.4 kB of archives.
After this operation, 60.4 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://repo.relianoid.com/ce/v5 stretch/main i386 relianoid-ce-cluster i386 1.0 [43.4 kB]
Fetched 43.4 kB in 0s (57.3 kB/s)        
Selecting previously unselected package liblinux-inotify2-perl.
(Reading database ... 57851 files and directories currently installed.)
Preparing to unpack .../liblinux-inotify2-perl_1%3a1.22-3_i386.deb ...
Unpacking liblinux-inotify2-perl (1:1.22-3) ...
Selecting previously unselected package relianoid-ce-cluster.
Preparing to unpack .../relianoid-ce-cluster_1.0_i386.deb ...
Unpacking relianoid-ce-cluster (1.0) ...
Setting up liblinux-inotify2-perl (1:1.22-3) ...
Processing triggers for systemd (232-25+deb9u1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up relianoid-ce-cluster (1.0) ...
Completing the RELIANOID CE Cluster installation...

Notice that the RELIANOID CE Cluster uses VRRP, and time synchronization is mandatory for this protocol, so ensure your NTP service is properly configured and the NTP servers are reachable from the Load Balancer.
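As a quick sanity check on both nodes (a sketch, assuming the ntp daemon pulled in as a package dependency is in use), you can list the configured peers; an asterisk in the first column marks the server the node is currently synchronized to:

root@lb1 > ntpq -p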

Configure RELIANOID CE cluster package #

Once the installation is concluded, please configure the cluster service as follows:

Open the configuration file in the path /usr/local/relianoid/app/ucarp/etc/relianoid-cluster.conf

The most important parameters are described next:

#interface used for the cluster, where local_ip and remote_ip are configured
$interface="eth0";

#local IP to be monitored, e.g. 192.168.0.101
$local_ip="192.168.101.242";

#remote IP to be monitored, e.g. 192.168.0.102
$remote_ip="192.168.101.243";

#password used for the VRRP protocol communication
$password="secret";

#unique value for the VRRP cluster in the network
$cluster_id="1";

#virtual IP used in the cluster, this IP will always run in the master node
$cluster_ip="192.168.101.244";

# if the NIC used for the cluster is different from eth0 then please change the excluded conf file in the following line
########
$exclude="--exclude if_eth0_conf";

Notice that only virtual interfaces are replicated, so if you are running more than one NIC or VLAN, they have to be excluded in the cluster configuration file. For example, if eth0 is used for cluster purposes and vlan100 (eth0.100) for load balancing purposes, then:

$exclude="--exclude if_eth0_conf --exclude if_eth0.100_conf";

Notice that the RELIANOID cluster is managed by the root user and it replicates the configuration from the master node to the backup node through rsync (over SSH), so passwordless SSH between both nodes needs to be configured.
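A minimal sketch of one way to set this up, assuming lb2 answers on the remote IP used in the example above and root login over SSH is allowed between the nodes; repeat the same steps in the opposite direction from lb2 to lb1:

root@lb1 > ssh-keygen -t rsa
root@lb1 > ssh-copy-id root@192.168.101.243
root@lb1 > ssh root@192.168.101.243 hostname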

Notice that the defined $cluster_ip has to be configured and UP in one RELIANOID virtual load balancer, the future master; as soon as the service is started in this node, the configuration file for $cluster_ip will be replicated to the backup server automatically.
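Before starting the service, you can confirm that the virtual IP is already UP on the future master node, for example (using the cluster IP from the configuration above):

root@lb1 > ip addr show eth0 | grep 192.168.101.244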

Now enable the cluster service with the following two steps:

First open the file /etc/init.d/relianoid-ce-cluster and change the following variable:

$enable_cluster="true";

Secondly, the relianoid-ce-cluster service is disabled by default at boot; please execute the following command to enable relianoid-ce-cluster after reboot:

[] root@lb1 > systemctl enable relianoid-ce-cluster
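Optionally, verify the boot-time setting with a standard systemd query; depending on how systemd wraps the init script, the result may be reported as enabled or generated:

[] root@lb1 > systemctl is-enabled relianoid-ce-cluster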

Take into account that any change in the configuration file /usr/local/relianoid/app/ucarp/etc/relianoid-cluster.conf requires a restart of the cluster service, so once the configuration parameters are set, please restart the cluster in both nodes as follows:

[] root@lb1 > /etc/init.d/relianoid-ce-cluster stop
[] root@lb1 > /etc/init.d/relianoid-ce-cluster start
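After starting the service on both nodes, a quick way to verify that the underlying ucarp daemon is up and logging is to check the process list and syslog (a sketch; the exact arguments of the ucarp process will vary with your configuration):

[master] root@lb1> ps -ef | grep ucarp
[master] root@lb1> grep -i ucarp /var/log/syslog | tail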

Notice that as soon as the cluster service is running, the prompt in the load balancer is modified in order to show the cluster status of each node:
Master:

[master] root@lb1>

Backup:

[backup] root@lb2>

Logs and troubleshooting #

  1. SSH without password is required between both cluster nodes
  2. ntp is required to be configured in both cluster nodes
  3. The noid-cluster-notify service will only run in the master node; please confirm noid-cluster-notify is running with the following command. You should get something like this in the master node:
    [master] root@lb1> ps -ef | grep noid-cluster-notify
    root 16912 1 0 03:20 ? 00:00:00 /usr/bin/perl /usr/local/relianoid/bin/noid-cluster-notify.pl
    

    And you should see nothing related to noid-cluster-notify in the backup node:

    [backup] root@lb2> ps -ef | grep noid-cluster-notify
    [backup] root@lb2>
  4. Logs for the ucarp service are sent to syslog (/var/log/syslog)
  5. Logs for the noid-cluster-notify replication service are sent to /var/log/noid-cluster-notify.log
  6. Cluster status is shown in the prompt and is updated after any command execution; additionally, the cluster status is saved in the config file /etc/relianoid-ce-cluster.status. If this file doesn't exist then the cluster service is stopped.
  7. When the cluster node promotes to MASTER, the following script is executed: /usr/local/relianoid/app/ucarp/sbin/relianoid-ce-cluster-start
  8. When the cluster node promotes to BACKUP, the following script is executed: /usr/local/relianoid/app/ucarp/sbin/relianoid-ce-cluster-stop
  9. When the cluster node needs to send advertisements, the following script is executed: /usr/local/relianoid/app/ucarp/sbin/relianoid-ce-cluster-advertisement
  10. In case you need to change any parameter of the ucarp execution, you can modify the run_cluster() subroutine in the script /etc/init.d/relianoid-ce-cluster
  11. The cluster service uses a VRRP implementation, so multicast packets need to be allowed in the switches; a capture sketch to verify this is shown below
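If the nodes never agree on who is master, a quick way to check whether advertisements are flowing on the cluster interface is a capture like the following sketch; eth0 and the standard VRRP multicast group 224.0.0.18 are assumptions, and depending on the version the advertisements may instead travel as unicast UDP between $local_ip and $remote_ip:

root@lb1 > tcpdump -n -i eth0 host 224.0.0.18
root@lb1 > tcpdump -n -i eth0 host 192.168.101.242 and host 192.168.101.243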