This guide provides instructions for deploying a Resilio Management Console High Availability (HA) cluster on Linux virtual machines running Debian-based distributions.
Prerequisites
- Two Linux VMs.
- Root or sudo privileges on each VM.
- A shared storage solution using NFS.
- Network connectivity between the cluster nodes.
Step 1: Configure NFS for Shared Storage
(Skip this step if an NFS server already exists; it is only required when you need to set one up.)
On the NFS server, install the required packages:
sudo apt install nfs-kernel-server -y
Create the shared directory:
sudo mkdir -p /mnt/resilio-mc-storage/
Set permissions:
sudo chown -R nobody:nogroup /mnt/resilio-mc-storage/
sudo chmod 777 /mnt/resilio-mc-storage/
Edit the NFS export file:
sudo nano /etc/exports
Export the NFS share by adding the following line to /etc/exports:
/mnt/resilio-mc-storage *(rw,sync,no_subtree_check,no_root_squash)
Apply the changes:
sudo exportfs -ra
Start and enable the NFS service:
sudo systemctl restart nfs-kernel-server
sudo systemctl enable nfs-kernel-server
Allow NFS through the firewall:
sudo ufw allow nfs
sudo ufw reload
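As an optional sanity check on the NFS server, the tools installed above can confirm the export is active:
sudo exportfs -v
showmount -e localhost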
Step 2: Mount NFS Storage on Both VMs
On each VM, install NFS utilities:
sudo apt install nfs-common -y
Create the mount point:
sudo mkdir -p /mnt/resilio-mc-storage
Mount the NFS share:
sudo mount -o nconnect=16 <NFS SERVER IP>:<NFS EXPORT PATH> /mnt/resilio-mc-storage
To make the mount persistent, add the following to /etc/fstab:
<NFS SERVER IP>:<NFS EXPORT PATH> /mnt/resilio-mc-storage nfs defaults,nconnect=16 0 0
To apply the changes without rebooting:
sudo mount -a
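To confirm the share is actually mounted on each VM, a quick check with findmnt and df (both part of standard Debian-based installs) should show an nfs entry for the mount point:
findmnt /mnt/resilio-mc-storage
df -h /mnt/resilio-mc-storage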
Step 3: Install Dependencies
Ensure all necessary packages are installed.
Debian-Based Distributions:
sudo apt update -y
sudo apt install -y wget curl tar unzip nano
Step 4: Configure Firewall Rules
Allow traffic on necessary ports:
sudo ufw allow nfs
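If the Management Console web UI port is not already open on the VMs, you will likely also need to allow it. The example below assumes the default 8443/tcp port used in the final verification step; adjust if your installation uses a different port:
sudo ufw allow 8443/tcp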
Step 5: Install Management Console
Install Resilio Management Console on both cluster nodes. For details, see Management Console installation.
Step 6: Configure Resilio Management Console Service
Edit the systemd service file:
sudo nano /lib/systemd/system/resilio-connect-management-console.service
Modify the service to use the NFS mount:
After=network.target remote-fs.target nfs-client.target
Requires=remote-fs.target nfs-client.target
ExecStart=/opt/resilio-connect-management-console/srvctrl run --appdata /mnt/resilio-mc-storage
Example
[Unit]
Description=Resilio Connect Management Console service
Documentation=https://connect.resilio.com
After=network.target remote-fs.target nfs-client.target
Requires=remote-fs.target nfs-client.target
[Service]
Type=simple
User=rslconsole
Group=rslconsole
UMask=0002
Restart=on-failure
TimeoutSec=600
ExecStart=/opt/resilio-connect-management-console/srvctrl run --appdata /mnt/resilio-mc-storage
ExecStop=/bin/kill -s SIGTERM $MAINPID
[Install]
WantedBy=multi-user.target
Reload systemd so it picks up the modified unit (the service does not need to be enabled here, since Pacemaker will start it via the systemd resource created in Step 8):
sudo systemctl daemon-reload
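As an optional check that systemd picked up the edited unit, print the unit file it will actually use and confirm the --appdata path points at the NFS mount:
systemctl cat resilio-connect-management-console.service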
Step 7: Cluster Configuration
Install Pacemaker, Corosync, and the pcs command-line tools on both VMs:
sudo apt install -y pacemaker corosync pcs resource-agents-base resource-agents-extra
sudo systemctl enable --now corosync pacemaker pcsd
Configure firewall ports:
sudo ufw allow 2224/tcp # pcsd web service
sudo ufw allow 3121/tcp # Pacemaker Remote
sudo ufw allow 5403/tcp # Corosync QDevice (corosync-qnetd)
sudo ufw allow 5404:5406/udp # Corosync cluster communication
sudo ufw allow 21064/tcp # DLM
Configure Cluster Nodes
Edit /etc/corosync/corosync.conf on the primary VM and replace bindnetaddr with the network address of the private subnet corosync should use.
For example, if the local interface is 192.168.5.92 with netmask 255.255.255.0, set bindnetaddr to 192.168.5.0. If the local interface is 192.168.5.92 with netmask 255.255.255.192, set bindnetaddr to 192.168.5.64, and so forth.
If you have multiple interfaces, use the interface you would like corosync to communicate over.
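If you are unsure of the network address for your interface and netmask, the ipcalc utility (installable with apt; not included in the dependencies above) can compute it. For the second example above, it reports 192.168.5.64 as the network address:
sudo apt install -y ipcalc
ipcalc 192.168.5.92 255.255.255.192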
In the nodelist section, set ring0_addr for each node to its private IP (node 1 is the primary VM, node 2 the second VM).
totem {
    version: 2
    cluster_name: resilio_mc_cluster
    crypto_cipher: none
    crypto_hash: none
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 10.0.0.0
    }
}
logging {
    fileline: off
    to_stderr: yes
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    debug: off
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
nodelist {
    node {
        name: Node001
        nodeid: 1
        ring0_addr: <primary-vm-private-ip>
    }
    node {
        name: Node002
        nodeid: 2
        ring0_addr: <secondary-vm-private-ip>
    }
}
Restart the services:
sudo systemctl restart corosync
sudo systemctl restart pacemaker
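As an optional check after the restart, corosync's own tooling can confirm that the ring is up and both nodes appear in the membership (run on either node):
sudo corosync-cfgtool -s
sudo corosync-cmapctl | grep members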
Make sure the hacluster user exists; it is created automatically when the cluster packages are installed:
id hacluster
Set a password for hacluster on both nodes:
sudo passwd hacluster
Authenticate the cluster nodes, entering the hacluster password when prompted (run on both nodes):
sudo pcs host auth <primary-vm-private-ip> <secondary-vm-private-ip> -u hacluster
Create the cluster:
sudo pcs cluster setup mccluster <primary-vm-private-ip> <secondary-vm-private-ip>
Start and enable the cluster:
sudo pcs cluster start --all
sudo pcs cluster enable --all
Verify that the cluster is running and both nodes are online (run on both VMs):
sudo pcs status
Step 8: Configure Cluster Resources
Create cluster resources:
sudo pcs resource create resilio-mc-storage ocf:heartbeat:Filesystem device="<NFSSERVER>:/mnt/resilio-mc-storage" directory="/mnt/resilio-mc-storage" fstype="nfs" op monitor interval=10s on-fail=restart
sudo pcs resource create resilio-mc-app systemd:resilio-connect-management-console op monitor interval=10s on-fail=restart
Create a VIP resource. This shared IP moves between the nodes during failover. Skip this step if you want to use a load balancer instead.
sudo pcs resource create resilio-mc-vip ocf:heartbeat:IPaddr2 ip=10.0.0.10 cidr_netmask=24 op monitor interval=30s
Set dependencies:
sudo pcs resource group add resilio-mc resilio-mc-vip resilio-mc-storage resilio-mc-app
sudo pcs constraint colocation add resilio-mc-app with resilio-mc-storage
sudo pcs constraint order start resilio-mc-storage then resilio-mc-app
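To review the resulting group membership, ordering, and colocation rules before moving on, the standard pcs listing commands can be used:
sudo pcs resource status
sudo pcs constraint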
Set failover policy:
sudo pcs property set stonith-enabled=false
sudo pcs property set no-quorum-policy=ignore
Verify cluster status:
sudo pcs status
Check cluster logs:
journalctl -xe -u pacemaker
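As an alternative, crm_mon (installed with Pacemaker) shows the same cluster status; the -1 flag prints a one-shot snapshot instead of refreshing continuously:
sudo crm_mon -1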
Step 9: Test and Validate Cluster Setup
- Access the Resilio Management Console via https://<VM-IP>:8443 (or via the VIP or load balancer address configured in Step 8).
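A simple failover test, assuming the node names Node001/Node002 from the corosync configuration above, is to put the node currently running the resilio-mc group into standby, confirm the group (including the VIP) moves to the other node while the console stays reachable, and then bring the node back:
sudo pcs node standby Node001
sudo pcs status
sudo pcs node unstandby Node001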