The third-party software solutions mentioned here are provided for illustration purposes only. This list is not comprehensive.
Getting the system ready
Note: the two servers involved must follow the hardware sizing recommendations defined in the following section: Hardware Sizing
The content you want to share between the two servers can either reside on a shared storage space such as a SAN (Storage Area Network), or be replicated between two separate storage spaces.
With replication-based high availability, major issues with access to shared disk resources can occur during a loss of service. The most typical such issue, with potentially disastrous consequences, is a split-brain situation, in which both servers believe they are the active node and modify their own copy of the data independently.
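As an illustration of the replication approach, one commonly used third-party tool is DRBD, which mirrors a block device between the two servers over the network. The sketch below is a minimal DRBD resource definition; host names, IP addresses and disk devices are placeholders, not BlueMind requirements:

    resource bm_data {
        net {
            protocol C;                  # fully synchronous replication
        }
        on bm-node1 {
            device    /dev/drbd0;        # replicated device seen by the system
            disk      /dev/sdb1;         # underlying local disk (placeholder)
            address   192.0.2.11:7788;
            meta-disk internal;
        }
        on bm-node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.0.2.12:7788;
            meta-disk internal;
        }
    }

A split-brain can occur when both nodes lose sight of each other and each promotes its own copy of the data; fencing (see the STONITH section below) is the usual safeguard against this.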
Data to be made available to both servers
The data located in the following directories must be accessible to both servers, and access to it must be managed by the HA handling system:
The Cyrus database, located in the following directory, must also be added to this data:
REMINDER: /var/spool/cyrus MUST NOT be stored on an NFS mount.
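As an illustration, with a Pacemaker/Corosync cluster (one possible HA stack among others), the shared or replicated volume can be mounted on the active node only by declaring it as a cluster resource. The device, mount point and file system below are placeholders to adapt to your own layout:

    # mount the volume holding BlueMind data on whichever node is active
    pcs resource create bm_data_fs ocf:heartbeat:Filesystem \
        device=/dev/drbd0 directory=/var/spool/cyrus fstype=ext4 \
        op monitor interval=20s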
To work properly, BlueMind must be accessible through a single URL/IP. We therefore recommend that you use a system that is capable of handling floating (or virtual) IP addresses.
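For example, with Pacemaker the floating IP can be declared as a cluster resource so that it always follows the active node; the address below is a placeholder:

    # virtual IP through which BlueMind is reached, moved automatically on failover
    pcs resource create bm_vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 \
        op monitor interval=30s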
Please see the dedicated page: Monitoring
Setting Up High Availability
Data and services that need to be managed by HA
High availability-based synchronization of BlueMind configuration files
The BlueMind configuration files that must be synchronized in real time by the HA handling system are located under /etc.
The following files must also be synchronized:
Here are a few examples of how to synchronize configuration files in real time:
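For instance, one simple approach (among others such as lsyncd or csync2) is to watch the configuration directories with inotify and push every change to the standby server with rsync. The directory (/etc/bm is used here only as an example) and the standby host name are placeholders:

    #!/bin/bash
    # push configuration changes to the standby server as soon as they happen
    inotifywait -m -r -e modify,create,delete,move /etc/bm |
    while read -r directory event file; do
        rsync -az --delete /etc/bm/ standby-server:/etc/bm/
    done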
Managing the BlueMind update
The key steps for updating a High Availability-based deployment of BlueMind are described below:
- Before you start the BlueMind update, disable the high availability handling services.
- Update the packages on both servers.
- Next, on the main server with the public IP address only, carry out the post-installation configuration as described in: Post-installation Configuration.
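As a sketch of these steps, assuming a Pacemaker/Corosync cluster managed with pcs on a Debian-based system (adapt the commands to your actual HA stack and distribution):

    # 1. stop the cluster from reacting while BlueMind is being updated
    pcs property set maintenance-mode=true

    # 2. update the BlueMind packages on both servers
    apt update && apt upgrade

    # 3. on the main server (public IP) only, run the post-installation
    #    configuration, then hand control back to the cluster
    pcs property set maintenance-mode=false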
STONITH, which stands for Shoot The Other Node In The Head, is a fencing (node isolation) technique in cluster management. Its purpose is to shut down a failed node of the cluster remotely, either through software or by directly cutting off its power supply.
This is done at the hardware infrastructure level.
This technique can, for instance, be implemented using IPMI (Intelligent Platform Management Interface) tools. IPMI is a server management interface specification whose implementations include freeIPMI, OpenIPMI, ipmitool and others.
As far as hardware is concerned, this can be implemented on dedicated hardware or, for DELL equipment, using iDRAC cards.
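As an illustration, a node can be powered off through its IPMI/BMC interface with ipmitool, and the same mechanism can be declared as a STONITH resource in a Pacemaker cluster. The BMC address, credentials and node name below are placeholders:

    # power the failed node off through its BMC
    ipmitool -I lanplus -H 192.0.2.50 -U admin -P secret chassis power off

    # declare an IPMI-based fencing agent in Pacemaker
    pcs stonith create fence_bm_node2 fence_ipmilan \
        ip=192.0.2.50 username=admin password=secret lanplus=1 \
        pcmk_host_list=bm-node2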