Info |
---|
The third-party software solutions mentioned here are provided for illustration purposes only. This list is not comprehensive. |
Getting the system ready
Note: the two servers involved must follow the hardware sizing recommendations defined in the following section: Hardware Sizing
Storage space
The content you want to share between the two servers can either reside on a common shared storage space such as a SAN (Storage Area Network), or be replicated between two separate storage spaces.
Tip |
---|
Replication-based high availability can cause major issues with access to shared disk resources during a loss of service. The most typical resource access issue, with potentially disastrous consequences, is the split-brain situation. |
Note |
---|
The cyrus-imap component does not support NFS storage. As a result, regardless of the type of replicated storage you choose, the data handled by this component (/var/spool/cyrus and /var/lib/cyrus) must be on block-device storage using technologies such as Fibre Channel or iSCSI. |
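You can check the filesystem type backing these directories on each server. A minimal sketch using standard GNU coreutils commands; the expectation is a local block-device filesystem (ext4, xfs, etc. on a Fibre Channel or iSCSI LUN), never nfs:

    # Show the filesystem type backing the cyrus directories
    df -T /var/spool/cyrus /var/lib/cyrus
    # Alternative: print only the filesystem type name
    stat -f -c %T /var/spool/cyrus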
Data to be made available between both servers
The data located in the following directories must be visible to both servers, and access to it must be managed by the HA handling system.
Multiexcerpt include: list-bmfiles-varspool (from the page Liste des services et donnees BlueMind)
The cyrus and PostgreSQL databases located in the following directories must also be added to this data:
- /var/lib/cyrus
- /var/lib/postgresql
Tip |
---|
This data must therefore be located in a storage space (SAN storage, GFS cluster, etc.) that allows the passive server to access it during a switchover. |
Warning |
---|
REMINDER: /var/spool/cyrus MUST NOT be stored on an NFS server mount. |
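As an illustration only, here is one possible layout: a single SAN/iSCSI volume mounted on the active node and bind-mounted into the BlueMind paths. The device and mount point names are hypothetical, and in practice these mounts would be performed by the HA handling system during a switchover, not by a static /etc/fstab entry:

    # Hypothetical sketch, executed on the active node only
    mount /dev/mapper/bm-data /srv/bm-data                  # SAN/iSCSI LUN, ext4 or xfs
    mount --bind /srv/bm-data/spool-cyrus /var/spool/cyrus
    mount --bind /srv/bm-data/lib-cyrus /var/lib/cyrus
    mount --bind /srv/bm-data/postgresql /var/lib/postgresql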
Network
To work properly, BlueMind must be accessible through a single URL/IP. We therefore recommend that you use a system capable of handling floating (or virtual) IP addresses.
Note |
---|
BlueMind's front-end access URL MUST always be the same. |
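As a minimal illustration of what the HA system typically does with a floating IP during a failover (the address, prefix and interface are hypothetical placeholders):

    # On the node becoming active: take over the floating IP and advertise it
    ip addr add 192.0.2.10/24 dev eth0
    arping -c 3 -U -I eth0 192.0.2.10   # gratuitous ARP (iputils arping) so the switch learns the move
    # On the node becoming passive: release the floating IP
    ip addr del 192.0.2.10/24 dev eth0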
Monitoring scripts
Multiexcerpt include: monitoring_scripts (from the page Monitoring)
Setting Up High Availability
Data and services that need to be managed by HA
BlueMind configuration files to synchronize
BlueMind's configuration files that must be synchronized in real time by the HA handling system are listed below.
Multiexcerpt include: list-bmfiles-etc (from the page Liste des services et donnees BlueMind)
The following files must also be synchronized:
- /usr/share/bm-elasticsearch/config/elasticsearch.yml
- /etc/aliases
- /etc/aliases.db
- /etc/sysctl.conf
- /etc/ssl/certs/bm_cert.pem
- /var/lib/bm-ca/ca-cert.pem
Tip |
---|
Here are a few examples of how to synchronize configuration files in real time (see the sketch below this panel):
- incron, based on inotify, can trigger jobs depending on a file's status, for example. The official documentation is available on the vendor's website.
- files can be copied with rsync over ssh, for example, as shown on this website.
- other tools include lsyncd and csync2
|
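For instance, a minimal rsync-over-ssh sketch that incron, lsyncd or a simple cron job could trigger; the standby host name bm-standby is a placeholder, and the list only covers the additional files mentioned above:

    # Copy the listed configuration files to the standby server, preserving full paths (-R)
    for f in /usr/share/bm-elasticsearch/config/elasticsearch.yml \
             /etc/aliases /etc/aliases.db /etc/sysctl.conf \
             /etc/ssl/certs/bm_cert.pem /var/lib/bm-ca/ca-cert.pem; do
        rsync -aR "$f" bm-standby:/
    done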
Managing the BlueMind update
The key steps for updating a High Availability-based deployment of BlueMind are described below:
Note |
---|
- Before you start the BlueMind update, disable the high availability handling services.
- Update the packages on both servers.
- Next, on the main server only (the one with the public IP address), carry out the post-installation configuration as described in the section: Post-installation configuration.
|
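An illustrative outline of the steps above for a Debian/Ubuntu installation; how you stop the HA handling services depends on your stack (Pacemaker, keepalived, custom scripts...), so the unit name below is a placeholder:

    # 1. On both servers: disable the high availability handling services
    systemctl stop my-ha-stack.service      # placeholder unit name, adapt to your tooling
    # 2. On both servers: update the BlueMind packages
    apt-get update && apt-get dist-upgrade
    # 3. On the main server only (the one holding the public IP address),
    #    carry out the BlueMind post-installation configuration as documented.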
STONITH
STONITH, which stands for Shoot The Other Node In The Head, is a fencing (node isolation) technique used in cluster management. Its purpose is to shut down the failed server of a cluster remotely, either through software or by directly cutting off its power supply.
This is done at the hardware infrastructure level.
Info |
---|
This safeguard greatly reduces the risk of data corruption in complex loss-of-service situations, for example a split-brain failure, which leads both servers to consider themselves the sole master and to attempt to access the shared storage resource at the same time. With data replication-based high availability, the risk of data corruption is high. |
This technique can, for instance, be implemented using IPMI (Intelligent Platform Management Interface) tools. IPMI is a server management interface specification whose implementations include freeIPMI, OpenIPMI, ipmitool, etc.
On the hardware side, this can be implemented with dedicated equipment or simply by using, for example, iDRAC cards on DELL hardware.
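For example, a STONITH action can boil down to an ipmitool call against the failed node's out-of-band management interface; the BMC address and credentials below are placeholders:

    # Power the failed node off through IPMI over LAN
    ipmitool -I lanplus -H 192.0.2.20 -U fencing-user -P 'secret' chassis power off
    # ...and verify the result
    ipmitool -I lanplus -H 192.0.2.20 -U fencing-user -P 'secret' chassis power status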