4.12. Redundant setup#

Two appliances can be set up in a redundant scenario based on MySQL master-master replication.

The setup is started on an existing Appliance A and pushed to a fresh Appliance B. After a successful setup, the data is synchronized from A to B and vice versa. The communication between the appliances is encrypted using IPSec.

Note

As the communication between the two redundant partners is done via IPSec, please ensure that IPSec traffic is possible between them. In particular, UDP port 500 (IKE), UDP port 4500 (NAT traversal) and the ESP protocol (IP protocol 50) must not be blocked by any intermediate firewall.

The Appliance Web UI shows the status of the configured redundancy but does not provide a way to set up the redundancy.

4.12.1. Setup redundancy#

Note

During the setup of the redundancy the web server needs to be restarted. To make the setup more robust, it is therefore started from the command line: either log in to the appliance with SSH or use a console provided by your hypervisor.

Warning

All token data on Appliance B will be lost!

Run the following steps:

  1. Log in to your Appliance A and issue the following command:

    appliance_configure.py -c setup_redundancy -p <IP of Appliance B>
    
    Welcome to the redundancy wizard.
    Please make sure that timer_entropyd is running.
    Do you want to setup IPSec-based encryption between the machines? [y/n] y
    You will have to verify the partner's host key fingerprint and give the partner's root password in
    order to start the setup process.
    Since the SSH command only waits for input for a certain time, please make sure that:
     * the partner's host key fingerprint
     * the partner's root password
    are close at hand.
    You can find the partner's host key fingerprint at http://10.76.126.196:8443/ at
    System-Advanced-Redundancy.
    Please hit RETURN to continue:
    

Note

For security reasons you have to ensure that the partner machine is the correct one by validating its SSH fingerprint. This fingerprint can be viewed in the Appliance Web UI of Appliance B. Please make sure that you have the root password and the fingerprint of Appliance B at hand before you begin the procedure.

  2. Now you are asked to confirm that the fingerprint of Appliance B is correct:

    The authenticity of host '10.76.126.196 (10.76.126.196)' can't be established.
    RSA key fingerprint is 4c:bc:02:9f:e8:27:01:bc:64:c7:6e:0e:bc:cb:5e:1a.
    Are you sure you want to continue connecting (yes/no)?

Note

If the fingerprints do not match, the machine you are talking to is not your Appliance B! Please contact your reseller, you might be the victim of a man-in-the-middle attack!

If the fingerprints match, you can enter the root password of Appliance B.
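Fingerprints are often displayed with different letter case or separators, which makes eyeball comparison error-prone. The following sketch normalizes two fingerprint strings before comparing them; the fingerprint value is the example from this manual, not a real key, and the variable names are purely illustrative.

```shell
# Normalize an MD5-style fingerprint: strip colons, lowercase hex digits.
normalize() { echo "$1" | tr -d ':' | tr 'A-F' 'a-f'; }

# Hypothetical values: what the wizard printed vs. what the Web UI shows.
shown_by_wizard="4c:bc:02:9f:e8:27:01:bc:64:c7:6e:0e:bc:cb:5e:1a"
shown_in_web_ui="4C:BC:02:9F:E8:27:01:BC:64:C7:6E:0E:BC:CB:5E:1A"

if [ "$(normalize "$shown_by_wizard")" = "$(normalize "$shown_in_web_ui")" ]; then
  echo "fingerprints match - safe to continue"
else
  echo "fingerprints differ - abort the setup"
fi
```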

  3. Now all necessary data is transferred from Appliance A to Appliance B via SSH. Some messages are displayed during the transfer. If an error occurs, you can check the log file /var/log/lseappliance/appliance.log.

  4. Finally you will get the following message:

    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.76.126.196' (RSA) to the list of known hosts.
    root@10.76.126.196's password:
    Success!
    Terminating the SSH tunnel ... (this is expected!)
    

    At this point the redundancy is set up successfully and the replication traffic is encrypted via IPSec.

    Congratulations, you are done!
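If you want to double-check that the IPSec tunnel really came up, you can inspect the IPSec status on either appliance. This is a minimal sketch assuming the appliance ships a strongSwan-style `ipsec status` command (this manual does not state which IPSec implementation is used); the sample output below is made up for illustration.

```shell
# On the appliance you would run:  status=$(ipsec status)
# Here we use a fabricated sample line for illustration.
status='redundancy[1]: ESTABLISHED 5 minutes ago, 10.76.126.195...10.76.126.196'

# An established security association indicates the tunnel is up.
if echo "$status" | grep -q 'ESTABLISHED'; then
  tunnel="up"
else
  tunnel="down"
fi
echo "IPSec tunnel is $tunnel"
```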

4.12.2. Known Errors#

If you take too long to enter the root password of Appliance B, the SSH connection may time out. Then you might get a message like this:

A SSH error occurred:
---- BEGIN ----
Warning: Permanently added '10.76.126.196' (RSA) to the list of known hosts.
Connection closed by 10.76.126.196
---- END ----
It's possible that this happened due to the SSH server timeout.
Since the redundancy setup process hasn't been started yet, it's safe
to re-run `appliance_configure.py -c setup_redundancy -p 10.76.126.196`.

In this case you can simply rerun the setup command.

4.12.3. Reverting Redundancy#

If you want to stop using the redundancy, you need to execute the following command on both appliances:

appliance_configure.py -c reset_redundancy

Afterwards each appliance works as a standalone appliance and no longer knows about its former partner.

4.12.4. Repair an out-of-sync redundant setup#

If the two appliances do not synchronize their databases anymore, you have to reinitialize the redundancy:

  1. Deactivate the redundancy on both machines:

appliance_configure.py -c reset_redundancy

  2. Set up the redundancy as described above, pushing from the machine with the more recent data.
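To judge whether the replication is actually broken before tearing it down, you can inspect the MySQL replication status on each appliance. The following sketch assumes shell access and a local MySQL root account (on the appliance you would run `mysql -e "SHOW SLAVE STATUS\G"`); the sample output below is fabricated for illustration, and the health criterion shown (both replication threads running) is a common rule of thumb, not an official appliance check.

```shell
# Fabricated sample of the relevant SHOW SLAVE STATUS fields.
slave_status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Replication is considered healthy only if both threads are running.
if echo "$slave_status" | grep -q 'Slave_IO_Running: Yes' \
   && echo "$slave_status" | grep -q 'Slave_SQL_Running: Yes'; then
  replication="healthy"
else
  replication="broken"
fi
echo "replication is $replication"
```

If either thread reports `No`, reinitializing the redundancy as described above is the documented way to recover.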