Configure disaster recovery
Use this procedure to set up a disaster recovery configuration with a primary and a mirror instance.
Disaster recovery setup configures periodic backups from the primary to a shared, mirrored storage volume. If the primary cluster fails, then a secondary cluster can take over its operations after a small manual intervention.
If the production cluster is destroyed, monitoring and alerting notify the administrator. The administrator can then make the secondary appliance into the new primary by starting it and recovering from the backups generated by the primary.
This system makes it possible to restore the last backed-up state from the primary to the secondary server. If you configure daily backups, any metadata or data loaded or created after the last backup is not included in the restore.
Both the primary and secondary appliances must use a shared storage volume, and each must have an active, running ThoughtSpot cluster. You can use an NFS or Samba volume for your share. If you choose NFS, keep in mind that a volume that is too slow can break backups or significantly slow restore performance. The following are good guidelines for choosing storage:
Provision a dedicated storage volume for periodic backups.
Do not use the backup volume for loading data or any other purposes. If backups fill up this storage, other components will suffer.
To ensure better supportability and continuity if local hard disks fail, the shared storage volume should be network based.
ThoughtSpot supports shared storage by mounting NFS or CIFS/Samba based volumes.
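You can sanity-check a candidate volume against these guidelines from the shell. The sketch below assumes a GNU/Linux host with GNU coreutils df; the thresholds and mount paths are illustrative, not ThoughtSpot requirements.

```shell
#!/usr/bin/env bash
# Sanity-check a candidate backup volume: is it network-backed, and
# does it have enough free space? (Sketch; thresholds are examples.)

# Return 0 if the filesystem at $1 is NFS or CIFS/Samba.
is_network_fs() {
  local fstype
  fstype=$(df --output=fstype "$1" | tail -n 1)
  case "$fstype" in
    nfs*|cifs|smb*) return 0 ;;
    *)              return 1 ;;
  esac
}

# Return 0 if the filesystem at $2 has at least $1 GB free.
has_free_gb() {
  local need=$1 avail
  avail=$(df -BG --output=avail "$2" | tail -n 1 | tr -dc '0-9')
  [ "$avail" -ge "$need" ]
}
```

For example, run `is_network_fs /mnt && has_free_gb 20 /mnt` before pointing backups at /mnt.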
Before you begin, determine whether the shared volume is a Samba or NFS volume.
To find out, use the telnet command to probe the standard service ports (2049 for NFS, 445 for Samba):
Telnet confirms NFS:
$ telnet 192.168.2.216 2049
Trying 192.168.2.216...
Connected to 192.168.2.216.
Escape character is '^]'.
Telnet confirms Samba:
$ telnet 192.168.2.216 445
Trying 192.168.2.216...
Connected to 192.168.2.216.
Escape character is '^]'.
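If telnet is not installed on the appliance, bash's built-in /dev/tcp redirection can perform the same probe. A minimal sketch (the server address below is an example):

```shell
#!/usr/bin/env bash
# Probe a TCP port to see whether a service is listening.
# Usage: check_port HOST PORT   (returns 0 if the connection succeeds)
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: distinguish NFS (2049) from Samba (445) on a candidate server.
# check_port 192.168.2.216 2049 && echo "NFS port open"
# check_port 192.168.2.216 445  && echo "Samba port open"
```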
Your shared volume should have a minimum of 15GB free, and at least 20GB free for a full backup. To configure and mount the shared volume on the primary and mirror appliances, complete the following steps:
SSH into the primary appliance.
Ensure that the primary appliance has a ThoughtSpot cluster up and running.
The primary appliance contains the cluster you are protecting with the recovery plan.
Create a directory to act as your mount_point.
sudo mkdir <mount_point>
Set the directory owner to the admin user.
sudo chown -R admin:admin <mount_point>
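The mkdir and chown steps above can be wrapped in one small helper. This is a sketch; run it with sudo on the appliance, and note that the admin:admin default matches the chown command above.

```shell
#!/usr/bin/env bash
# Create a mount-point directory and set its owner in one step
# (sketch of the mkdir/chown steps above; run with sudo).
prepare_mount_point() {
  local dir=$1 owner=${2:-admin:admin}
  mkdir -p "$dir" && chown -R "$owner" "$dir"
}
```

For example, as root: prepare_mount_point /mnt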
Use the tscli nas subcommand to create a NAS mount on all of the cluster nodes. Run tscli nas mount-nfs or tscli nas mount-cifs.
Use the command-line help (tscli nas -h) or the documentation to view all the nas subcommand options. The following are some samples to help you:
Example invocations
Samba share:
tscli nas mount-cifs --server 192.168.4.216 --path_on_server /bigstore_share --mount_point /mnt --username admin --password sambashare --uid 1001 --gid 1001
Samba share with Windows AD authentication:
tscli nas mount-cifs --server 172.27.1.75 --path_on_server /elc --mount_point /home/admin/etl/external_datadir --username COMPANYCO/thoughtspot_svc --password 'ts123PDI!' --uid 1001 --gid 1001
NFS share:
tscli nas mount-nfs --server 192.168.4.132 --path_on_server /data/toolchain --mount_point /mnt
Log into the target machine.
Ensure that the target machine is running a ThoughtSpot cluster. Note that the clusters on the primary and target machines do not need to be on the same ThoughtSpot version.
If a cluster is not running on the target machine, contact ThoughtSpot Support to create a cluster.
Repeat steps 3-5 on the target machine.
The target machine and the primary machine should both be accessing the shared volume. The configuration of the mount point should be identical on both machines.
Test the configuration by creating a file in the shared volume as the admin user on the target machine.
Return to the primary server and make sure you can edit the file.
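The create-then-edit check above can be scripted. The sketch below writes and reads back a probe file; run it on each appliance against the same mount point, and if the file written on one side is visible and editable on the other, the share is configured consistently. The file name is arbitrary.

```shell
#!/usr/bin/env bash
# Verify that a mounted share is writable and readable (sketch).
# Run on both the target and the primary against the same mount point.
verify_rw() {
  local f="$1/dr_share_check"
  echo "written by $(hostname)" >> "$f" || return 1  # create or append
  grep -q "written by" "$f"                          # read back
}
```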
If you haven’t already done so, SSH into the primary server.
Run the tscli backup-policy create command.
The command opens a vi editor for you to configure the backup policy. Make sure your policy points to the NAS mount on the primary appliance.
When choosing times and frequencies for periodic backups, choose a reasonable frequency. Do not schedule backups too close together, since a new backup cannot start while another backup is still running. Avoid backing up when the system is under heavy load, such as during peak usage or a large data load.
If you are unfamiliar with the policy format, see Configure periodic backups.
Write and save the file to store your configuration.
By default, newly created policies are automatically enabled.
Verify the policy using the tscli backup periodic-config <name> command, where <name> is the name of the policy you created in the previous step.
SSH into the secondary recovery appliance.
Use the tscli dr-mirror subcommand to start the mirror cluster.
tscli dr-mirror start
Verify that the cluster has started running in mirror mode:
tscli dr-mirror status
It may take some time for the cluster to begin acting as a mirror.
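Because mirror mode can take a while to come up, you may prefer to poll rather than re-run the status command by hand. Below is a generic retry helper; the grep pattern in the commented example is an assumption about the status output, so check it against what tscli dr-mirror status actually prints.

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds or the attempts run out.
# Usage: wait_for ATTEMPTS INTERVAL_SECONDS COMMAND [ARGS...]
wait_for() {
  local attempts=$1 interval=$2 i
  shift 2
  for ((i = 0; i < attempts; i++)); do
    "$@" && return 0
    sleep "$interval"
  done
  return 1
}

# Example: poll every 30 seconds, up to 30 minutes, for mirror mode.
# (The grep pattern is a guess at the status output; adjust as needed.)
# wait_for 60 30 sh -c 'tscli dr-mirror status | grep -qi mirror'
```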
If the primary cluster fails, the secondary cluster can take over its operations after a small manual intervention. The manual procedure makes the secondary instance into the primary.
Note: You should perform this procedure under the supervision of ThoughtSpot customer support.
Contact ThoughtSpot customer support.
If the primary ThoughtSpot cluster is still running, stop it and disconnect it from the network.
SSH into the secondary cluster.
Stop the mirror cluster.
tscli dr-mirror stop
Verify the mirror has stopped.
tscli dr-mirror status
Start the new primary cluster.
tscli cluster start
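The stop/verify/start sequence above can be collected into a single function. This is only a sketch of the documented sequence, it assumes tscli is on the PATH, and it must still be run under ThoughtSpot Support supervision.

```shell
#!/usr/bin/env bash
# Sketch of the secondary-promotion sequence documented above.
# Run only under ThoughtSpot customer support supervision.
promote_secondary() {
  tscli dr-mirror stop || return 1  # stop the mirror cluster
  tscli dr-mirror status            # inspect output: mirror stopped?
  tscli cluster start               # start as the new primary
}
```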
Deploy a new mirror.
Set up a backup policy on your new primary cluster.