The following steps complete the configuration of replication between two peers. Further steps are needed to prepare for smooth failover or to perform failover. See Deploying a Failed-Over Replication Peer as a Working Cluster.
Follow this workflow to configure async replication between one cluster and another (a VAST CLI sketch of the sequence follows the list):
- Configuring Replication VIP Pools. A dedicated VIP pool must be created for replication on each of the peer clusters. The VIP pool role must be set to replication. You can use this VIP pool to control which CNodes are used to replicate data between the peers, although this is not mandatory.
- If you want to configure replication in secure mode, with mTLS encryption, make sure that mTLS certificates are installed on both participating clusters.
- Creating a Replication Peer. This is the configuration of replication to another cluster. The peer configuration is mirrored on the destination peer.
- Creating Protection Policies for Async Replication. This is a policy governing the schedule and retention parameters for replicating data to the configured peer.
- Creating a Protected Path. This defines a data path on the cluster to replicate, the destination path on the peer, and the protection policy. You can create multiple protected paths using the same protection policy and replication peer. On the remote peer, you can also set up multiple protected paths with the local peer as the destination. In other words, replication can be set up in both directions between a pair of peers.
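The following VAST CLI sketch strings together the commands that are shown individually in the sections below. It assumes the replication VIP pools (and, for secure mode, the mTLS certificates) are already in place; the names, IDs, IP addresses and paths are the placeholder values used in those examples and should be replaced with your own.
Create the replication peer (mirrored to the remote cluster):
vcli: admin> replicationpeer create --name vastnativebackup --remote-leading-vip 198.51.100.200 --local-vip-pool-id 3
Create a protection policy that replicates to that peer:
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name protect-pol1 --peer-id 1
Create the protected path that starts replicating the local path:
vcli: admin> protectedpath create --name backupthisdir --protection-policy-id 1 --source-dir / --target-exported-dir /backup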
Note
This feature is supported for a group of clusters, all of which must be running VAST Cluster 4.7.
Configuring group replication involves creating peer relationships and replication streams between every peer in the group and every other peer in the group. Streams between destination peers have a standby role. They do not replicate until and unless one of the peers in the stream becomes the source peer in the group replication relationship.
Follow the steps below to complete the configuration of a replication group, with a source peer replicating to multiple destination peers and standby protected paths between every destination peer and every other destination peer in the group.
Further steps are needed to prepare for smooth failover or to perform failover. See Deploying a Failed-Over Replication Peer as a Working Cluster.
Follow this workflow to configure async replication across a group of peers (a CLI sketch of the primary-cluster steps follows the list):
- Configure a VIP pool for replication on each of the clusters. A dedicated VIP pool must be created for replication on each of the peer clusters. The VIP pool role must be set to replication. You can use this VIP pool to control which CNodes are used to replicate data between the peers, although this is not mandatory.
- If you want to configure replication in secure mode, with mTLS encryption, make sure that mTLS certificates are installed on every participating cluster.
- On the primary cluster:
  - Create a replication peer for each of the other clusters in the group.
  - Create a protection policy for each replication peer.
  - Create a protected path, specifying the local path that you want to replicate. When you create the protected path, add one replication stream to replicate the local path to a remote path on one of the peers.
  - After saving the protected path, edit the path to add another replication stream for another peer. Repeat editing and adding a replication stream until the protected path has a replication stream for each remote peer.
- On one of the other clusters in the group:
  - Create a replication peer for each of the other destination peers.
  - Create a protection policy for replicating to each of the other destination peers.
  - Open the group's protected path to edit it. In the Replication streams area of the dialog, verify that each of the other destination peers appears with the status Waiting for a standby stream, and add a replication stream for each destination peer by specifying the protection policies created in the previous step. Each of the new replication streams will have the role standby.
- Repeat the creation of replication peers, protection policies, and replication streams on destination peers as needed, until all destination peers have standby replication streams to all other destination peers.
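As a rough sketch of the primary-cluster portion of this workflow, for a group with two destination peers and using the CLI commands documented later in this article: the names, VIPs, peer IDs, schedule values and paths below are placeholders, and your cluster's exact parameters may differ.
vcli: admin> replicationpeer create --name primary-to-dest1 --remote-leading-vip 198.51.100.10 --local-vip-pool-id 3
vcli: admin> replicationpeer create --name primary-to-dest2 --remote-leading-vip 198.51.100.20 --local-vip-pool-id 3
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name policy-dest1 --peer-id 1
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir2 --clone-type CLOUD_REPLICATION --name policy-dest2 --peer-id 2
vcli: admin> protectedpath create --name groupdir --protection-policy-id 1 --source-dir /groupdir --target-exported-dir /groupdir-replica
The second replication stream (for policy-dest2) and the standby streams on the destination peers are then added by editing the protected path, as described later in this article.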
To add another peer to an existing replication group:
- On the primary cluster, create a replication peer and a protection policy for the new peer, and then edit the protected path to add a replication stream that replicates to the new peer.
- On the new member peer:
  - Create a replication VIP pool if needed.
  - Create a replication peer and protection policy for each of the other destination peers.
  - Open the group's protected path to edit it. In the Replication streams area of the dialog, verify that each of the other destination peers appears with the status Waiting for a standby stream, and add a replication stream for each destination peer by specifying the protection policies created in the previous step. Each of the new replication streams will have the role standby.
A replication VIP pool must be configured on each cluster that will participate in async replication.
A replication VIP pool is used exclusively for routing replication traffic between the peers and not for serving data to clients. The CNodes that are assigned VIPs from the replication VIP pool are used to communicate directly with the remote peer, while other CNodes can communicate only indirectly with the remote peer.
When you configure a replication VIP pool, you can optionally restrict it to specific CNodes.
On each replication peer, create a VIP pool dedicated to replication, as follows (a CLI sketch follows the list):
- Set the VIP pool's role to Replication.
- You can configure multiple non-consecutive VIP ranges in a replication VIP pool.
- Do not specify a domain name.
- You can dedicate one or more CNodes to the replication VIP pool.
- You can tag the replication VIP pool with a VLAN.
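The VIP pool can also be created from the VAST CLI. The command and flag names in this sketch are illustrative assumptions, not documented syntax, so verify them against the CLI help on your cluster before use; the pool name and IP range are placeholders:
vcli: admin> vippool create --name vippool_rep --role REPLICATION --start-ip 198.51.100.101 --end-ip 198.51.100.104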
VAST Cluster supports securing the replication connection with mutual TLS (mTLS) encryption, in which each replication peer cluster authenticates the other side. mTLS encryption requires certificates installed on each of the peer clusters and is used for replication peer configurations that have secure mode enabled.
To configure mTLS encryption, do the following:
- Install mTLS Certificates on each Participating VAST Cluster.
- When you create a replication peer configuration, set the secure mode setting to Secure.
Obtain an RSA-type TLS certificate from a Certification Authority (CA) for each of the peers in the replication peer configuration. This consists of a certificate file and a private key file. Obtain both files in PEM format.
Obtain a copy of the CA's root certificate, which will be used to make sure each peer can trust certificates presented by other peers. This should be the same root certificate for each peer.
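For example, if you generate the private key and certificate signing request yourself with OpenSSL (the file names and subject below are placeholders, and your CA may have its own submission process):
openssl req -new -newkey rsa:2048 -nodes -keyout replication-peer1.key -out replication-peer1.csr -subj "/CN=peer1.example.com"
Submit the CSR to the CA; when the signed certificate (for example, replication-peer1.pem) is returned, you can confirm that it chains to the shared root certificate:
openssl verify -CAfile root-ca.pem replication-peer1.pem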
- From the left navigation menu, select Settings and then Certificates to open the Certificates tab.
- From the Certificate for dropdown, select replication.
- Enter the certificate file contents in the Certificate field, the private key file contents in the Key field, and the root certificate file contents in the Root Certificate field. For each field, you can either paste the file contents or use the Upload button to upload the file.
  When pasting file contents, include the BEGIN CERTIFICATE / BEGIN PRIVATE KEY and END CERTIFICATE / END PRIVATE KEY lines, like this:
  -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----
- Click Update.
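You can also confirm, before pasting or uploading, that the certificate and private key belong to the same RSA key pair, for example with OpenSSL (file names are placeholders); the two digests printed should be identical:
openssl x509 -noout -modulus -in replication-peer1.pem | openssl md5
openssl rsa -noout -modulus -in replication-peer1.key | openssl md5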
To install the certificates using the VAST CLI, use the cluster modify command with the following parameters: --cluster-certificate, --cluster-private-key, and --root-certificate.
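For example (the file names are placeholders; check cluster modify --help to confirm whether each parameter expects a file path or the PEM contents):
vcli: admin> cluster modify --cluster-certificate replication-peer1.pem --cluster-private-key replication-peer1.key --root-certificate root-ca.pem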
This step involves establishing a connection to a remote cluster that will be the destination peer. The replication peer configuration is mirrored on the remote cluster as well.
- From the left navigation menu, select Data Protection and then Replication Peers.
- Click Create Peer.
- Complete the fields:
  - Peer Name: Enter a name for the peer configuration. The peer configuration will be mirrored on the remote cluster and have the same name on both clusters. For example: VASTmain-VASTbackup
  - Remote VIP: Enter any one of the VIPs belonging to the remote peer's replication VIP pool, to use as the leading remote VIP. The remote VIP is used to establish an initial connection between the peers. Once the connection is established, the peers share their external network topology and form multiple connections between the VIPs. If the remote peer's replication VIP pool is changed after the initial peer configuration, the new VIPs are learned automatically if the new range of IPs in the modified VIP pool intersects with the previous IP range. However, if the new IP range does not intersect with the old range, the remote VIP must be modified on the local peer.
  - Local VIP Pool: From the drop-down, select the replication VIP pool configured on the local cluster. For example: vippool_rep
  - Secure Mode: Select a secure mode for the peer:
    - Secure. Replication to this peer will be encrypted over the wire with mTLS. Secure mode requires a certificate, key, and root certificate to be uploaded to VMS for mTLS encryption.
    - None. Replication to this peer will not be encrypted over the wire.
    Caution
    This setting cannot be changed after creating the replication peer.
- Click Create.
The replication peer is created and mirrored to the remote cluster. The details are displayed in the Replication Peers page on both the local cluster and the remote cluster.
To create a replication peer via the VAST CLI, run replicationpeer create.
For example:
vcli: admin> replicationpeer create --name vastnativebackup --remote-leading-vip 198.51.100.200 --local-vip-pool-id 3
This step creates a protection policy for scheduling snapshots on a cluster and transferring them to a remote replication peer. Optionally, the policy can specify to retain the snapshots on the local cluster as well as transferring them. The protection policy is mirrored to the replication peer where it can be used for replicating in the reverse direction in the event of a failover.
- From the left navigation menu, select Data Protection and then Protection Policies.
- Click + Create Protection Policy.
- In the Add Protection Policy dialog, complete the fields:
  - If you want to make the protection policy indestructible, enable the Indestructible setting. This setting protects the policy and its snapshots from accidental or malicious deletion. For more information about indestructibility, see Keeping Indestructible Backups.
  - Set up one or more replication schedules:
    Note
    If you want to set up multiple schedules, click the Add Schedule button to display more scheduling fields in the dialog.
  - Configure local snapshot retention:
    - If you want to retain local snapshots, set the Keep local copy for period. This is the amount of time for which local snapshots are retained on the local cluster. Select a time unit from the Period dropdown and enter the number of time units in the Keep local copy for field.
    - If you do not want to keep local snapshots, leave the Keep local copy for field blank. Snapshots will be deleted immediately after they are replicated to the destination peer.
  - Set the Keep remote copy for period. This is the amount of time restore points are retained on the replication peer. Select a time unit from the Period dropdown and enter the number of time units in the Keep remote copy for field.
- Click Create.
The protection policy is created and listed in the Protection Policies page.
To create a protection policy via the VAST CLI, use the protectionpolicy create command.
For example:
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name protect-pol1 --peer-id 1
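In this example, the --schedule argument requests replication every 90 minutes starting from the given timestamp, retains local snapshots for 10 hours, and retains remote restore points for 10 days. A variant with longer retention might look like the following; the alternative interval and retention values are assumptions about accepted time units, so adjust them to values your cluster accepts:
vcli: admin> protectionpolicy create --schedule every 12h start at 2025-08-01 00:00:00 keep-local 2d keep-remote 30d --prefix Snapdir2 --clone-type CLOUD_REPLICATION --name protect-pol2 --peer-id 1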
When you have defined a protection policy for async replication to a remote peer, you can define a protected path to start replicating data from a local path.
Important
Limitations:
- Data cannot be moved into or out of a path that is protected by either async replication or S3 replication. This applies to moving files or directories from a protected path to a non-protected path, from a non-protected path to a protected path, or from one protected path to another protected path.
- Protected paths with async replication cannot be nested.
To create a protected path:
- In the left navigation menu, select Data Protection and then Protected Paths.
- On the Protected Paths tab, click + Create Protected Path.
- In the Add Protected Path dialog, complete the fields:
  - Name: Enter a name for the protected path.
  - Local Path: The path you want to back up. A snapshot of this directory will be taken periodically according to the protection policy.
    Note
    - If you specify '/' (the root directory), this includes data written via S3.
    - To specify the path to a specific S3 bucket named bucket, enter /bucket.
  - Protection policy: Select a protection policy from the dropdown.
    Warning
    After creating a replication stream, it is not possible to change which policy is associated with the replication stream. All changes to a stream's snapshot schedule, replication schedule, and snapshot expiration must be made by modifying the protection policy. Those modifications affect all replication streams that use the same protection policy. To work around this limitation, create only one replication stream per protected path.
  - (Remote peer): This field is filled automatically with the remote peer specified in the protection policy.
  - Remote path: Specify the directory on the remote peer where the data should be replicated. This must be a directory that does not yet exist on the remote peer.
    Tip
    You cannot use "/" as the remote path because it always already exists. If you want to replicate all data under the root directory, replicate it to a subdirectory on the peer, for example mirror/.
  - Remote tenant: This field appears only if the remote peer has more than one tenant. If it appears, select a tenant on the remote peer from the dropdown. The remote path will be created on the selected tenant.
- Click Create.
The protected path is created and listed in the Protected Paths tab.
Note
If the remote peer is running a version of VAST Cluster earlier than 4.7, no further replication streams can be added to the protected path. If the remote peer is running VAST Cluster 4.7, you can add more replication streams to the protected path.
To create a protected path via the VAST CLI, use the protectedpath create command.
For example:
vcli: admin> protectedpath create --name backupthisdir --protection-policy-id 1 --source-dir / --target-exported-dir /backup
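Similarly, a hypothetical example protecting a single S3 bucket named bucket1 instead of the whole root directory (the policy ID and remote directory are placeholders):
vcli: admin> protectedpath create --name bucket1backup --protection-policy-id 2 --source-dir /bucket1 --target-exported-dir /backup/bucket1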
If you are configuring group replication, you need to add multiple replication streams to the protected path on the primary cluster. When you first create the protected path, you can add one replication stream. To add each additional replication stream, you need to edit the protected path and then add one stream.
Note
Group members must all be running VAST Cluster 4.7.
- In the Protected Paths page, open the Actions menu for the protected path and select Edit.
- Select a Sync Point Guarantee using the dropdowns provided. This ensures a minimal duration since the last sync point between the destination peers in the group. A sync point is a snapshot that is shared between the peers in the replication group.
- In the Update Protected Path dialog, under Add Replication Stream, enter the following:
  - Protection policy: Select the protection policy that is configured for the remote peer that you want to add. The Remote peer field is filled with the remote peer from the protection policy.
  - Remote path: Specify the path on the remote peer to which you want the stream to replicate the data from the protected path. The path you specify must be to a directory that does not yet exist on the remote peer.
  - Remote tenant: This field appears only if the remote peer has more than one tenant. Select the tenant on the remote peer where you want to create the remote path.
- Click Update.
When you remove a replication stream from a protected path, VMS removes any associated standby stream(s) on destination clusters.
- On the primary cluster, in the Protected Paths page, open the Actions menu for the protected path and select Edit.
- In the Replication Streams area of the Update Protected Path dialog, click the remove button for the stream that you want to remove.
- Click Update.