The crux of the configuration for native replication is the protection policy, which determines the following parameters:
-
A schedule. You will set the timing and frequency for creating snapshots on the source peer. Provided that replicating a snapshot does not take longer than the snapshot interval, each snapshot will be replicated to the destination peer.
-
A retention period for local snapshots. You can either keep the snapshots on the source peer for a chosen time period or prune them immediately after replication.
-
A retention period for snapshots on the destination peer.
Important
Due to a limitation, it is not possible to change which protection policy controls a given protected path after the path is created. As a workaround, it is advisable where possible to create a dedicated protection policy per protected path, given that:
-
Changes to the snapshot and replication schedules and snapshot retention times of a protected path can be made only by modifying the associated protection policy.
-
Modifying a protection policy affects all associated protected paths.
-
You may need to modify a policy that controls only a single protected path, for example, to time a replication to complete prior to a failover.
You will need to consider how to set these parameters to best meet your objectives. The following are points to consider:
-
Recovery Point Objective (RPO). In a worst-case scenario of disastrous loss of the primary cluster, data that was not yet replicated to a native replication peer would be lost in an ungraceful failover. When scheduling native replication, consider the maximum window of writes to the protected path that you could tolerate losing. To ensure that data loss in such an ungraceful failover scenario would not exceed that tolerable delta, take care that the time from the creation of each snapshot on the source peer until the completion of the equivalent restore point on the destination peer stays below that limit. This time is a function of the frequency you set in the protection policy and of the replication rate, which depends on your connection bandwidth as well as the size of the delta captured in each snapshot. (See the worked sketch after this list.)
-
Recovery Time Objective (RTO). This is the time taken to recover operations following a failover event, such as the destruction of the primary cluster in a disaster. Consider how RTO might be affected in each type of failover:
-
In a graceful failover, neither peer is writeable during the delta sync between the last restore point on the destination and the latest writes on the source. Therefore, the greater the discrepancy that is allowed to build up between the source and destination paths, the longer the failover will take. You can minimize this downtime prior to a planned graceful failover by adjusting the snapshot schedule in the short term so that the delta between the latest completed restore point on the destination peer and the latest data written to the source peer is minimal.
-
Since an ungraceful failover takes place without syncing any data, the duration of the failover event itself is not affected by the snapshot schedule.
Note
Recovering operations after failover also requires ensuring that the VMS configuration is replicated as needed and client applications are connected to the cluster. For more information, see Deploying a Failed Over Replication Peer as a Working Cluster.
-
-
Capacity usage. A shorter retention period on the destination peer prunes older snapshots sooner, preserving capacity on the destination peer for frequent snapshots. For failover purposes, only the most recent restore point on the destination is used and needed. However, following a failover event in which a backup cluster becomes the primary cluster, any older replicated snapshots effectively become local backups.
Note
Each snapshot contains only the changes to the working data since the last snapshot was taken.
-
Snapshot limit per cluster. A cluster is limited to 1000 snapshots. New snapshots are not created if the limit would be exceeded. Therefore, set the expiration and the schedule in all protection policies in use on all protected paths so that the total number of snapshots on each cluster stays below the limit (see the sketch after this list). You can have up to 128 protected paths per cluster and up to 64 different protection policies.
Caution
You cannot change which protection policy is used on a given protected path. Any changes you make to a protection policy affect all protected paths that use the same policy.
-
Efficiency. A higher snapshot frequency may capture writes that would otherwise cancel each other out over time, and hence consume more bandwidth and capacity.
-
Bandwidth. The bandwidth of your network connection affects how fast a restore point can be completed on the destination peer. If the bandwidth is too low to keep up with the frequency you set, snapshots will be skipped.
-
Performance. Replication can impact the performance of regular data IOs.
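As an illustration of the RPO and bandwidth considerations above, the following sketch estimates the worst-case data loss window from the snapshot interval and an estimated replication time. It is not product code, and the bandwidth, delta size, and interval values are hypothetical examples to be replaced with your own estimates.

snapshot_interval_s = 15 * 60          # hypothetical policy frequency: every 15 minutes
expected_delta_gib = 40                # hypothetical data written per interval
link_bandwidth_gib_per_s = 0.125       # hypothetical ~1 Gbit/s of usable replication bandwidth

# Time to replicate one snapshot's delta to the destination peer.
replication_time_s = expected_delta_gib / link_bandwidth_gib_per_s

# If replication takes longer than the interval, snapshots will be skipped
# and the replication lag (and potential data loss) grows.
keeps_up = replication_time_s <= snapshot_interval_s

# Approximate worst case in an ungraceful failover: writes since the last
# snapshot, plus the time that snapshot takes to become a restore point
# on the destination peer.
worst_case_loss_window_s = snapshot_interval_s + replication_time_s

print(f"replication time per snapshot: {replication_time_s:.0f} s")
print(f"schedule sustainable: {keeps_up}")
print(f"worst-case data loss window (approx. RPO): {worst_case_loss_window_s / 60:.1f} min")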
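Similarly, the snapshot limit consideration can be checked with a rough steady-state calculation: the number of snapshots a policy keeps on a cluster is roughly the retention period divided by the snapshot interval, summed over all protected paths using that policy. The values below are hypothetical, and the sketch ignores snapshots created for other purposes on the same cluster.

# Rough steady-state snapshot count for one protection policy.
# All values are hypothetical examples.
interval_s = 90 * 60            # snapshot every 90 minutes
keep_local_s = 10 * 3600        # keep local copies for 10 hours
protected_paths = 16            # protected paths using this policy

snapshots_per_path = keep_local_s // interval_s + 1   # retained at any moment
total_snapshots = snapshots_per_path * protected_paths

CLUSTER_SNAPSHOT_LIMIT = 1000
print(f"~{total_snapshots} snapshots at steady state "
      f"(limit {CLUSTER_SNAPSHOT_LIMIT}): "
      f"{'OK' if total_snapshots <= CLUSTER_SNAPSHOT_LIMIT else 'over limit'}")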
The steps in the following workflow complete the configuration of replication between two peers. Further steps are needed to prepare for a smooth failover or to perform a failover. See Deploying a Failed Over Replication Peer as a Working Cluster.
Follow this workflow to configure native replication:
-
Native Replication: Configure Replication VIP Pools. A dedicated VIP pool must be created on each of the peer clusters. You can use this VIP pool to control which CNodes are used to replicate data between the peers, although this is not mandatory.
-
Native Replication: Create a Replication Peer. This is the configuration of replication to another cluster. The peer configuration is mirrored on the destination peer. You can have one replication peer per cluster.
-
Native Replication: Create a Protection Policy. This is a policy governing the schedule and retention parameters for replicating data to the configured peer.
-
Native Replication: Create (a) Protected Path(s). This defines a data path on the cluster to replicate, the destination path on the peer and the protection policy. You can create multiple protected paths using the same protection policy and replication peer. On the remote peer, you can also set up multiple protected paths with the local peer as the destination. In other words, replication can be set up in both directions between a pair of peers.
You need to configure a replication VIP pool on each one of the two clusters that will be configured as a pair of replication peers. One replication VIP pool is supported on each peer, comprising one continuous range of IPs.
A replication VIP pool is used exclusively for routing replication traffic between the peers and not for serving data to clients. The CNodes that are assigned VIPs from the replication VIP pool are used to communicate directly with the remote peer, while other CNodes can communicate only indirectly with the remote peer.
When you configure a replication VIP pool, you can optionally restrict it to specific named CNodes.
On each peer:
-
From the left navigation menu, select Network Access and then Virtual IP Pools.
-
Click +Create VIP Pool.
-
Complete the fields as follows:
Field
Description
Name
Enter a name for the replication VIP Pool.
Gateway IP
Enter the IP address of the local switch or gateway device through which to route traffic to and from the cluster. If the two peers are on the same subnet, you can leave this blank.
Start IP (required)
Enter the IP at the start of the range.
The limit for the pool is 256 IP addresses.
End IP (required)
Enter the IP at the end of the range.
Subnet CIDR (required)
Specify the subnet in Classless Inter-Domain Routing (CIDR) notation.
In CIDR notation, the subnet is expressed as the number of bits of each IP address that represent the subnet address. For example, the subnet mask 255.255.255.0 is expressed as 24 in CIDR notation.
VLAN
If you want to tag the VIP pool with a specific VLAN on the data network, enter the VLAN number (0-4096). See also Tagging VIP Pools with VLANs.
Domain Name
Leave this blank. This field is used for protocol VIP pools.
Role
Set this to Replication.
CNodes
If you want to dedicate a group of CNodes to the replication VIP pool, open the drop-down and select all the CNodes you want to include in the group. You must specify at least two CNodes to provide high availability. The VIPs in this pool will be distributed only among the selected CNodes. No other CNodes will be used to route replication traffic directly to the remote peer.
We recommend that the number of VIPs in the pool should not exceed the number of dedicated CNodes.
-
Click Create.
The VIP Pool is created.
Use the vippool create command. Set --role to REPLICATION. Do not specify a domain name.
For example:
vcli: admin> vippool create --start-ip 203.0.113.2 --end-ip 203.0.113.5 --subnet-cidr 24 --gw-ip 203.0.113.1 --name rep-vippool --role REPLICATION
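Before creating the replication VIP pool, it can help to sanity-check the planned range against the limits described above. The sketch below is not part of the VAST CLI; it uses Python's standard ipaddress module with the example addresses from the command above, and the CNode count is a hypothetical assumption.

import ipaddress

# Example values from the vippool create command above.
start_ip = ipaddress.ip_address("203.0.113.2")
end_ip = ipaddress.ip_address("203.0.113.5")
subnet = ipaddress.ip_network("203.0.113.0/24")    # Subnet CIDR 24
gateway = ipaddress.ip_address("203.0.113.1")
dedicated_cnodes = 4                                # hypothetical

num_vips = int(end_ip) - int(start_ip) + 1

assert start_ip in subnet and end_ip in subnet, "VIP range must lie within the subnet"
assert gateway in subnet, "gateway should be on the same subnet"
assert num_vips <= 256, "a replication VIP pool is limited to 256 addresses"
if num_vips > dedicated_cnodes:
    print("warning: more VIPs than dedicated CNodes; consider a smaller range")

print(f"{num_vips} VIPs in the replication pool")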
This step involves establishing a connection to a remote cluster that will be the destination peer. The replication peer configuration is mirrored on the remote cluster as well.
-
From the left navigation menu, select Data Protection and then Replication Peers.
-
Click Create Peer.
-
Complete the fields:
Field
Description
Peer Name
Enter a name for the peer configuration. The peer configuration will be mirrored on the remote cluster and have the same name on both clusters.
For example: VASTmain-VASTbackup
Remote VIP
Enter any one of the VIPs belonging to the remote peer's replication VIP pool to use as the leading remote VIP.
The remote VIP is used to establish an initial connection between the peers. Once the connection is established, the peers share their external network topology and form multiple connections between the VIPs.
If the remote peer's replication VIP pool is changed after the initial peer configuration, the new VIPs are learned automatically if the new range of IPs in the modified VIP pool intersects with the previous IP range. However, if the new IP range does not intersect with the old range, the remote VIP must be modified on the local peer. (An intersection check is sketched after the CLI example below.)
For example: 198.51.100.200
Local VIP Pool
From the drop-down, select the replication VIP Pool configured on the local cluster.
For example: vippool_rep
-
Click Create.
The replication peer is created and mirrored to the remote cluster. The details are displayed in the Replication Peers page on both the local cluster and the remote cluster.
To create a replication peer via the VAST CLI, run replicationpeer create.
For example:
vcli: admin> replicationpeer create --name vastnativebackup --remote-leading-vip 198.51.100.200 --local-vip-pool-id 3
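As noted above for the Remote VIP field, new VIPs on the remote peer are learned automatically only if the modified VIP pool range intersects the previous range. The following sketch illustrates that intersection test with hypothetical ranges; it is plain Python for illustration, not product code.

import ipaddress

def ranges_intersect(old_start, old_end, new_start, new_end):
    """Return True if two inclusive IP ranges share at least one address."""
    a1, a2 = int(ipaddress.ip_address(old_start)), int(ipaddress.ip_address(old_end))
    b1, b2 = int(ipaddress.ip_address(new_start)), int(ipaddress.ip_address(new_end))
    return max(a1, b1) <= min(a2, b2)

# Hypothetical example: the remote peer's replication VIP pool is changed.
overlaps = ranges_intersect("198.51.100.200", "198.51.100.203",
                            "198.51.100.202", "198.51.100.210")
print("new VIPs learned automatically" if overlaps
      else "update the Remote VIP on the local peer")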
In this step, you'll create a protection policy for scheduling snapshots on the local cluster and transferring them to the replication peer. Optionally, the policy can retain the snapshots on the local cluster in addition to transferring them. The protection policy is mirrored to the replication peer, where it can be used for replicating in the reverse direction in the event of a failover.
-
From the left navigation menu, select Data Protection and then Protection Policies.
-
Click + Create Protection Policy.
-
In the Add Protection Policy dialog, complete the fields:
-
Configure a schedule:
-
The scheduling fields provided enable you to set one frequency period and start time. If you want to configure more than one frequency and start time, you can add additional lines by clicking the Add Schedule button.
-
To set a frequency period, select seconds, minutes, hours or days from the Period dropdown and enter the number of units in the Every field.
Note
The minimum interval is 15 seconds.
-
To set the start time, click in the Start at field; a calendar appears. Click the start date you want in the calendar and adjust the time:
Note
When a protected path is active, it performs an initial data sync to the replication peer immediately after being created. The initial sync creates the first restore point. Subsequent restore points are created only after the initial sync is complete. If a restore point is created on the start date, it is the second restore point.
-
-
Configure local snapshots policy:
-
If you do not want the policy to keep local snapshots, leave the keep local copy for field blank. Snapshots will be deleted immediately after they are replicated to the destination peer.
-
If you want the policy to retain local snapshots, set the keep local copy for period. This is the amount of time for which local snapshots will be retained on the local cluster.
Select Seconds, Minutes, Hours or Days from the Period dropdown and then enter the number in the keep local copy for field.
-
-
Set the keep remote copy for period. This is the amount of time restore points are retained on the destination peer. Select Seconds, Minutes, Hours or Days from the Period dropdown and then enter the number in the keep remote copy for field.
-
Click Create.
The policy is created and listed in the Protection Policies page. It's also mirrored to the remote cluster defined in the replication peer configuration.
To create a protection policy via the VAST CLI, use the protectionpolicy create command.
For example:
vcli: admin> protectionpolicy create --schedule every 90m start at 2025-07-27 20:10:35 keep-local 10h keep-remote 10d --prefix Snapdir1 --clone-type CLOUD_REPLICATION --name protect-pol1 --peer-id 1
When you have defined a protection policy for native replication, in which a native replication peer is specified, you can define one or more protected paths to start replicating data at specific paths.
Important
Limitations:
-
Data cannot be moved into or out of a path that is protected by either native replication or S3 replication. This applies to moving files or directories from a protected path to a non-protected path, from a non-protected path to a protected path or from one protected path to another protected path.
-
Protected paths with native replication cannot be nested.
-
Protected paths are limited to 128 per cluster, of which 16 can be configured with native replication.
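The nesting limitation above can be checked before you create a new protected path. The sketch below is a plain Python illustration (not a VAST API or CLI call) of how to test whether a candidate path is nested within, or contains, an already protected path; the example paths are hypothetical.

from pathlib import PurePosixPath

def is_nested(candidate: str, existing_paths: list[str]) -> bool:
    """True if candidate is inside, or contains, any existing protected path."""
    cand = PurePosixPath(candidate)
    for existing in existing_paths:
        ex = PurePosixPath(existing)
        if cand == ex or ex in cand.parents or cand in ex.parents:
            return True
    return False

# Hypothetical existing protected paths on the cluster.
protected = ["/projects/alpha", "/archive"]

print(is_nested("/projects/alpha/data", protected))  # True: nested under /projects/alpha
print(is_nested("/projects/beta", protected))        # False: independent path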
-
In the left navigation menu, select Data Protection and then Protected Paths.
-
On the Replication Paths tab, click + Create Protected Path.
-
In the Add Protected Path dialog, complete the fields:
Field
Description
Path Name
Enter a name for the protected path.
Choose a policy
Select the protection policy that you created for native replication from the dropdown.
Warning
After creating the protected path, it is not possible to change which policy is associated with the protected path. All changes to a protected path's snapshot schedule, replication schedule, and snapshot expiration must be made by modifying the protection policy. Those modifications affect all protected paths that use the same protection policy. To work around this limitation, create a protection policy per protected path. For more information about the need for this workaround in native replication, see Designing a Native Replication Protection Policy.
Path
The path you want to back up. A snapshot of this directory will be taken periodically according to the protection policy.
Note
-
If you specify '/' (the root directory), this includes data written via S3.
-
To specify a path to a specific S3 bucket with name bucket, enter /bucket.
Path on peer
Specify the directory on the native replication peer where the data should be replicated. This must be a directory that does not yet exist on the native replication peer.
Tip
You cannot use "/" as the path on peer because that directory always exists already. Therefore, if you want to replicate all data under the root directory, replicate it to a subdirectory, for example, path on peer = "mirror/".
-
-
Click Create.
The protected path is created and listed in the Protected Paths tab.
Use the protectedpath create command.
For example:
vcli: admin> protectedpath create --name backupthisdir --protection-policy-id 1 --source-dir / --target-exported-dir /backup