VAST Cluster listens for requests on virtual IP endpoints, called Virtual IPs (VIPs). These endpoints serve all data traffic for all protocols (NFSv3, NFSv4.1, SMB, and S3).
To configure the VIPs, you need to create VIP pools. VIP pools are ranges of IP addresses that VAST Cluster can use to listen for data traffic. All VIPs in a configured VIP pool are distributed evenly among all active CNodes or a group of CNodes. If a CNode fails, the VIPs assigned to it are automatically moved to other active CNodes, ensuring that clients can continue to connect to stable IP addresses.
This model provides load balancing and transparent failover among the CNodes.
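The distribution and failover behavior can be pictured with a short sketch. This is illustrative Python only, not VAST code; the pool addresses and CNode names are made up:

```python
from itertools import cycle

def distribute_vips(vips, cnodes):
    """Spread a VIP pool evenly across the active CNodes, round-robin."""
    assignment = {cnode: [] for cnode in cnodes}
    for vip, cnode in zip(vips, cycle(cnodes)):
        assignment[cnode].append(vip)
    return assignment

# A pool of 8 VIPs spread over 4 CNodes: 2 VIPs each.
vips = [f"172.16.0.{i}" for i in range(1, 9)]
cnodes = ["cnode-1", "cnode-2", "cnode-3", "cnode-4"]
print(distribute_vips(vips, cnodes))

# If cnode-2 fails, the same pool is redistributed over the remaining
# CNodes, so clients keep connecting to the same stable VIP addresses.
print(distribute_vips(vips, ["cnode-1", "cnode-3", "cnode-4"]))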
You can choose to limit a VIP pool to a specific group of CNodes. See Limiting Views to Specific VIP Pools.
Clients can mount VAST Cluster views using DNS names. Each client is allocated a single VIP per mount. The distribution is handled most easily by the VAST Cluster DNS server; configuring it simplifies the configuration needed on your external DNS server. Alternatively, you can configure an external DNS server to handle all DNS forwarding to VIPs. With either alternative, VIPs are allocated in a round-robin scheme. You can configure multiple VIP pools and set a different domain name per VIP pool. For further information, see DNS-Based VIP Distribution.
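The round-robin behavior is visible from the client side: repeated lookups of a pool's domain name return different VIPs. A quick illustrative check in Python; the domain name below is hypothetical, and depending on resolver caching you may need to query the DNS server directly to see the rotation:

```python
import socket

# Hypothetical domain name configured for a VIP pool on the
# VAST Cluster DNS server (or delegated from an external DNS server).
DOMAIN = "vast.example.com"

# Each resolution should hand back a VIP chosen round-robin from the
# pool, so successive mounts land on different CNodes.
for _ in range(4):
    print(socket.gethostbyname(DOMAIN))
```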
VAST Data recommends a minimum of two VIPs per CNode. For optimal load balancing, we encourage four VIPs per CNode for clusters with one CBox (four CNodes), and four or more VIPs per CNode for larger clusters.
When determining how many VIPs to configure, the following considerations apply:
- Since each CNode has two ports listening for data traffic, there should be at least two VIPs available to each CNode.
- It is desirable for the number of VIPs to be evenly divisible by the number of CNodes. That way, when all CNodes are running, the VIPs are spread evenly.
- More VIPs improve load balancing after a failure. If a CNode with only one VIP fails, that VIP moves to a single active CNode, doubling that CNode's work. If the failed CNode had more than one VIP, each VIP can move to a different active CNode. Therefore, the more VIPs there are per CNode, the better the system can balance load on failure: with one VIP per CNode, 100% of the failed CNode's load moves to one other CNode; with two VIPs, 50% moves to each of two CNodes; with three VIPs, 33% to each of three, and so on. (See the sketch after this list.)
- If the cluster has four or more CBoxes, the management CNode (the CNode that runs VMS) is not assigned VIPs.
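These sizing considerations reduce to simple arithmetic. A rough sketch, with made-up counts:

```python
def failover_load_shift(vips_per_cnode):
    """Fraction of a failed CNode's load that lands on each CNode that
    inherits one of its VIPs, assuming each VIP moves to a different
    CNode and the VIPs carry roughly equal load."""
    return 1.0 / vips_per_cnode

for n in (1, 2, 3, 4):
    print(f"{n} VIP(s) per CNode -> {failover_load_shift(n):.0%} "
          "of the failed CNode's load per inheriting CNode")

def pool_size(cnodes, vips_per_cnode=4):
    """A pool size that divides evenly among the CNodes."""
    return cnodes * vips_per_cnode

print(pool_size(cnodes=4))  # one CBox at 4 VIPs per CNode: 16 VIPs
```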
You can limit a VIP pool to a specific group of CNodes, in order to dedicate those CNodes to a specific set of hosts or applications.
You can limit the set of VIP pools that can access a given view. This is configured in the view policy.
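Conceptually, the view policy acts as an allow-list of VIP pools for the view. A minimal illustrative check, not VAST's implementation; the pool and policy names are made up:

```python
# Hypothetical view policy: the view is reachable only through
# the listed VIP pools.
view_policy = {"allowed_vip_pools": {"pool-hosts-a", "pool-hosts-b"}}

def request_permitted(arriving_pool, policy):
    """Allow the request only if the VIP it arrived on belongs to one
    of the VIP pools permitted by the view policy."""
    return arriving_pool in policy["allowed_vip_pools"]

print(request_permitted("pool-hosts-a", view_policy))  # True
print(request_permitted("pool-backup", view_policy))   # False
```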
You can use VLAN tagging to control which VIP pools are accessible on specific VLANs on your data network.
VMS runs on one of the CNodes in the cluster. In the event that the CNode hosting VMS fails, VMS is moved to another CNode. VIP pools have an optional setting, VMS preferred, which you can use to configure a preferred domain for VMS election. Enabling VMS preferred on a given VIP pool specifies that the CNodes participating in the VIP pool belong to a VMS-preferred domain.
If a VMS-preferred domain is configured and the VMS CNode becomes unavailable, VMS moves to one of the CNodes in the VMS-preferred domain. If all CNodes in the VMS-preferred domain are offline, VMS starts on a non-VMS-preferred CNode and then moves to a VMS-preferred CNode once one is active and stable.
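The election order described above can be sketched as follows. This is illustrative Python, not the actual VMS failover code; the CNode names are made up:

```python
def elect_vms_host(online_cnodes, preferred_domain):
    """Pick a CNode to host VMS: prefer an online CNode from the
    VMS-preferred domain, otherwise fall back to any online CNode.
    (VMS later moves back once a preferred CNode is active and stable.)"""
    preferred_online = [c for c in online_cnodes if c in preferred_domain]
    if preferred_online:
        return preferred_online[0]
    if online_cnodes:
        return online_cnodes[0]
    raise RuntimeError("no CNode available to host VMS")

preferred = {"cnode-1", "cnode-2", "cnode-3"}
print(elect_vms_host(["cnode-2", "cnode-5"], preferred))  # cnode-2
print(elect_vms_host(["cnode-4", "cnode-5"], preferred))  # cnode-4 (fallback)
```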
In clusters with fewer than 16 CNodes, the CNode that hosts VMS can concurrently host one or more VIPs from VIP pools. In clusters with 16 or more CNodes, one CNode is dedicated to VMS; when VMS moves to a CNode in such a cluster, the VIPs on that CNode are redistributed to other CNodes.
Therefore, in larger clusters, if a VIP pool is configured with a CNode group and VMS moves to one of the CNodes in that group, the VIP pool temporarily loses that CNode.
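For clusters of 16 or more CNodes, the effect on a CNode group can be sketched like this; the counts and names are made up:

```python
def serving_cnodes(cnode_group, vms_host, cluster_size):
    """CNodes in a VIP pool's group that can serve VIPs. In clusters of
    16+ CNodes the VMS host is dedicated to VMS, so it temporarily drops
    out of the group and its VIPs move to the remaining members."""
    if cluster_size >= 16 and vms_host in cnode_group:
        return [c for c in cnode_group if c != vms_host]
    return list(cnode_group)

group = ["cnode-7", "cnode-8", "cnode-9"]
print(serving_cnodes(group, vms_host="cnode-8", cluster_size=16))
# ['cnode-7', 'cnode-9'] -- the pool loses cnode-8 while it hosts VMS
```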
Note
VMS preferred cannot be enabled on VIP pools that have a CNode group with fewer than three CNodes.