VAST Cluster deployments vary in architecture, scale, topology and networking. Each installation is carefully planned by one of our Sales Engineers before the equipment is shipped to your premises. This guide provides an overview of the options that may be incorporated into your installation plan.
Each installation comprises the following components:
- One or more CBoxes. Each CBox is populated with four servers called CNodes (compute nodes), which run the vast majority of the cluster logic.
- One or more DBoxes. A DBox contains the storage media (SSDs) and servers called DNodes, which provide access to the resident media.
- One or more pairs of data switches. The switches provide interconnectivity between the CNodes and the DNodes (the backend network), and also provide connectivity between the cluster and the client data network (the frontend). The number of switch pairs depends on the number of DBoxes and CNodes.
- Optionally, a management switch, which provides management access on an isolated management network.
The CBox and DBox are matched to provide ideal performance for most workloads when deployed in a one-to-one ratio; however, since they are separate modules, they can also be combined in other proportions.
The hardware components vary in vendor and configuration. These are some of the types offered:
The DBox comes in two families of models, CERES and Mavericks, with different form factors and redundancy architectures to support different deployments. Specific model and hardware variants are selected by your system engineer for your environment.
The Mavericks DBox has two controller servers called DNodes that are each connected to all SSDs in the enclosure. The CERES DBox has two canisters called DTrays, each containing two DNodes. In each DTray, each of the two DNodes is connected to half of the SSDs in the enclosure.
Typical variants are shown in this table:
| Product Family | Model | Rack Height | Number of DTrays | Number of DNodes | NICs per DNode | Infrastructure Type |
|---|---|---|---|---|---|---|
| Ceres | Ceres 330TB | 1U | 2 | 4 (2 per DTray) | One dual-port NIC | Ethernet or EDR InfiniBand |
| Mavericks | Mavericks 1350TB | 2U | 0 | 2 | Two single-port NICs | Ethernet or EDR InfiniBand |
| Mavericks | Mavericks 676TB | 2U | 0 | 2 | One dual-port NIC | Ethernet or EDR InfiniBand |
| Mavericks | Mavericks 676TB | 2U | 0 | 2 | Two single-port NICs | Ethernet or EDR InfiniBand |
The CBox is a 2U quad-server chassis with up to four VAST servers (CNodes). CBox hardware varies to accommodate different network topologies and infrastructure, and your CBoxes are selected by your system engineer to meet your needs. CNodes may be provided with one dual-port NIC or with two dual-port NICs each, and may support connection to Ethernet infrastructure, to InfiniBand infrastructure, or to a combination of the two.
Take note of whether your CBoxes are Ice Lake, Cascade Lake, or Broadwell models, since some instructions vary between these models.
The data switches are offered in the following variants:

| Description | Ports per Switch | Part Number | Infrastructure Type | Rack Height |
|---|---|---|---|---|
| 16-port NVMe Fabric Ethernet Switch Pair | 16 x 100GbE | ETH-NVMEF-2X16 | Ethernet | 1U (each switch 1/2 width) |
| 32-port NVMe Fabric Ethernet Switch | 32 x 100GbE | ETH-NVMEF-1X32 | Ethernet | 1U each |
| 64-port NVMe Fabric Ethernet Switch | 64 x 100GbE | ETH-NVMEF-1X64 | Ethernet | 2U each |
| 40-port NVMeF InfiniBand Switch (HDR) | 40 x HDR 200Gb/s | HDR-NVMEF-1X40 | InfiniBand | 1U each |
Topology is planned for each installation and a cabling scheme follows the topology design. Typically, the topology is designed to incorporate redundancy and to accommodate the network infrastructure of the client network. The following usually apply but variations are possible:
- Each CNode and DNode is connected to each of a pair of redundant switches.
- The two switches in each redundant pair are connected to each other by a redundant pair of inter-peer links (IPL), over which an MLAG interface is configured.
- Where the cluster has a single pair of switches, each switch is connected to every CNode and to every DNode.
- Where there is more than one pair of switches, switches are assigned leaf and spine roles. Each CNode and DNode is connected to a pair of leaf switches. All spine switches are connected to all leaf switches and are not connected directly to CNodes and DNodes.
- In a typical single switch pair topology, each switch is connected to the client network's switches (frontend) with sufficient uplink bandwidth to support full throughput in the event that the other switch fails.
- In a typical leaf-spine topology, each of the pair of spine switches is connected to the external client network with sufficient uplink bandwidth to support full throughput in the event that the other spine switch fails (see the sizing sketch after the note below).
- In some topologies, CNodes may be directly connected to the client network's switches (frontend). Dual-NIC CNodes can also be connected simultaneously to two different client networks, via the left and right ports of one of the two NICs.
Note
In the case of simultaneous connection of two client networks to the same CNodes, installation cannot be done using the Easy Install utility.
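The uplink redundancy rule above lends itself to a simple sizing calculation. The following is a minimal sketch of that arithmetic, assuming a hypothetical frontend throughput target and 100 GbE uplinks; substitute the figures from your own installation plan.

```python
import math

def uplinks_per_switch(target_throughput_gbps: float,
                       uplink_speed_gbps: float = 100.0) -> int:
    """Uplinks each switch (or spine) needs so that one switch alone can
    carry the full frontend throughput if its peer fails."""
    return math.ceil(target_throughput_gbps / uplink_speed_gbps)

# Example: a 400 Gb/s frontend target over 100 GbE uplinks requires at
# least 4 uplinks on each switch of the redundant pair.
print(uplinks_per_switch(400))  # -> 4
```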
During switch configuration, the switch ports are assigned roles in accordance with the planned network topology. The port assignments are identical on each switch in each switch pair. When there is more than one switch pair, each pair is configured with a pair index. The cluster must be cabled according to the configured pair indexing and port roles.
For the two common switch topologies, ports are assigned as follows:
- Single switch pair topology. There is only one pair of switches, and the port assignment layout is identical on both switches. Both switches connect to the client data network. Ports are assigned as:
  - CNode ports, which are dedicated for connection to CNodes. These are typically configured as split ports that split a 100 GbE port for connection to two 50 GbE CNode ports. A split cable is used.
  - DNode ports, which are dedicated for connection to DNodes.
  - External ports, which are dedicated to external LAGs connected to the client network.
  - IPL ports, which are dedicated to the connection with the other switch in the redundant pair. This connection is used to configure an MLAG interface between the two switches.
- Spine-leaf topology. Each switch pair is assigned the role of either spine or leaf. Spine switch pairs are usually used for uplinks to the client network, while leaf switch pairs typically have no external connections and connect to the CNodes and DNodes. Ports are typically assigned as follows:
  - On leaf pairs:
    - CNode ports, which are dedicated for connection to CNodes. These are typically configured as split ports that split a 100 GbE port for connection to two 50 GbE CNode ports. A split cable is used.
    - DNode ports, which are dedicated for connection to DNodes.
    - ISL ports, which are dedicated to connection to the spine switches.
  - On spine pairs:
    - External ports, which are dedicated to external LAGs connected to the client network.
    - IPL ports, which are dedicated to the connection with the other switch in the redundant pair. This connection is used to configure an MLAG interface between the two switches.
    - ISL ports, which are dedicated to connection to the leaf switches.
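Since port assignments are identical on both switches of a pair, the planned roles can be recorded once per pair. The following is a purely hypothetical illustration of such a record for a single switch pair topology; the port names and counts are invented for the example, and the real assignments come from your installation plan and switch configuration.

```python
# Hypothetical port-role plan for one switch pair (single switch pair topology).
# Port names and counts are illustrative only.
port_roles = {
    "cnode":    ["swp1", "swp2", "swp3", "swp4"],   # split ports: 100 GbE -> 2 x 50 GbE
    "dnode":    ["swp5", "swp6"],
    "external": ["swp29", "swp30"],                 # LAGs to the client network
    "ipl":      ["swp31", "swp32"],                 # inter-peer link for the MLAG
}

# The same layout applies to both switches in the pair.
for role, ports in port_roles.items():
    print(f"{role}: {', '.join(ports)}")
```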
The following networks transport different types of traffic throughout every VAST Cluster:
The management network is dedicated to the VAST Management System (VMS). By default, the management network is out of band, which means it is isolated from the data network. Optionally, the management network can reside on the data network instead.
Each CNode and DNode in a VAST Cluster has an Intelligent Platform Management Interface (IPMI). IPMI is a low-level system management and monitoring interface that is independent of the VAST operating system. It may be used in VAST Support scenarios to take actions such as powering cluster components on and off. Each IPMI is configured with an interface that resides, by default, on the management network. There are also alternative options for configuring IPMI connectivity.
IP details for the management network are supplied during Easy Install to enable management access to the cluster. Both IPv4 and IPv6 are supported.
The management network configuration comprises:
| IP | Easy Install Fields |
|---|---|
| Default gateway | |
| Subnet mask | |
| A management IP for each CNode and DNode. | CNode management external IP pool, DNode management external IP pool |
| A management IP for each switch. | Switches External IPs |
| A single virtual IP for the VAST Management System (VMS). | Management VIP |
| A single virtual IP per switch pair. Relevant when there is a need to configure an MLAG interface between two switches. | Switch VIP |
| (Optional) An IP for each IPMI on each CNode and DNode. This is only if IPMI is to reside on the management network. If IPMI will reside on the internal network, this setting should not be used. | CNodes IPMI pool, DNodes IPMI pool |
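As an informal planning aid for the fields above, the sketch below carves an assumed management subnet into the required pools and VIPs using Python's standard ipaddress module. The subnet, gateway position, and pool offsets are assumptions for illustration; only the resulting values are entered in Easy Install.

```python
import ipaddress

# Assumed management subnet and component counts; replace with your own plan.
mgmt_net = ipaddress.ip_network("10.1.10.0/24")
num_cnodes, num_dnodes, num_switches = 8, 4, 2

hosts = list(mgmt_net.hosts())
plan = {
    "Default gateway": hosts[0],
    "Subnet mask": mgmt_net.netmask,
    "CNode management external IP pool": hosts[10:10 + num_cnodes],
    "DNode management external IP pool": hosts[30:30 + num_dnodes],
    "Switches External IPs": hosts[50:50 + num_switches],
    "Management VIP": hosts[100],
    "Switch VIP": hosts[101],  # one per switch pair
}
for field, value in plan.items():
    print(f"{field}: {value}")
```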
The internal network carries internal system traffic and management traffic between the DNodes and CNodes. Optionally, IPMI can reside on the internal network instead of on the management network.
The internal network uses default IPs and therefore does not necessarily require planning and configuration. The following parameters can be customized during installation if needed:
| Easy Install Parameter | Description |
|---|---|
| Data VLAN | The data VLAN isolates the internal network from the data network. The default data VLAN is 69. |
| Management inner VIP | A virtual IP on the internal network used for mounting the VMS database. By default, the management inner VIP is 172.16.4.254. |
| Subnet | By default, the internal network uses the 172.16 subnet. If you anticipate IP address collisions with the default subnet, such as in an IB configuration, you can set a custom subnet. |
| B2B IPMI and B2B template | If using the B2B networking configuration, which isolates the IPMI network from the management network, enable B2B IPMI and configure a B2B template: a subnet used to generate the IPs configured on the IPMI interfaces of the CNodes and DNodes. |
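One practical check before installation is whether the cluster's default internal addressing collides with subnets already in use at your site. The snippet below is a minimal sketch of that check, assuming the default "172.16 subnet" means 172.16.0.0/16 and using made-up site subnets; replace them with your own.

```python
import ipaddress

# Default internal subnet (customizable in Easy Install); assumed to be a /16 here.
internal_default = ipaddress.ip_network("172.16.0.0/16")

# Assumed subnets already in use at the site.
site_subnets = ["172.16.20.0/24", "10.0.0.0/8"]

for subnet in site_subnets:
    if internal_default.overlaps(ipaddress.ip_network(subnet)):
        print(f"{subnet} overlaps {internal_default}: plan a custom internal subnet.")
```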
The client data network carries data traffic between the CNodes and the clients, which are external to the cluster.
The client data network IPs are configured after installation. Plan IPs as follows:
- For fewer than four CBoxes, plan four IPs per CNode. For four or more CBoxes, plan four IPs per CNode minus four IPs, since one CNode is dedicated to VMS and carries no data traffic (see the worked example after this list).
- The IPs must be on a different subnet from the management network's subnet.
- The IPs need to be routable to the client data network.
- The IPs are configured as Virtual IP (VIP) pools. You can have multiple VIP pools, each of which can be restricted to specified CNodes; in that case, the pool must contain a minimum number of CNodes. Load balancing across CNodes is performed per VIP pool.
A VIP pool can include IPv4 and IPv6 addresses at the same time. Load balancing in this case is done separately: all IPv4 addresses are balanced across all the CNodes, and all IPv6 addresses are balanced across all the CNodes.
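As referenced in the list above, here is a minimal worked example of the IP count, assuming four CNodes per CBox as described earlier in this guide; it is a planning aid only.

```python
def data_network_ips(num_cboxes: int, cnodes_per_cbox: int = 4) -> int:
    """Number of client data network IPs to plan, per the rules above."""
    cnodes = num_cboxes * cnodes_per_cbox
    ips = 4 * cnodes
    if num_cboxes >= 4:
        ips -= 4  # one CNode is dedicated to VMS and carries no data traffic
    return ips

print(data_network_ips(2))  # 2 CBoxes ->  8 CNodes -> 32 IPs
print(data_network_ips(4))  # 4 CBoxes -> 16 CNodes -> 60 IPs
```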
For information about how to configure the client data network to enable data client access, see Configuring Network Access.
Note
In the site survey, the space provided for planning the data network IPs is called IP Addresses for Data Network.
When you configure VIP pools, there is an option to tag the VIP pools with VLANs. If you choose to use VLAN tagging, the VLANs must be activated on the switch ports that are used to link to the client network. These ports are assigned as external ports and the VLAN tagging can be done during installation.
IPv6 is supported for the client data network and management networks. A cluster can handle IPv4, IPv6, or both at the same time for any of the currently supported protocols and services (NFS, SMB, S3, AD, LDAP, NIS, DNS, SSL/TLS).
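To illustrate what balancing each address family separately means, the sketch below distributes the IPv4 and IPv6 addresses of a dual-stack VIP pool round-robin across the CNodes, independently per family. The addresses and CNode names are invented for the example; this is a conceptual illustration only, not the cluster's actual placement algorithm.

```python
from itertools import cycle

cnodes = ["cnode-1", "cnode-2", "cnode-3", "cnode-4"]
vip_pool = {
    "IPv4": ["198.51.100.11", "198.51.100.12", "198.51.100.13", "198.51.100.14"],
    "IPv6": ["2001:db8::11", "2001:db8::12", "2001:db8::13", "2001:db8::14"],
}

# Each address family is spread across all CNodes independently of the other.
for family, addresses in vip_pool.items():
    assignment = dict(zip(addresses, cycle(cnodes)))
    print(family, assignment)
```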
Each CNode and DNode has an IPMI interface, on which an IP address is configured. These IPs can either reside on the management network or they can be isolated from the management network. The two options are implemented as follows:
| IPMI Configuration | Description | Network parameters to set in Easy Install |
|---|---|---|
| Standard | The IPMI interfaces reside on the management network. | Provide a pool of IP addresses on the management network for the IPMIs. Configure them in Easy Install as the CNodes IPMI pool and the DNodes IPMI pool. |
| B2B | The IPMI interfaces are isolated from the management network. | Enable B2B IPMI and configure a B2B template: a subnet used to generate the IPs configured on the IPMI interfaces of the CNodes and DNodes. |
There are several variations in the way networking infrastructure is designed in a given deployment:
| Networking Variant | Options | Supporting Configurations |
|---|---|---|
| The transport protocol of the client data network | Ethernet or InfiniBand | Ethernet is supported by deploying Ethernet switches as the data switches in the cluster, with uplinks from the switches to the client data network. Connection to InfiniBand data networks can be supported either by deploying InfiniBand switches in the cluster and using InfiniBand infrastructure for the internal cluster connectivity as well, or by connecting the data network directly to the CNodes, using a dedicated second Network Interface Card (NIC) on each CNode and configuring those NICs in IB mode. |
| CNodes connected simultaneously to multiple separate client data networks | | One or more VIP pools must be dedicated to each CNode port in order to serve each connected network. For each VIP pool, you can set port affinity to dedicate the pool to a given port. |
| The transport protocol for the internal network | Ethernet or InfiniBand | Ethernet is the default infrastructure for the internal cluster connectivity and is supported by Ethernet data switches. CNode and DNode NICs can support connection to either Ethernet or InfiniBand data switches. |
| Separation of the internal network from the client data network | The client data network and the internal network can optionally be isolated from each other. If isolated, they can optionally run on different transport protocols. | |
| The management network topology | The management network can be out of band, on an isolated management network (the default), or it can reside on the data network. | Out of band management is usually implemented by installing a management switch in the rack, provided by VAST in the shipment. During Easy Install, you specify the management network topology. |