Follow this guide to run the Easy Install utility after racking and cabling the cluster hardware and configuring the switches.
Caution
Consult your VAST Support Engineer for assistance with the prior steps.
Important
Complete all fields according to a plan prepared for your specific installation. The input values for your installation should be recorded in the Easy Install Wizard Settings tab of your site survey. The field descriptions below are guidance to help you implement that planned configuration.
For a general understanding of the deployment options that might be planned for your installation, see VAST Cluster Deployment Overview.
-
Configure the Ethernet interface on your laptop to be on the following subnet: 192.168.2.0/24.
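As a sketch of this step on a Linux laptop (the interface name eth0 and the host address .100 are assumptions; use any address on 192.168.2.0/24 other than 192.168.2.2, which the Management CNode uses):

```shell
# Assign an address on 192.168.2.0/24 to the laptop's Ethernet interface.
sudo ip addr add 192.168.2.100/24 dev eth0
# Confirm the address was applied.
ip addr show dev eth0
# Once cabled to the technician port, verify the CNode is reachable.
ping -c 1 192.168.2.2
```

These commands require root privileges and a live connection to the technician port; on other operating systems, set the equivalent static address in the network settings.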
-
Connect your laptop to the technician port on any one of the CNodes. This CNode will become the Management CNode.
-
Run the following commands to copy the VAST Cluster package file (e.g. release-3.6.0-123456.vast.tar.gz) and the vast_bootstrap.sh script to the CNode.
scp <package file path> vastdata@192.168.2.2:/vast/bundles/
scp <bootstrap file path> vastdata@192.168.2.2:/vast/bundles/
where
<package file path> is the local path to the package file, and <bootstrap file path> is the local path to the file vast_bootstrap.sh.
Note
Make sure there is only one VAST Cluster package file located at /vast/bundles/ since vast_bootstrap.sh cannot select from multiple package files.
You'll be prompted for the password on running each command. The default password is vastdata.
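Since vast_bootstrap.sh cannot select from multiple package files, a quick check like the following sketch can confirm exactly one package is in place (the directory and filename pattern follow the examples in this guide):

```shell
# Count VAST Cluster package files in a directory.
count_packages() {
  ls "$1"/*.vast.tar.gz 2>/dev/null | wc -l
}

# On the CNode, verify there is exactly one package in /vast/bundles/.
n=$(count_packages /vast/bundles)
[ "$n" -eq 1 ] || echo "expected exactly 1 package file, found $n" >&2
```

If more than one package file is found, delete or move the extras before running the bootstrap script.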
-
Log into the Management CNode via SSH and run the vast_bootstrap.sh script:
username@host:~$ ssh vastdata@192.168.2.2
[vastdata@localhost ~]$ cd /vast/bundles
[vastdata@localhost bundles]$ chmod +x vast_bootstrap.sh
[vastdata@localhost bundles]$ ./vast_bootstrap.sh
-
Confirm the action:
Are you sure you want to reimage? this will wipe the current system [Y/n] Y
unpacking release-3.6.0-123456.vast.tar.gz, this may take a while
The script extracts the package files and runs the VAST Management System (VMS) container.
-
When the bootstrap script is complete, the following message is displayed.
bootstrap finished, please connect at https://192.168.2.2
While still connected to the tech-port, open a web browser on your laptop and browse to https://192.168.2.2.
The VAST Web UI opens and displays the VAST DATA - End User License Agreement.
-
Click I Agree.
The login page appears.
-
Log in using the default admin user and password:
-
Username: admin
-
Password: 123456
The Cluster Installation dialog appears, presenting the Included Nodes screen.
At this stage, the Easy Install utility attempts to discover the CNodes and DNodes that comprise the cluster.
Nodes are discoverable provided the switches were configured before you began running Easy Install.
This screen displays all connected nodes and all hardware errors detected on the discovered nodes. Errors may pertain to CPU, memory, disks, NVRAMs, port connectivity, or licensing issues.
Note
If nodes are not discovered, the switches in the cluster require configuration.

1. Included Nodes. Displays all discovered nodes, grouped by the CBox or DBox in which they are housed. By default, all discovered nodes are included in the installation.
2. Errors panel. Displays any detected errors.
Do the following:
-
Under Included Nodes, review the details of the discovered nodes and verify that all nodes are discovered:
1. Each CBox and DBox is listed with the final six digits of its serial number. Expand each CBox and DBox to see the CNodes and DNodes housed in it.
2. Node type:
-
CNode
-
DNode
3. IP address. Usually an IPv6 address.
4. Host name
5. OS version
6. Vendor. Possible values:
-
CNode:
-
Broadwell, single dual-port NIC
-
CascadeLake, two dual-port NICs
-
CascadeLake, single dual-port NIC
-
DNode:
-
Sanmina, single dual-port NIC. This is a Mavericks DNode with a single dual port NIC.
-
Sanmina, two single-port NICs. This is a Mavericks DNode with two single port NICs.
-
Supermicro, single dual-port NIC. This is a Mavericks DNode with a single dual port NIC.
-
BlueField, SoC PCIe Bridge. This is a CERES DBox DNode.
-
In the Errors Panel, review any errors detected during validation.
The error text identifies the affected node, enabling you to match each error to a node listed above. To find the physical position of an affected node, hover over the node to see its location in its CBox or DBox.
-
Resolve any issues before continuing with the installation. In the event that faulty hardware was received in the shipment, consult VAST Support on how to proceed.
The following options are available:
-
Remove the faulty component and either fix and reinsert it or replace it with a new one. Afterward, click Refresh Hardware to repeat host discovery and validation, then check the discovered hosts and errors again.
-
Exclude nodes. If critical validation errors cannot be resolved on site, you can identify the affected node and exclude it from the installation. In this case, report the errors to VAST Support and arrange return and replacement of the hardware. Replacement nodes can be added to the cluster after it is active.
-
When no errors remain, or when any remaining errors are determined not to be critical to the installation, click Continue to General Settings.
-
Complete the Required settings:
Important
Easy Install may prefill field values from a previous installation. Click Clear All Settings to clear all prefilled values and make sure you don't carry wrong values into the current installation.
Note
Each Restore to Defaults button sets all required values in the section where it appears to their defaults.
Cluster name
A name for the cluster.
PSNT
The cluster's PSNT. PSNT is an asset identifier that links the components of a cluster.
A single virtual IP configured on the management interfaces on all CNodes. VAST Management System (VMS) listens on this IP. This IP should be on the management subnet.
The subnet for the management network in CIDR notation.
The default gateway of the management network.
CNode network topology
This field sets the modes of each CNode interface to the required transport protocol depending on the network infrastructure topology. The options available vary with the CNode model.
All models:
-
Ethernet. Sets all interfaces to Ethernet mode. For Ethernet infrastructure on both the internal network and the data network.
-
Infiniband. Sets all interfaces to Infiniband mode. For Infiniband infrastructure on both the internal and external networks.
Available only for CascadeLake CNodes with two dual-port NICs:
-
Internal ethernet external infiniband. Sets the interfaces connected to the internal network to Ethernet mode and the interfaces connected to the data network to Infiniband mode.
-
Internal infiniband external ethernet. Sets the interfaces connected to the internal network to Infiniband mode and the interfaces connected to the data network to Ethernet mode.
DNode network topology
This field sets the modes of the DNode interfaces to work with the network infrastructure topology. Select one of the following:
Management network
This field specifies the management network topology:
-
Inband. Management is on the data network.
-
Outband. Management is on a separate network, isolated from the data network.
DNS IPs
The IP(s) of any DNS servers that will forward DNS queries to the cluster. Enter one IP address or a comma separated list of IP addresses.
For example: 172.30.100.20,172.30.100.21
Cnode management external IP pool
The IP pool from which to assign IPs for the management network to all CNodes (see VAST Cluster Deployment Overview). The pool should contain enough IPs for all CNodes in the cluster.
To add IPs:
-
Click inside the field. A Management CNode External IP Pool dialog appears in the IP pool area.
At the top of the dialog, a message appears telling you how many IPs to add.
-
Click Add new IP. Add one IP or a series of IPs separated by commas.
-
Click Add.
-
Repeat the previous two steps as needed until all IPs in the pool are entered.
-
Click Save Changes.
The IPs are added, and appear in the field as a comma separated list of IPs.
For example, for an installation with one CBox, there are four CNodes, so you need to supply four IPs that were designated for the management external IP pool in the installation plan. The recommendation "You should add exactly 4 IPs" is displayed.
Example: 173.30.200.100,173.30.200.101,173.30.200.102,173.30.200.103
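When the planned pool is a contiguous range, a small sketch like the following can build the comma-separated list for pasting into the dialog (the base address, starting host, and node count reuse the example values above; substitute the values from your installation plan):

```shell
# Build a comma-separated management IP pool for a contiguous range.
base="173.30.200"   # network portion from the installation plan
start=100           # first host number in the planned range
count=4             # one IP per CNode (four CNodes per CBox)

pool=""
for i in $(seq "$start" $((start + count - 1))); do
  pool="${pool:+$pool,}${base}.${i}"
done
echo "$pool"
```

This prints 173.30.200.100,173.30.200.101,173.30.200.102,173.30.200.103, matching the example above.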
Dnode management external IP pool
The IP pool from which to assign IPs for the management network to all DNodes. The pool should contain enough IPs for all DNodes in the cluster.
To add IPs:
-
Click inside the field. A Management External IP Pool dialog appears in the IP pool area.
At the top of the dialog, a message appears telling you how many IPs to add.
-
Click Add new IP. Add one IP or a series of IPs separated by commas.
-
Click Add.
-
Repeat the previous two steps as needed until all IPs in the pool are entered.
-
Click Save Changes.
The IPs are added, and appear in the field as a comma separated list of IPs.
For example, for an installation with one Mavericks DBox, there are two DNodes, so you need to supply two IPs that were designated for the management external IP pool in the installation plan. The recommendation "You should add exactly 2 IPs" is displayed.
Example: 173.30.200.104,173.30.200.105
-
-
In the lower section of the General Settings page, click Start with General Settings. Set any of the following that are needed for your installation:
Note
Each Restore to Defaults button sets all required values in the section where it appears to their defaults.
CNodes IPMI pool
An IP pool from which to assign an IP to the IPMI interface of each CNode.
Set this IP pool if and only if the planned deployment uses the standard IPMI network configuration.
If you are deploying the B2B IPMI networking option, do not configure this IP pool. Configure a B2B template instead (see step 4).
The CNodes will be assigned IPMI IPs in the same order as they are assigned management external IPs. The CNode that receives the first IP in the management external IP pool receives the first IP in the CNodes IPMI pool and so on.
To add IPs:
-
Click inside the field. A CNodes IPMI Pool dialog appears in the IP pool area.
-
Click Add new IP. Add one IP or a series of IPs separated by commas.
-
Click Add.
-
Repeat the previous two steps as needed until all IPs in the pool are entered.
-
Click Save Changes.
The IPs are added, and appear in the field as a comma separated list of IPs.
Example: 173.30.200.110,173.30.200.111,173.30.200.112,173.30.200.113
DNodes IPMI pool
An IP pool from which to assign an IP to each IPMI interface.
For Mavericks DBoxes, provide an IP per DNode.
For CERES DBoxes, provide an IP per DTray. This is half of the number of DNodes.
Set this IP pool if and only if the planned deployment uses the standard IPMI network configuration.
If you are deploying the B2B IPMI networking option, do not configure this IP pool. Configure a B2B template instead (see step 4).
Add IPs as described for CNodes IPMI Pool.
The DNodes will be assigned IPMI IPs in the same order as they are assigned management external IPs. The DNode that receives the first DNode IP in the management external IP pool receives the first IP in the DNodes IPMI pool and so on. (For CERES DNodes, the IPMI IP is duplicated on both DNodes in each DTray. Otherwise, the order is the same in principle.)
Example: 173.30.200.114,173.30.200.115
IPMI default gateway
The IP of a default gateway for the IPMI interfaces on the CNodes and DNodes, if different from the management network default gateway.
For example: 173.30.200.1
DNS search domains
Enter the domains on your data network on which client hosts may reside. If you provide these, you will be able to specify hosts by name instead of IP when setting up export policies, callhome settings, webhook definitions and so on. VAST Cluster will use these domains to look up host IPs on the DNS server.
Eth MTU
MTU size for CNode and DNode Ethernet interfaces. The MTU should be aligned with the switches.
Default: 9216
For installations with dual NIC CNodes, see also Eth NB MTU.
Eth NB MTU
For dual NIC CNode installations, use this field if you need to set a different MTU for NICs that are connected directly to an external Ethernet data network. If not specified, Eth MTU is used. If Eth MTU is not specified, the default value, 9216, is used.
IB MTU
MTU size for CNode and DNode Infiniband interfaces. Default: 2044
For installations with dual NIC CNodes, see also IB NB MTU.
IB NB MTU
For dual NIC CNode installations, use this if you need to set a specific MTU for NICs that are connected directly to an external Infiniband data network. If not specified, IB MTU is used. If IB MTU is not specified, the default value, 2044, is used.
Take care to set a supported MTU for the NIC mode.
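Since the MTU must be aligned with the switches, it can be useful to confirm an interface's current MTU. A minimal sketch, reading the value from sysfs on Linux (interface names vary per platform; the loopback device lo is queried here only so the snippet runs anywhere):

```shell
# Read an interface's current MTU from sysfs.
get_mtu() {
  cat "/sys/class/net/$1/mtu"
}

# On a node you would query the data or internal interfaces instead of lo.
get_mtu lo
```

Compare the reported value against the planned Eth MTU or IB MTU before proceeding.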
NTP server
The IP(s) of any NTP server(s) that you want to use for time keeping. Enter as a comma separated list of IPs.
For example: 172.30.100.10
Customer IP
An IP on the client data network. This IP is used to test connectivity.
Management inner VIP
A virtual IP on the internal network that is used for mounting the VMS database.
Default: 172.16.4.254
B2B template
B2B is a networking configuration option that isolates the IPMI network from the management network. A B2B IP is generated per node as 192.168.3.x, where x is a node index. Optionally, you can set a different B2B template. For example, if you set the B2B template to be 10.250.100 then the B2B IPs will be 10.250.100.x.
Default: 192.168.3
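As a sketch of how the template expands (the per-node indexing shown is an assumption based on the description above; the actual index assignment is performed by the installer):

```shell
# Expand a B2B template into per-node IPs: template + "." + node index.
template="10.250.100"

ips=""
for i in 1 2 3 4; do
  ips="${ips:+$ips,}${template}.${i}"
done
echo "$ips"
```

With the default template 192.168.3, the same expansion yields 192.168.3.1, 192.168.3.2, and so on.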
Selected nic
In a cluster that has dual NIC CNodes of which only a single NIC is connected, use this option to specify which NIC to skip when automatically configuring the networking modes of the NIC interfaces. The installer must skip the unused NIC. The options are:
-
Internal. If the NIC to the right of the CNode panel (used for internal connectivity in the default scheme) is not connected.
Note
Not available for Ice Lake CBoxes.
-
External. If the NIC to the left of the CNode panel (used for external connectivity in the default scheme) is not connected.
Note
This is the only available and valid option for an unconnected NIC on Ice Lake CNodes. In the case of Ice Lake models, when facing the rear panel, the NIC that can be left unconnected is the left NIC on the two right CNodes, but it's the right NIC on the two left CNodes.
-
None (default). Leave this option selected if both NICs are connected.
Selected IB type
The mode of the Infiniband interfaces:
-
Connected (default)
-
Datagram
Set this to match the Infiniband type of the internal VAST network, if applicable.
Selected NB IB type
For dual NIC CNodes where each CNode has a NIC connected to an external Infiniband network, select the Infiniband type of the external network.
License
Enter the license key for the cluster.
If no license key is entered, a temporary 30-day license is installed.
Reverse nics
Note
This setting is not applicable for Ice Lake models of CBox.
This setting applies to dual NIC CNodes. Enable this setting if the network connectivity scheme for the NICs needs to be reversed from the default.
In the default scheme, the left NIC is dedicated to the external network. The two QSFP28 ports on the left NIC are connected to the client data network switches. The right NIC is dedicated to the internal network and its ports are connected to the cluster switches. If your installation plan follows this default connectivity scheme, do not enable Reverse nics.
Enable Reverse nics only if this scheme is reversed according to your installation plan. In the reverse scheme, the left NIC QSFP28 ports on each CNode connect to the cluster switches while the right NIC ports connect to the client network switches (external to the cluster).
Encryption
Enables encryption of data at rest on the cluster.
VAST Cluster encryption of data at rest is implemented using the VAST Data FIPS Object Module for OpenSSL, which is certified compliant with the requirements of FIPS 140-2. The NIST validation for the module can be found at https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4107.
Similarity
Enables similarity data reduction on the cluster. This can also be enabled or disabled after installation.
DBox HA
Enables NVRAM high availability (HA) for DBoxes.
Support for DBox NVRAM HA is limited. It is recommended that you read the DBox NVRAM HA documentation before enabling this feature. The feature can also be enabled after installation, although enabling it then causes a drive layout rewrite.
B2B IPMI
Enables auto configuring the IPMI ports on the nodes with IP addresses according to the B2B template.
-
-
Select Customized Network Settings and set any of the following settings that are needed for the installation:
Note
Each Restore to Defaults button sets all required values in the section where it appears to their defaults.
-
Select Call Home and complete these settings to enable VAST Link, the remote call home service that enables VAST’s Support team to monitor and troubleshoot clusters.
VAST Link sends non-sensitive data securely from your VAST Cluster to our central support server to enable us to provide proactive analysis and fast response on critical issues. The data we collect is sent by HTTPS to a VAST Data AWS S3 bucket that we maintain for this purpose.
Note
Each Restore to Defaults button sets all required values in the section where it appears to their defaults.
-
If needed, select Advanced settings from the same dropdown and set additional parameters.
Caution
Do not change Advanced settings unless guided to do so by VAST Support.
-
Click Review Installation Before Executing.
The Summary page appears.
-
Review the summary of all the configurations you set. Check that your settings match the installation plan.
-
If any settings are incorrect, use the To Step links from the configuration summaries to return to earlier steps, make the necessary changes, and proceed back to the Summary page.
-
When you're ready to proceed, click Install.
-
Select Activities from the left navigation menu to navigate to the Activities page and monitor the task progress.
The task name is cluster_deploy.
When installation is done, the cluster_deploy task state changes to COMPLETED and the cluster status displayed at the top left of the page changes to Online.
You can now disconnect from the technician port. The cluster's VAST Web UI is now accessible by browsing to the configured VMS VIP from network locations that have network access to the VMS VIP.
To begin managing the cluster, browse to the VMS VIP and log in using the default user name admin and password 123456.