Ensure that your environment meets the following requirements:
- A Kubernetes cluster is up and running.
- A VAST cluster is up and running.
- All nodes of the Kubernetes cluster are networked with the VAST cluster and can mount it via NFS.
- At least one node can communicate with the VAST Cluster management VIP.
- A host is available with a Helm client installed, preferably in the VMS network.
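To spot-check NFS connectivity from a Kubernetes node before deploying, you can try a manual mount. This is only a quick sanity check; <vip> and <path> are placeholders for a virtual IP on your VAST cluster and the path of an existing view exported over NFS:

mkdir -p /mnt/vast-test
mount -t nfs <vip>:<path> /mnt/vast-test
umount /mnt/vast-test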
VAST CSI Driver can be used with the following Kubernetes and VAST Cluster versions:
| VAST CSI Driver | Kubernetes | VAST Cluster |
| --- | --- | --- |
| 2.2.1 | 1.22 - 1.26 | 4.1 or later |
| 2.2.0 | 1.22 - 1.25 | 4.1 or later |
| 2.1.1 | 1.22 - 1.25 | 4.1 or later |
| 2.0.4 | 1.22 - 1.25 | 4.1 or later |
Helm v3 is required to deploy VAST CSI Driver.
Before you begin, ensure that your environment meets the requirements.
VAST CSI Driver 2.2.x is deployed using a Helm chart. A Helm chart is an installation template that can be reused to install multiple instances of the software being deployed. Each instance is referred to as a release. Helm charts are available from Helm repositories.
Steps to implement dynamic provisioning with VAST CSI Driver include:
1. (Optional) Install CustomResourceDefinitions for the Kubernetes API.
   Note
   This step is required if you are going to use VAST snapshots on the Kubernetes cluster. Otherwise, skip to step 2.
2. (Optional) Create a Kubernetes namespace for VAST CSI Driver.
   Note
   This step is required if you are going to deploy VAST CSI Driver in a Kubernetes namespace other than default. Otherwise, skip to step 3.
3. Configure the VAST cluster.
4. Add the Helm repository that contains the VAST CSI Driver chart.
5. Create a VAST CSI Driver chart configuration file.
6. Install the VAST CSI Driver chart.
7. Grant proper privileges to VAST CSI Driver.
8. (Optional) Verify the deployment by launching a test application.
Complete this step only if you are going to use VAST snapshots on your Kubernetes cluster; the CustomResourceDefinitions installed in this step are required solely for VAST snapshot support.
Run the following commands to install the CustomResourceDefinitions for the Kubernetes API:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
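To confirm that the resources were created, you can check for the snapshot CRDs and the snapshot controller afterwards. This is a quick sanity check; in the v6.0.1 manifests the controller is typically deployed to the kube-system namespace:

kubectl get crd | grep snapshot.storage.k8s.io
kubectl get deployment snapshot-controller -n kube-system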
By default, VAST CSI Driver is deployed to the default namespace on the Kubernetes cluster.
If you want to use a different Kubernetes namespace for VAST CSI Driver, create it prior to deployment by running the following command:
kubectl create ns <namespace_name>
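For example, to create the vast-csi namespace used in the examples later in this article:

kubectl create ns vast-csi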
Complete the following steps to make your VAST cluster ready for integration with Kubernetes:
- Set up VIP pools to be used by VAST CSI Driver.
- (Optional) Upload your CA-signed SSL certificate to the VAST cluster.
- Set up a VMS user to be used by VAST CSI Driver.
- Configure a view policy to be used for views created by VAST CSI Driver, as well as a view policy to be used when deleting volumes.
VAST CSI Driver distributes the load among virtual IPs in one or more VAST virtual IP (VIP) pools.
You need:
- A VIP pool for each storage class configured for VAST CSI Driver, and
- A deletion VIP pool that VAST CSI Driver uses when deleting volumes. The deletion VIP pool can match one of the VIP pools specified for a storage class, or it can be a different VIP pool.
You can choose existing VIP pools for use with VAST CSI Driver, or you can create new ones.
To view existing VIP pools via VAST Web UI, log in and choose Network Access -> Virtual IP Pools. If you want to create a new VIP pool, click + Create VIP Pool and complete the fields in the dialog that appears.
Note
For more information about VAST Cluster VIP pools, see Configuring Network Access.
If you want to use a Certificate Authority-signed SSL certificate to secure the connection to the VAST cluster, follow this procedure to upload your SSL certificate to the VAST cluster.
Note
For more information about SSL support, see A Deeper Dive into VAST CSI.
You need to supply a VAST Management Service (VMS) user name and password when creating the YAML configuration file. The credentials will be recorded as Kubernetes secrets.
VAST CSI Driver can use the default admin user that is created by VAST during the initial install, or you can create a new user. The user doesn't need to have full VMS permissions. The required permissions are Create, View, and Edit in the Logical and Security realms.
To create a new user for VAST CSI Driver via VAST Web UI:
1. In the left navigation menu, choose Administrators -> Roles.
2. On the Roles page, click the + Create Roles button.
3. In the Add Role dialog that opens, enter a name for the new role and set Create, View, and Edit permissions in the Logical and Security realms.
4. Click Create to create the new role. The new role will be listed on the Roles page.
5. Go to the Managers tab and click the + Add Managers button.
6. In the Add Manager dialog that opens, complete the fields to create a manager user with the newly created role:
   - Enter a username and set a password.
   - Select the newly created role in the Available Roles field so that the user inherits its permissions from the role.
   - Verify that the user is assigned Create, View, and Edit permissions in the Logical and Security realms.
   When finished, click Create. The newly created user will be listed on the Managers page.
VAST CSI Driver automatically creates a VAST Cluster view for each volume being provisioned. You'll have to specify a view policy to be assigned to these automatically created views. A view policy is specified per Kubernetes storage class configured for VAST CSI Driver.
You need:
- A view policy to be assigned to these automatically created views, and
- A deletion view policy that VAST CSI Driver uses when deleting volumes. The deletion view policy can match the policy assigned to automatically created views, or it can be a different view policy.
To view existing view policies via VAST Web UI, log in and choose Element Store -> View Policies. If you want to create a new view policy, click + Create Policy and complete the fields in the dialog that appears. For more information about VAST Cluster view policies, see Managing Views and View Policies.
Ensure that all view policies intended for use with VAST CSI Driver meet the following requirements:
- All view policies have the same security flavor.
  To check the security flavor assigned to a view policy, display the view policy in the VAST Web UI (Element Store -> View Policies -> choose View or Edit in the Actions menu) and find the Security Flavor field on the General tab.
- None of the view policies has the Root Squash setting enabled.
  To check the Root Squash setting in a view policy, display the view policy in the VAST Web UI (Element Store -> View Policies -> choose View or Edit in the Actions menu) and go to the Host-Based Access tab. Under NFS, verify that there are no IPs specified for the Root Squash option.
Add the Helm repository that contains the VAST CSI Driver chart to the list of available repositories:
1. Add the Helm repository for VAST CSI Driver:

   helm repo add <repo> https://vast-data.github.io/vast-csi

   Specify any suitable name for <repo>. This name will be used to refer to the VAST CSI Driver repository when running Helm commands, for example: vastcsi.
2. Verify that the VAST CSI Driver repository has been added:

   helm repo list

   The output is similar to the following:

   NAME      URL
   vastcsi   https://vast-data.github.io/vast-csi
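Optionally, you can also list the chart versions available in the repository, using the repository name chosen above:

helm search repo vastcsi --versions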
In Helm-based deployments, a chart configuration file lets you override default installation settings provided in the chart with parameters that are specific to your environment.
The configuration file is a YAML file typically named values.yaml, although you can use any arbitrary name for it.
1. Create a YAML file as follows:

   username: "<username>"
   password: "<password>"
   endpoint: "<endpoint>"
   deletionVipPool: "<deletion VIP pool>"
   deletionViewPolicy: "<deletion view policy>"
   verifySsl: true|false
   storageClasses:
     <storage class name 1>:
       <option 1>
       <option 2>
       ...
       <option n>
     <storage class name 2>:
       <option 1>
       <option 2>
       ...
       <option n>
     ...
     <storage class name n>:
       <option 1>
       <option 2>
       ...
       <option n>
Where:
- <username>: Specify the user name that VAST CSI Driver will use to connect to the VAST cluster's VMS. For more information, see Configuring VAST Cluster.
- <password>: Enter the VAST Cluster user password.
- <endpoint>: Enter the VAST Cluster management hostname.
- <deletion VIP pool>: Specify the name of the VAST Cluster VIP pool to be used when deleting volumes. It can match a VIP pool specified in the vipPool property of a storage class, or you can specify a different VIP pool. For more information, see Configuring VAST Cluster.
- <deletion view policy>: Specify the name of the VAST Cluster view policy to be used when deleting volumes. It can match a view policy specified in the viewPolicy property of a storage class, or you can specify a different view policy. For more information, see Configuring VAST Cluster.
- verifySsl: true|false: Specify true to enable SSL encryption for the connection to the VAST cluster. If set to false, SSL encryption is disabled.
- <storage class name>: Provide a name to identify the storage class. For more information about Kubernetes storage classes, see Using Multiple Storage Classes.
  Note
  Define at least one storage class.
- <option 1> ... <option n>: Specify parameters to be used when provisioning storage for PVCs with this storage class.

For each storage class, the following options can be specified:
storageClasses:
  <storage class name>:
    vipPool: "<VIP pool name>"
    storagePath: "<path>"
    viewPolicy: "<policy name>"
    volumeNameFormat: "<format>"
    ephemeralVolumeNameFormat: "<format>"
    mountOptions: "<options>"
Where:
- (Required) vipPool: "<VIP pool name>". The name of the virtual IP pool to be used by VAST CSI Driver. For more information, see Configuring VAST Cluster.
- (Required) storagePath: "<path>". The storage path within VAST Cluster to be used when dynamically provisioning Kubernetes volumes. VAST CSI Driver will automatically create a VAST Cluster view for each volume being provisioned.
  Caution
  Do not use the file system root path '/' as the <path>.
- (Required) viewPolicy: "<policy name>". The name of the VAST Cluster view policy. The view policy cannot have a root-squash setting enabled.
  A view policy defines access settings for storage exposed through a VAST Cluster view. For more information about VAST Cluster view policies, see Configuring VAST Cluster.
- (Optional) volumeNameFormat: "<format>". A format string that controls naming of volumes created through VAST CSI Driver. If not specified, the default format csi:{namespace}:{name}:{id} is used.
- (Optional) ephemeralVolumeNameFormat: "<format>". A format string that controls naming of Kubernetes ephemeral volumes created through VAST CSI Driver. If not specified, the default format csi:{namespace}:{name}:{id} is used.
- (Optional) mountOptions: "<options>". Specify custom NFS mount options.
  Notice
  NFSv4.1 is supported starting with VAST CSI Driver 2.2.1.
For example:
username: "admin"
password: "123456"
endpoint: "my.endpoint"
deletionVipPool: "vippool-1"
deletionViewPolicy: "default"
verifySsl: true
storageClasses:
  vastdata-filesystem:
    vipPool: "vippool-1"
    storagePath: "/k8s"
    viewPolicy: "default"
    volumeNameFormat: "csi:{namespace}:{name}:{id}"
    ephemeralVolumeNameFormat: "csi:{namespace}:{name}:{id}"
    mountOptions:
      - "proto=rdma"
      - "port=20049"
  vastdata-filesystem2:
    vipPool: "vippool-2"
    storagePath: "/test1"
    viewPolicy: "custom-policy"
  my-custom-storage-class:
    vipPool: "vippool-3"
    storagePath: "/test2"
    viewPolicy: "custom-policy2"
2. Verify the newly created chart configuration file:

   helm template <release name> <repo>/vastcsi -f <filename>.yaml -n <namespace>

   Where:
   - <release name> identifies the release being deployed.
   - <repo> is the name of the VAST CSI Driver Helm repository.
   - vastcsi is the name of the VAST CSI Driver Helm chart.
   - <filename>.yaml is the VAST CSI Driver chart configuration file.
   - <namespace> determines the Kubernetes namespace to which the release is deployed. If this parameter is not specified, the default namespace is used. If you specify a different namespace, create it prior to installing the VAST CSI Driver chart.

   For example:

   helm template csi-driver vastcsi/vastcsi -f values.yaml -n vast-csi

   The output is similar to the following:

   ---
   # Source: vastcsi/templates/serviceaccount.yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: csi-vast-controller-sa
     namespace: "default"
     labels:
       helm.sh/chart: vastcsi-0.1.0
       app.kubernetes.io/name: vastcsi
       app.kubernetes.io/instance: csi-driver
       app.kubernetes.io/version: "2.2.1"
       app.kubernetes.io/managed-by: Helm
   ---
   # Source: vastcsi/templates/serviceaccount.yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
   <...>
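As an optional extra check, you can validate the rendered manifests against the Kubernetes API without creating anything, using a client-side dry run:

helm template csi-driver vastcsi/vastcsi -f values.yaml -n vast-csi | kubectl apply --dry-run=client -f -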
Caution
Remove VAST Cluster credentials (username and password) from the chart configuration file right after the deployment is complete.
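Because the credentials are recorded as Kubernetes secrets during deployment (see Configuring VAST Cluster), removing them from the configuration file does not affect the running release. To review the secrets in the release namespace (secret names vary by chart version, so this is only a general check):

kubectl get secrets -n <namespace>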
Installing a Helm chart results in deployment of a VAST CSI Driver release in your Kubernetes environment. You can install a chart multiple times to run multiple releases simultaneously. Each release is identified with its release name, which you supply during the install.
To install the VAST CSI Driver chart:
1. Refresh Helm repository information:

   helm repo update

2. Run the following command:

   helm install <release name> <repo>/vastcsi -f <filename>.yaml -n <namespace> [--set-file sslCert=VastCerts/RootCA.crt]

   Where:
   - <release name> identifies the release being deployed.
   - <repo> is the name of the VAST CSI Driver Helm repository.
   - vastcsi is the name of the VAST CSI Driver Helm chart.
   - <filename>.yaml is the VAST CSI Driver chart configuration file.
   - <namespace> determines the Kubernetes namespace to which the release is deployed. If this parameter is not specified, the default namespace is used. If you specify a different namespace, create it prior to installing the VAST CSI Driver chart.
   - (Optional) --set-file sslCert=VastCerts/RootCA.crt specifies the path to a self-signed SSL certificate to secure the connection to the VAST cluster.

   For example:

   helm install csi-driver vastcsi/vastcsi -f values.yaml -n vast-csi

   The output is similar to the following:

   NAME: csi-driver
   LAST DEPLOYED: Sun Apr 4 01:16:37 2023
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   NOTES:
   Thank you for installing vastcsi.
   Your release is named csi-driver.
   The release is installed in namespace default
   To learn more about the release, try:
   $ helm status -n default csi-driver
   $ helm get all -n default csi-driver
   <...>
3. Verify that the VAST CSI Driver chart has been installed as follows:
   - Check the release status with the following command:

     helm status -n <namespace> <release name>

     For example:

     helm status -n default csi-driver

   - Ensure that the release appears in the list of releases:

     helm list -n <namespace>

     For example:

     helm list -n default

     The output is similar to the following:

     NAME        NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
     csi-driver  default    1         2023-04-04 01:16:37.789235166 +0000 UTC  deployed  vastcsi-0.1.0  2.2.1
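In addition to Helm's own status, you can confirm that the driver workloads are running. Pod names vary with the release name; typically there is a controller pod plus one node pod per Kubernetes node:

kubectl get pods -n <namespace>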
VAST CSI Driver requires root (sysadmin) privileges so that it can create mounts on the host machine.
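How you grant these privileges depends on the security controls in your cluster. For example, on clusters that enforce the Kubernetes Pod Security Standards, one possible approach (a sketch, not chart-specific guidance) is to allow privileged pods in the driver's namespace:

kubectl label namespace <namespace> pod-security.kubernetes.io/enforce=privileged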
To verify your VAST CSI Driver deployment, test it by launching an application using a Persistent Volume Claim (PVC).
In the following example, VAST CSI Driver will create a PVC by provisioning a 1Gi volume from VAST using the vastdata-filesystem storage class. A StatefulSet will then run a set of five pods with that volume mounted in each container. The application is a shell loop that appends a timestamped line to a text file.
To verify the deployment with a test application:
1. Create a Kubernetes YAML configuration file with the following content:

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: vast-pvc-1
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
     storageClassName: vastdata-filesystem
   ---
   kind: StatefulSet
   apiVersion: apps/v1
   metadata:
     name: test-app-1
   spec:
     serviceName: "test-service-1"
     replicas: 5
     selector:
       matchLabels:
         app: test-app-1
     template:
       metadata:
         labels:
           app: test-app-1
           role: test-app-2
       spec:
         containers:
           - name: my-frontend
             image: busybox
             volumeMounts:
               - mountPath: "/shared"
                 name: my-shared-volume
             command: [ '/bin/sh', '-c', 'while true; do date -Iseconds >> /shared/$HOSTNAME; sleep 1; done' ]
         volumes:
           - name: my-shared-volume
             persistentVolumeClaim:
               claimName: vast-pvc-1

2. Apply the configuration:

   kubectl apply -f <filename>.yaml

3. Monitor the VAST Cluster path specified in the definition of the vastdata-filesystem storage class. A view is created for the provisioned volume, and each of the five pods writes a file named after its hostname, appending a timestamp to it every second.
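You can also confirm the test application from the Kubernetes side. The resource names below come from the manifests above; StatefulSet pods are named test-app-1-0 through test-app-1-4:

kubectl get pvc vast-pvc-1
kubectl get pods -l app=test-app-1
kubectl exec test-app-1-0 -- tail -n 3 /shared/test-app-1-0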