Ensure that your environment meets the following requirements:
- A Kubernetes cluster is up and running.
- A VAST cluster is up and running.
- All nodes of the Kubernetes cluster are networked with the VAST cluster and can mount it via NFS.
- At least one node can communicate with the VAST Cluster management VIP.
- A host is available with Docker installed, preferably in the VMS network.
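Optionally, you can sanity-check the networking requirements from the Kubernetes nodes before you start. In the sketch below, 10.0.0.5 stands in for your VMS management VIP and 172.16.1.10 for a virtual IP from a VAST VIP pool; both are placeholder addresses:
ping -c 3 10.0.0.5          # management VIP reachable from at least one node
showmount -e 172.16.1.10    # NFS exports visible from every Kubernetes node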
Before you begin, ensure that your environment meets the requirements.
The steps to implement dynamic provisioning with VAST CSI Driver are:
1. (Optional) Install CustomResourceDefinitions for the Kubernetes API.
Note
This step is required only if you are going to use VAST snapshots on the Kubernetes cluster. Otherwise, skip to step 2.
2. Configure the VAST cluster.
3. Download a Docker image for VAST CSI Driver.
4. Generate a YAML configuration file for the deployment.
5. Deploy the YAML configuration file.
6. (Optional) Verify the deployment by launching a test application.
Complete this step only if you are going to use VAST snapshots on your Kubernetes cluster; the CustomResourceDefinitions it installs are required solely for snapshot support.
Run the following commands to install the CustomResourceDefinitions for the Kubernetes API:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.0.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
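Optionally, you can confirm that the CRDs and the snapshot controller are in place. The namespace and label below assume the upstream v6.0.1 manifests, which deploy the controller to kube-system:
kubectl get crd | grep snapshot.storage.k8s.io               # expect the three VolumeSnapshot CRDs
kubectl get pods -n kube-system -l app=snapshot-controller   # expect the controller pod Running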
Complete the following steps to make your VAST cluster ready for integration with Kubernetes:
- Create a VAST Cluster view to be used for volume provisioning.
- Set up a VIP pool to be used by VAST CSI Driver.
- Set up a VMS user to be used by VAST CSI Driver.
VAST CSI Driver creates volume directories at a path within VAST Cluster; for example, /a/b/c. This path is specified in the root_export parameter in the StorageClass YAML configuration. The storage path must be mountable by VAST CSI Driver, which means that at least one VAST Cluster view must be defined along the path; for example, at /a, /a/b, or /a/b/c.
Although any path within VAST Cluster except '/' can be used by VAST CSI Driver to create volume directories, it is recommended to create a dedicated view for dynamically provisioned volumes.
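For orientation, the following is a sketch of the kind of StorageClass stanza the deployment generator produces. Only root_export (named above) and the class name vastdata-filesystem (used by the test application later in this article) come from this article; the provisioner value is an assumption to confirm against your generated vast-csi-deployment.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vastdata-filesystem
provisioner: csi.vastdata.com   # assumed driver name; confirm in the generated file
parameters:
  root_export: /k8s             # volume directories are created under this path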
To set up a VAST Cluster view via VAST Web UI:
1. In the left navigation menu, choose Element Store and then Views.
2. In the Views page that opens, click the + Create View button.
3. In the Add View dialog that opens, complete the fields as follows:
   - Path: Specify the full path to the location exposed by the view; for example, /k8s.
   - Protocols: Choose the NFS protocol from the list.
   - NFS alias: Specify the alias for the NFS export; for example, /k8s.
   - Policy name: Choose a suitable view policy from the list. The view policy must not have a root-squash setting enabled.
   - Create Directory: Enable this setting.
4. Click Create to create the view.
Note
For more information about VAST Cluster views, see Managing Views and View Policies.
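To confirm that the new view is mountable before continuing, you can mount it manually from any Kubernetes node. Here, 172.16.1.10 is a placeholder for a virtual IP from your VIP pool, and /k8s matches the example path above:
mkdir -p /mnt/vast-test
mount -t nfs 172.16.1.10:/k8s /mnt/vast-test
touch /mnt/vast-test/check && rm /mnt/vast-test/check   # a successful write confirms root-squash is not blocking the client
umount /mnt/vast-test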
VAST CSI Driver distributes the load among virtual IPs in a VAST virtual IP (VIP) pool. You can choose an existing VIP pool to be used by VAST CSI Driver or create a new one.
To view existing VIP pools via VAST Web UI, log in and choose Network Access -> Virtual IP Pools. If you want to create a new VIP pool, click + Create VIP Pool and complete the fields in the dialog that appears.
Note
For more information about VAST Cluster VIP pools, see Configuring Network Access.
You'll have to supply a VAST Management Service (VMS) user name and password when creating the YAML configuration file. The credentials will be recorded as Kubernetes secrets.
VAST CSI Driver can use the default admin user that is created by VAST during the initial install, or you can create a new user. The user doesn't need full VMS permissions; it only requires Create, View and Edit permissions in the Logical and Security realms.
To create a new user for VAST CSI Driver via VAST Web UI:
1. In the left navigation menu, choose Administrators -> Roles.
2. In the Roles page, click the + Create Roles button.
3. In the Add Role dialog that opens, enter a name for the new role and set Create, View and Edit permissions in the Logical and Security realms.
4. Click Create to create the new role. The new role will be listed in the Roles page.
5. Go to the Managers tab and click the + Add Managers button.
6. In the Add Manager dialog that opens, complete the fields to create a manager user with the newly created role:
   - Enter a username and set a password.
   - Select the newly created role in the Available Roles field so that the user inherits the role's permissions.
   - Verify that the user is assigned Create, View and Edit permissions in the Logical and Security realms.
7. When finished, click Create. The newly created user will be listed in the Managers page.
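Optionally, you can sanity-check the new credentials against the VMS REST API before generating the deployment file. This sketch assumes the VMS accepts basic authentication over HTTPS and that a vippools endpoint exists at the path shown; verify the exact path against your VMS API documentation. vms.example.com and csi-user are placeholders:
curl -k -u csi-user https://vms.example.com/api/vippools/   # prompts for the password; expect a JSON list of VIP pools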
VAST CSI Driver is distributed as a Docker image. The image contains VAST CSI code and code for generating Kubernetes deployment YAML.
To download VAST CSI Driver from https://hub.docker.com/r/vastdataorg/csi, run:
docker pull vastdataorg/csi:v2.1.1
Once the Docker image is downloaded, you can export it as a .tar file, load it on a host with access to a local Docker registry, and push it there (for clusters that are not connected to the Internet):
docker save vastdataorg/csi:v2.1.1 -o csi-vast.tar
docker load -i csi-vast.tar
docker tag vastdataorg/csi:v2.1.1 <your-registry>/vast-csi:v2.1.1
docker push <your-registry>/vast-csi:v2.1.1
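To confirm that the Kubernetes nodes can fetch the image from the local registry, you can pull it manually on a worker node (assuming Docker is the container runtime there):
docker pull <your-registry>/vast-csi:v2.1.1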
A Kubernetes deployment YAML configuration file provides a definition of Kubernetes objects to be created on the Kubernetes cluster.
To generate the YAML configuration file for VAST CSI Driver:
1. Run the following Docker command to generate the YAML configuration file:
docker run -it --net=host -v `pwd`:/out vastdataorg/csi:v2.1.1 template
Where:
- `pwd`:/out mounts the current directory into the container, so the template is generated in the current directory.
- vastdataorg/csi:v2.1.1 is the name of the Docker image for VAST CSI Driver. Kubernetes uses this name to distribute the image within the Kubernetes cluster.
Note
If you use a local Docker registry, change the name as appropriate; for example: <your_registry>/vast-csi:v2.1.1. Note that the YAML generation code defaults to vast-csi. If you are going to pull from the public Docker registry, specify vastdataorg/csi:v2.1.1.
2. Respond to the prompts as follows:
- Name of this Docker Image: Specify the Docker image name, which can reference a remote registry for Kubernetes to pull the image from, or a local registry where the image exists.
- Image Pull Policy: Specify the Docker image pull policy.
- Vast Management hostname: Specify the DNS name you use to connect to the VAST cluster's VMS in your web browser.
- Vast Management username: Specify the user name that VAST CSI Driver will use to connect to the VAST cluster's VMS. For more information, see Setting up a VMS User.
- Vast Management password: Enter the VAST Cluster user password.
- Virtual IP Pool Name: The name of the virtual IP pool to be used by VAST CSI Driver. For more information, see Setting up a VIP Pool.
- NFS Export Path: The storage path within VAST Cluster to be used when dynamically provisioning Kubernetes volumes. For more information, see Creating a VAST Cluster View.
Caution
Do not specify '/' as the NFS Export Path.
- Additional Mount Options: Specify additional mount options, if needed. You can specify any custom NFS mount options; for example, proto=rdma,port=20049.
For example:
docker run -it --net=host -v `pwd`:/out vastdataorg/csi:v2.1.1 template
Unable to find image 'vastdataorg/csi:v2.1.1' locally
Trying to pull repository docker.io/vastdataorg/csi ...
v2.1.1: Pulling from docker.io/vastdataorg/csi
Digest: sha256:5dc0a5df9ffc2bc5b7cdfb4f59c47105457b8e6fba74a04045df1e6d5a94c207
Status: Downloaded newer image for docker.io/vastdataorg/csi:v2.1.1
Vast CSI Plugin - Deployment generator for Kubernetes
Name of this Docker Image: vastdataorg/csi:v2.1.1
Image Pull Policy (Never|Always|IfNotPresent|Auto): IfNotPresent
Vast Management hostname: selab202.vastdata.com
Vast Management username: admin
Vast Management password: ******
Connected successfully!
VMS Version: <version>
- VIP Pools: main
- Exports: /, /nfsshare
Virtual IP Pool Name: main
NFS Export Path: /k8s
Additional Mount Options:
Written to vast-csi-deployment.yaml
Inspect the file and then run:
>> kubectl apply -f vast-csi-deployment.yaml
Be sure to delete the file when done, as it contains Vast Management credentials
The YAML configuration file is generated. You can proceed to deploy it in the Kubernetes cluster.
Caution
The generated YAML configuration file contains VAST Cluster credentials. Delete this file after the deployment is complete. If necessary, you can regenerate it at any time.
When you deploy a Kubernetes YAML configuration file, you create Kubernetes objects in the Kubernetes cluster.
After you have generated a YAML configuration file, review it and edit it if needed.
To deploy the generated YAML configuration file:
1. Run the following command:
kubectl apply -f vast-csi-deployment.yaml
2. When the command completes, run the following command and verify that one csi-vast-controller-0 pod plus one csi-vast-node-xxxxx pod per worker node are running. If they are shown as Running, you have deployed VAST CSI Driver successfully.
kubectl get pods --all-namespaces -o wide
The output is similar to the following:
NAMESPACE   NAME                    READY   STATUS    <...>
<...>
vast-csi    csi-vast-controller-0   3/3     Running   <...>
vast-csi    csi-vast-node-45mwc     2/2     Running   <...>
vast-csi    csi-vast-node-jhxm7     2/2     Running   <...>
vast-csi    csi-vast-node-kgq4b     2/2     Running   <...>
<...>
3. Delete the vast-csi-deployment.yaml file.
The generated YAML configuration file contains VAST Cluster credentials. Delete this file after the deployment procedure is complete. If necessary, you can regenerate it at any time.
To verify your VAST CSI Driver deployment, test it by launching an application using a Persistent Volume Claim (PVC).
In the following example, you create a PVC for which VAST CSI Driver provisions a 1Gi volume from VAST, along with a set of 5 pods running an application consisting of Docker containers with mounts to that volume. The application is a shell program that appends the date to a text file.
To verify the deployment with a test application:
1. Create a YAML configuration file with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vast-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: vastdata-filesystem
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: test-app-1
spec:
  serviceName: "test-service-1"
  replicas: 5
  selector:
    matchLabels:
      app: test-app-1
  template:
    metadata:
      labels:
        app: test-app-1
        role: test-app-2
    spec:
      containers:
        - name: my-frontend
          image: busybox
          volumeMounts:
            - mountPath: "/shared"
              name: my-shared-volume
          command: [ '/bin/sh', '-c', 'while true; do date -Iseconds >> /shared/$HOSTNAME; sleep 1; done' ]
      volumes:
        - name: my-shared-volume
          persistentVolumeClaim:
            claimName: vast-pvc-1
2. Apply the configuration:
kubectl apply -f <filename>.yaml
3. Monitor the VAST Cluster path specified in the root_export parameter in the YAML configuration file. You will see a volume directory for the PVC. Inside it, each of the 5 pods appends a timestamped line to its own text file, named after the pod's hostname.
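You can also verify from the Kubernetes side that the claim was bound and that data is being written. This sketch uses the names from the example above; StatefulSet pods are numbered, so the first replica is test-app-1-0:
kubectl get pvc vast-pvc-1                                 # STATUS should be Bound
kubectl get pods -l app=test-app-1                         # all 5 replicas should be Running
kubectl exec test-app-1-0 -- ls /shared                    # one file per pod hostname
kubectl exec test-app-1-0 -- tail /shared/test-app-1-0     # timestamps appended every second
kubectl delete -f <filename>.yaml                          # clean up when done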