VAST CSI Driver allows container orchestration frameworks such as Kubernetes to dynamically provision storage volumes on a VAST cluster. VAST provides scalable, reliable, and fast persistent storage that can be accessed remotely by any Kubernetes application container.
With dynamic provisioning, you do not need to create container-specific persistent volumes. Instead, you deploy the following elements that allow for dynamic volume provisioning:
- A storage class, which is a YAML class definition that specifies a dynamic provisioner and volume-related options.
- A dynamic provisioner.
When an application makes a Persistent Volume Claim (PVC), the Kubernetes framework employs the dynamic provisioner to create a persistent volume on demand. Later, when a container (via its YAML) references a PVC, Kubernetes provides the volume to the container. For more information about VAST and Kubernetes interaction, see A Deeper Dive into VAST CSI.
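For example, a storage class and PVC for dynamic provisioning might look like the following sketch. The provisioner name and parameter keys shown here (csi.vastdata.com, root_export, vip_pool_name) and all object names are illustrative assumptions; use the values documented for your VAST CSI Driver release.

```yaml
# Storage class referencing the VAST CSI dynamic provisioner.
# Provisioner name and parameter keys are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vastdata-filesystem
provisioner: csi.vastdata.com        # assumed provisioner name
parameters:
  root_export: /k8s                  # assumed: base path under which volume directories are created
  vip_pool_name: vippool-1           # assumed: VIP pool used to mount volumes
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# PVC that requests a volume from the storage class above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: vastdata-filesystem
```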
Important
Dynamic provisioning is optimized to provide storage that is accessible only from the Kubernetes containerized environment. Do not use dynamic provisioning if you need to expose existing storage on VAST to both Kubernetes containerized applications and non-Kubernetes applications.
Check the deployment requirements and follow these steps to implement dynamic provisioning with VAST CSI Driver.
Optionally, take advantage of the following capabilities:
- Support for ephemeral volumes. You can provision volumes directly from Pod definitions; they are discarded once the pod terminates (see the sketch after this list).
- Overriding mount options for a specific PVC.
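As a rough sketch of the ephemeral-volume case, a volume can be declared inline in the Pod definition so that it is created when the Pod starts and removed when the Pod terminates. The driver name (csi.vastdata.com) and the size attribute key are assumptions for illustration; consult the VAST CSI documentation for the exact driver name and supported volume attributes.

```yaml
# Pod with an inline CSI ephemeral volume: the volume exists only for the
# lifetime of the Pod. Driver name and attribute keys are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      csi:
        driver: csi.vastdata.com       # assumed driver name registered by the VAST CSI deployment
        volumeAttributes:
          size: "5Gi"                  # assumed attribute key for the requested capacity
```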
The VAST CSI deployment has two components:
- A controller pod that is responsible for creating and deleting volumes.
- A node pod that is responsible for creating the mount on the node (host) it runs on. The node pod contains the VAST CSI Driver, which invokes the mount and umount commands.
The node pod gets all necessary information about existing VIP pools and views from the controller; it does not communicate with the VAST Management Service (VMS) directly. The only exception is ephemeral volumes, for which the node pod handles the entire volume lifecycle.
The operation sequence is as follows:
1. An application makes a Persistent Volume Claim (PVC) that references the storage class, which in turn references the dynamic provisioner for VAST Cluster via CSI.
2. The CSI driver receives the request to create a persistent volume. Based on the NFS configuration information provided via the storage class, it creates a directory on the VAST cluster (named for the persistent volume) and informs Kubernetes that the volume has been created.
3. Kubernetes allocates the logical persistent volume, which references the VAST Cluster directory. At this point, you can see the directory by listing the view on the VAST cluster.
4. Kubernetes begins the application launch sequence and recognizes that the application references a PVC.
5. Kubernetes asks the VAST CSI controller to publish the volume to the node where the application is to run.
6. The VAST CSI controller receives the API request and allocates one of the VAST cluster's virtual IPs (VIPs) to the node.
   Note that each volume gets a single VIP on each node. Multiple volumes on the same node, or the same volume on different nodes, will likely get different VIPs so that load is spread across volumes and nodes.
7. Kubernetes asks the VAST CSI Driver on the node to mount the volume, making it available for the application.
8. Kubernetes starts the application within the pod.
9. The application can create a file in what appears to be a local directory within its container but is actually a directory on the VAST cluster (that directory being the persistent volume), as shown in the sketch below.
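To tie the sequence together, here is a minimal sketch of a Pod that consumes the PVC from the earlier example. The claim name (app-data) and the mount path are illustrative; the path the container sees as local is backed by the directory created on the VAST cluster.

```yaml
# Pod that mounts the dynamically provisioned volume.
# The PVC name (app-data) is the illustrative claim from the earlier example.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data            # appears local to the container; backed by a directory on the VAST cluster
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```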