Choose your upgrade path:
- To upgrade from 2.2.0 to 2.2.x:
  - Follow the procedure for upgrading a Helm-based deployment.
  - (Optional) Run a script to create View-per-Volume views for existing PVCs.
- To upgrade from 2.1.x to 2.2.x
- To upgrade from 2.0.x to 2.2.x
- To upgrade from 1.0.x to 2.2.x
Containers referencing the VAST CSI Driver will run with the new version of the driver the next time they are created.
Use the following procedure if both the old and new releases of VAST CSI Driver are deployed using a Helm chart.
- Refresh the Helm repository information to see which VAST CSI Driver versions are available for deployment:
  helm repo update
- Update the Helm chart configuration file (values.yaml) to specify the target release of VAST CSI Driver:
  csiImageTag: <new release>
  For example:
  csiImageTag: v2.2.5
- Run the Helm upgrade:
  helm upgrade <old release> <repo>/vastcsi -f <filename>.yaml
  Where:
  - <old release> is the VAST CSI Driver release to be upgraded from.
  - <repo> is the name of the VAST CSI Driver Helm repository.
  - vastcsi is the name of the VAST CSI Driver Helm chart.
  - <filename>.yaml is the Helm chart configuration file.
  For example:
  helm upgrade csi-driver vastcsi/vastcsi -f values.yaml
- Verify the status of the new VAST CSI Driver chart release:
  helm ls
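As an extra check beyond helm ls, you may want to confirm that the driver pods actually picked up the new image tag. The sketch below is not part of the official procedure: the tag-extraction helper is plain shell, the kubectl jsonpath query is standard, and the namespace and sample image reference are assumptions to adjust for your deployment.

```shell
# Sketch: check that the CSI pods run the csiImageTag you set in values.yaml.
# Assumes the chart is deployed in the vast-csi namespace; adjust as needed.
EXPECTED_TAG=v2.2.5

# Extract the tag (the text after the last ':') from an image reference.
image_tag() {
    echo "$1" | awk -F: '{print $NF}'
}

# Example on a sample (made-up) image reference:
image_tag "vast-csi:v2.2.5"    # prints v2.2.5

# Against the live cluster, list the images of the CSI pods and compare each
# tag with $EXPECTED_TAG:
#   kubectl get pods -n vast-csi \
#     -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```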
Run the view creation script create-views.py as part of migrating your environment from VAST CSI Driver 2.2.0 to VAST CSI Driver 2.2.1 or later.
Note: Running this script is an optional step during the upgrade.
Starting with version 2.2, VAST CSI Driver automatically creates a VAST Cluster view for each volume it provisions. A view policy is assigned to control access to automatically created views.
The view creation script creates a view for each PVC provisioned with pre-2.2.1 versions of the VAST CSI Driver and assigns a user-supplied view policy to those views. All quotas that were previously under the same root export view get their own views pointing to the same location, with no impact on the stored data. This ensures that the PVCs remain accessible in case the root export view is manually deleted.
To run the view creation script:
- Download the script from the VAST GitHub space:
  wget https://raw.githubusercontent.com/vast-data/vast-csi/v2.2/scripts/create-views.py
- Run the script on a host where kubectl is already installed and configured with the target Kubernetes cluster. Use the following syntax:
  create-views.py --view-policy <view policy>
  Where <view policy> specifies an existing VAST Cluster view policy to be assigned to the automatically created views.
  For example:
  python3 create-views.py --view-policy mypolicy
- Verify that the script completes successfully.
You must migrate Kubernetes persistent volumes (PVs) that were created with VAST CSI Driver 2.0.x or earlier before you can use them with VAST CSI Driver 2.2.x.
Each PV created with VAST CSI Driver 2.1.0 or later has the following parameters in its volumeAttributes configuration entry:
- root_export: the path within the VAST cluster where the volume directory is created.
- vip_pool_name: the name of the VAST VIP pool to be used when mounting the volume.
- mount_options: custom NFS mount options to be used when mounting the volume. If no custom mount options are needed, the parameter is specified with an empty value ('').
PVs created with earlier versions of VAST CSI Driver do not have these attributes and thus cannot be mounted with VAST CSI Driver 2.2.x for Kubernetes pods.
To add the missing attributes to PVs created with pre-2.1.1 versions of VAST CSI Driver, download and run the PV migration script provided by VAST.
Run the PV migration script migrate-pv.py as part of the procedure to migrate your environment from VAST CSI Driver 2.0.x or earlier to VAST CSI Driver 2.1.1.
The PV migration script adds the root_export and vip_pool_name attributes to Kubernetes persistent volumes (PVs) created with VAST CSI Driver 2.0.x or earlier. After running this script, VAST CSI Driver 2.1.1 can mount the updated PVs for Kubernetes pods.
VAST recommends running the PV migration script before you remove the old version of VAST CSI Driver, so that the script can automatically determine and set the proper root_export, vip_pool_name, and mount_options values.
However, you can choose to run the script after removing the old version of VAST CSI Driver and installing VAST CSI Driver 2.1.1. In this case, you must specify the root_export, vip_pool_name, and mount_options values manually.
Complete these steps:
- Download the PV migration script from the VAST GitHub space:
  wget https://raw.githubusercontent.com/vast-data/vast-csi/v2.1/scripts/migrate-pv.py
- Run the script on a host where kubectl is already installed and configured with the target Kubernetes cluster.
  - If the old version of VAST CSI Driver has not been removed yet, use the following syntax:
    migrate-pv.py [--vast-csi-namespace='<namespace>']
    Specify the --vast-csi-namespace parameter if the namespace of the old version of VAST CSI Driver is not vast-csi.
    For example:
    python3 ./migrate-pv.py --vast-csi-namespace='kube-system'
  - If you have removed the old version of VAST CSI Driver and installed VAST CSI Driver 2.1.1, use the following syntax:
    migrate-pv.py --force --root_export=<path> --vip_pool_name=<vip_pool_name> [--mount_options=<options>]
    Where:
    - --force is a required parameter indicating that the script is run after you have removed the old version of VAST CSI Driver and installed VAST CSI Driver 2.1.1.
    - --root_export identifies the storage path that was provided when the old version of VAST CSI Driver was deployed. This is the value you supplied at the "NFS Export Path" prompt.
    - --vip_pool_name identifies the VIP pool to be used for the PVs. Specify a VIP pool that has been defined in VAST Cluster. It does not have to be the same as in the original deployment.
    - --mount_options specifies custom NFS mount options. Enter a comma-separated list of options. Specify '' (empty string) if none are needed.
    For example:
    python3 ./migrate-pv.py --force --root_export=/data/k8s --vip_pool_name=vippool-1 --mount_options=''
- Verify that the script output is similar to the following:
  info: PV <pv_name> updated.
  info: Done.
  Note: If an error was found in the user-supplied parameters, rerun the script with the correct values.
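To double-check the result, you can list each PV alongside its root_export and vip_pool_name attributes and look for empty columns. This is a sketch, not part of the official procedure; the attribute names come from this article, and the jsonpath query is standard kubectl.

```shell
# Sketch: list every PV with its VAST volumeAttributes so you can spot
# entries the migration script missed. Requires kubectl configured for the
# target cluster (the function below only defines the query; nothing is run
# against a cluster in this snippet).
list_pv_attrs() {
    kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.csi.volumeAttributes.root_export}{"\t"}{.spec.csi.volumeAttributes.vip_pool_name}{"\n"}{end}'
}

# A line with an empty attribute column still needs migration. This helper is
# pure shell; it expects "name<TAB>root_export<TAB>vip_pool_name".
needs_migration() {
    [ -z "$(echo "$1" | cut -f2)" ] || [ -z "$(echo "$1" | cut -f3)" ]
}

# Example on sample data (a fully migrated PV line):
needs_migration "$(printf 'pvc-123\t/data/k8s\tvippool-1')" || echo "pvc-123 is migrated"
```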
After removal of VAST CSI Driver 2.1.x or 2.0.x, the existing containers will continue to run with the volumes provisioned by the removed VAST CSI Driver.
- Remove all VAST storage classes:
  kubectl delete sc vastdata-filesystem
- Remove the VAST CSI driver:
  kubectl delete csidriver csi.vastdata.com
- Remove the VAST namespace:
  kubectl delete all --all -n vast-csi
- Remove the VAST ServiceAccounts:
  kubectl delete sa csi-vast-controller-sa -n vast-csi
  kubectl delete sa csi-vast-node-sa -n vast-csi
- Remove the VAST management secret:
  kubectl delete secret csi-vast-mgmt -n vast-csi
- Remove the VAST ClusterRoles:
  kubectl delete ClusterRole csi-vast-attacher-role
  kubectl delete ClusterRole csi-vast-provisioner-role
- Remove the VAST ClusterRoleBindings:
  kubectl delete ClusterRoleBinding csi-resizer-role
  kubectl delete ClusterRoleBinding csi-vast-attacher-binding
  kubectl delete ClusterRoleBinding csi-vast-provisioner-binding
- Remove the VAST VolumeSnapshotClass:
  kubectl delete VolumeSnapshotClass vastdata-snapshot
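If you prefer to review the deletions before running them, the steps above can be collected into a single sketch script that only prints the commands; pipe its output to sh to execute. The resource names are the defaults used in this article; adjust them if your deployment renamed anything.

```shell
# Sketch: emit the removal commands from the steps above for review.
# Usage:  sh print-removal.sh        (review the commands)
#         sh print-removal.sh | sh  (execute them)
print_removal_cmds() {
    cat <<'EOF'
kubectl delete sc vastdata-filesystem
kubectl delete csidriver csi.vastdata.com
kubectl delete all --all -n vast-csi
kubectl delete sa csi-vast-controller-sa -n vast-csi
kubectl delete sa csi-vast-node-sa -n vast-csi
kubectl delete secret csi-vast-mgmt -n vast-csi
kubectl delete ClusterRole csi-vast-attacher-role
kubectl delete ClusterRole csi-vast-provisioner-role
kubectl delete ClusterRoleBinding csi-resizer-role
kubectl delete ClusterRoleBinding csi-vast-attacher-binding
kubectl delete ClusterRoleBinding csi-vast-provisioner-binding
kubectl delete VolumeSnapshotClass vastdata-snapshot
EOF
}
print_removal_cmds
```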
To remove VAST CSI Driver 1.0.x, you'll need the YAML configuration file that was used to deploy VAST CSI Driver. You can generate the file using the 1.0.x container.
kubectl delete -f vast-csi-deployment.yaml