How to Use Dynamic Storage Provisioning in Kubernetes


Here are my notes on how I enabled dynamic provisioning for persistent volumes in my garage Kubernetes cluster, using a combination of:

  • NFS server
  • CSI driver for NFS
  • PersistentVolumeClaim

NFS server

I used a typical NFS server installation on Ubuntu Linux, following the official Ubuntu documentation.
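For reference, the setup boils down to something like the following sketch. The export path matches the `share` used in the storage class later on; the subnet and export options are assumptions, so adjust them for your network:

```shell
# install the NFS server package (Ubuntu/Debian)
sudo apt install -y nfs-kernel-server

# create the export directory; nobody:nogroup matches typical NFS ID squashing
sudo mkdir -p /var/nfs
sudo chown nobody:nogroup /var/nfs

# export it to the LAN (the 192.168.1.0/24 subnet is an assumption)
echo '/var/nfs 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# apply and verify the export
sudo exportfs -ra
sudo exportfs -v
```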

CSI driver for NFS

The GitHub repo provides a Helm chart that makes it easy to install csi-driver-nfs into a Kubernetes cluster. Since I rely on ArgoCD to deploy things in my cluster, I created an ArgoCD Application for csi-driver-nfs:

# csi-driver-nfs.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: csi-driver-nfs
  namespace: argocd
spec:
  destination:
    namespace: kube-system
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: csi-driver-nfs
    repoURL: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
    targetRevision: v4.9.0
    helm:
      values: |
        # keep the driver pods off the tainted master node
        # (empty lists override the chart's default control-plane tolerations)
        controller:
          tolerations: []
        node:
          tolerations: []
        # create the default storage class using local NFS driver settings
        storageClass:
          create: true
          name: nfs-csi
          annotations:
            storageclass.kubernetes.io/is-default-class: "true"
          parameters:
            server: 192.168.1.101
            share: /var/nfs
          # set to Retain to preserve files after the volume is deleted
          reclaimPolicy: Delete
          volumeBindingMode: Immediate
          mountOptions:
            - nfsvers=4.1
  syncPolicy:
    automated:
      prune: true
    syncOptions:
      - CreateNamespace=true

Once the manifest was checked in and synced by ArgoCD, I confirmed the storage class was created properly:

> k get storageclass -A
NAME                PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi (default)   nfs.csi.k8s.io   Delete          Immediate           false                  2d5h
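Because of the is-default-class annotation, this class also serves PVCs that omit storageClassName. The driver pods themselves should also be up in kube-system; the pod name prefix comes from the chart defaults, so treat it as an assumption:

```shell
# the controller and per-node plugin pods land in kube-system
> k get pods -n kube-system | grep csi-nfs
```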

PersistentVolumeClaim

Here’s an example PVC to create a volume dynamically:

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raytest
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi

Let’s see how it works when deployed:

> k apply -f pvc.yaml
persistentvolumeclaim/raytest created

> k get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
raytest   Bound    pvc-ef9ecd8f-2ca2-4b96-8dc1-c7bb2e5c96ea   10Gi       RWX            nfs-csi        <unset>                 9s

> k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS    VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-ef9ecd8f-2ca2-4b96-8dc1-c7bb2e5c96ea   10Gi       RWX            Delete           Bound    default/raytest                      nfs-csi         <unset>                          22s

# on the NFS server
> ls -lht /var/nfs
total 68K
drwxr-xr-x  2 nobody   nogroup  4.0K Oct 11 16:51 pvc-ef9ecd8f-2ca2-4b96-8dc1-c7bb2e5c96ea
...
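To actually consume the volume, a pod just references the claim. A minimal sketch (the pod name, image, and mount path are made up for illustration):

```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: raytest-pod          # hypothetical name for illustration
  namespace: default
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # where the NFS share appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raytest   # the PVC created above
```

Since the claim is ReadWriteMany, multiple pods can mount it at the same time, and anything written there shows up under the pvc-… subdirectory on the NFS server.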

🙂