Example Q&As on PersistentVolumes (PV) and PersistentVolumeClaims (PVC) with NFS Provisioners (Helm) - q-uest/notes-doc-k8s-docker-jenkins-all-else GitHub Wiki
If you want to create NFS filesystems: https://github.com/q-uest/Notes-Jenkins/wiki/Create-NFS-mount-in-Cloud
Note:

- A StorageClassName should be provided for both the PV and the PVC, and the two must match.
- The host path given in a PV is created, if it does not already exist on the host machine, at the time a Pod is created whose PVC is bound to that PV.
===
```
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
tar -xvf nfs-subdir-external-provisioner-4.0.16.tgz
```
Note: each NFS mount/volume needs its own Helm release (with its own StorageClassName, etc.). Edit `values.yaml` to change the StorageClassName, and provide a different Helm release name if you are installing the provisioner a second time for another filesystem.
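The per-release fields in `values.yaml` typically look like the sketch below (the chart's actual file has many more keys; the server, path, and class name here are illustrative values, not defaults):

```yaml
nfs:
  server: 10.138.0.7      # NFS server this release will serve
  path: /jenkins-data     # exported path for this release
storageClass:
  name: jenkins-sc        # must be unique per release
```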
When you want to set up the provisioner for multiple volumes/mounts, pass the required variable values as shown below, from the same path where you created your first provisioner.

Without any resource requests/limits (default):
```
helm install jenkins-prov nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=10.138.0.7 --set nfs.path=/jenkins-data --set storageClass.name=jenkins-sc -f values.yaml
```
Pass resource limits on the command line:
```
helm install jenkins-prov nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=10.138.0.7 --set nfs.path=/jenkins-data --set storageClass.name=jenkins-sc --set resources.limits.cpu=100m --set resources.limits.memory=200Mi -f values.yaml
```
Edit the commands below as required to include resource limits/requests:
```
helm install sonarqube-prov nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=10.138.0.7 --set nfs.path=/database-data/sonar --set storageClass.name=sonarqube-sc -f values.yaml
helm install postgres-prov nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=10.138.0.7 --set nfs.path=/database-data/postgresql --set storageClass.name=postgres-sc -f values.yaml
helm install nexus-prov nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=10.138.0.7 --set nfs.path=/nexus-data --set storageClass.name=nexus-sc -f values.yaml
```
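A quick way to confirm all the releases and their StorageClasses exist before digging into any one of them (a sketch; output will vary with your cluster, and the `--filter` regex assumes the `-prov` release-name suffix used above):

```shell
# list the provisioner releases installed above
helm list --filter '.*-prov'
# each release should have created its own StorageClass
kubectl get sc
```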
```
kubectl describe sc nexus-sc

Name:                  nexus-sc
IsDefaultClass:        No
Annotations:           meta.helm.sh/release-name=nexus-prov,meta.helm.sh/release-namespace=default
Provisioner:           cluster.local/nexus-prov-nfs-subdir-external-provisioner
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
```
The following Kubernetes objects are created to support a provisioner:

```
kubectl get all | grep nexus

NAME                                                              READY   STATUS    RESTARTS   AGE
pod/nexus-prov-nfs-subdir-external-provisioner-64bccf6ff8-pn8bp   1/1     Running   0          7m52s

NAME                                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nexus-prov-nfs-subdir-external-provisioner   1/1     1            1           7m52s

NAME                                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nexus-prov-nfs-subdir-external-provisioner-64bccf6ff8   1         1         1       7m52s
```
Here is the provisioner Deployment's container spec, which serves one particular NFS volume. It contains details such as the NFS server and the volume/path it supports. Each NFS volume to be used will have its own set of Kubernetes objects like the above; hence, you would need to create another release with different NFS server/path details if you want to support another NFS volume/mount.
```yaml
containers:
  - name: nfs-subdir-external-provisioner
    image: "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2"
    imagePullPolicy: IfNotPresent
    securityContext: {}
    volumeMounts:
      - name: nfs-subdir-external-provisioner-root
        mountPath: /persistentvolumes
    env:
      - name: PROVISIONER_NAME
        value: cluster.local/test-prov-nfs-subdir-external-provisioner
      - name: NFS_SERVER
        value: 10.138.0.7
      - name: NFS_PATH
        value: /nexus-data
volumes:
  - name: nfs-subdir-external-provisioner-root
    nfs:
      server: 10.138.0.7
      path: /nexus-data
```
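The `PROVISIONER_NAME` env var above is what the StorageClass's `provisioner` field must reference; the Helm chart wires the two up to match. A sketch of what the chart generates for this release (field values inferred from the `kubectl describe sc` output earlier, not copied from the chart):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
# must equal the Deployment's PROVISIONER_NAME env var
provisioner: cluster.local/test-prov-nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
```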
Create PV:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
  namespace: jenkins   # PVs are cluster-scoped; this field is ignored
spec:
  storageClassName: test-sc
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /nexus-data/test/mypod
```
- In the above, the given path "/nexus-data/test/mypod" is NFS-mounted, and the given storageClassName "test-sc" uses the NFS provisioner named "test-prov-nfs-subdir-external-provisioner" (a Deployment object with the same name exists in Kubernetes).
- The specified host path will be created, if it does not already exist, when a Pod using this PV/PVC is created. The directory gets the following permissions:

```
drwxr-xr-x 2 root root 4096 Mar 28 03:29 mypod
```
====
PVC (provide the "volumeName" of the PV created above):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1Gi"
  volumeName: test-pv
  storageClassName: test-sc
```
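A way to apply the pair and confirm the static binding took effect (the manifest file names here are assumptions; the resource names are those defined above):

```shell
# file names are assumptions for this sketch
kubectl apply -f test-pv.yaml -f test-pvc.yaml
# STATUS should show Bound, with VOLUME = test-pv
kubectl get pvc test-pvc
```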
Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 0
    fsGroup: 100
  volumes:
    - name: testvol
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: sec-ctx-demo
      image: ubuntu:latest
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: testvol
          mountPath: /ram
```
Connect to the Pod and check:

```
>> kubectl exec -it security-context-demo -- /bin/bash

>> df -h /ram
Filesystem                          Size  Used  Avail  Use%  Mounted on
10.138.0.7:/nexus-data/test/mypod   20G   273M  20G    2%    /ram

>> ls -l / | grep ram
drwxr-xr-x 2 root root 4096 Mar 28 13:03 ram

>> touch /ram/f1
>> ls -l /ram/f1
-rw-r--r-- 1 root root 0 Mar 28 13:16 /ram/f1
```
Simply removing the "volumeName" field from the PVC spec used above makes the provisioning dynamic:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1Gi"
  storageClassName: test-sc
```
PV/PVCs created:

```
>> kubectl get pvc | grep test-pv
test-pvc   Bound   pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7   1Gi   RWO   test-sc   9s

>> kubectl get pv | grep pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7
pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7   1Gi   RWO   Delete   Bound   jenkins/test-pvc   test-sc   57s
```
Describing the PV:

```
kubectl describe pv pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7

Name:            pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: cluster.local/test-prov-nfs-subdir-external-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    test-sc
Status:          Bound
Claim:           jenkins/test-pvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.138.0.7
    Path:      /nexus-data/jenkins-test-pvc-pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7
    ReadOnly:  false
Events:          <none>
```
Note the new directory created automatically under the source's "Path" shown in the above output:

```
>> ls -l /nexus-data | grep jenkins-test-pvc-pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7
drwxrwxrwx 2 root root 4096 Mar 28 13:19 jenkins-test-pvc-pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7
```
From the Pod:

```
>> df -h /ram
Filesystem                                                                         Size  Used  Avail  Use%  Mounted on
10.138.0.7:/nexus-data/jenkins-test-pvc-pvc-244093f5-a0e1-4b45-8d26-5a80f11457a7   20G   273M  20G    2%    /ram

>> touch /ram/t1
>> ls -l /ram/t1
-rw-r--r-- 1 root root 0 Mar 28 13:24 /ram/t1
```
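Since the StorageClass was created with `archiveOnDelete=true` (visible in the `kubectl describe sc` output earlier), deleting the PVC should not discard the data: the provisioner is expected to rename the backing directory with an `archived-` prefix on the NFS export instead of removing it. A sketch, with the directory name following the pattern seen above:

```shell
kubectl delete pvc test-pvc
# on the NFS server, the directory is renamed rather than deleted:
ls /nexus-data | grep archived-
# expected pattern: archived-jenkins-test-pvc-pvc-<uid>
```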