Storage: PV/PVC QA

Q&As on PV/PVC

  1. Can you delete a PV which is still in use by a Pod?

    No, you cannot. Kubernetes uses the "Storage Object in Use Protection" feature to prevent it. The purpose of this feature is to ensure that PersistentVolumeClaims (PVCs) in active use by a Pod, and PersistentVolumes (PVs) that are bound to PVCs, are not removed from the system, as this may result in data loss.

  2. What will happen if you delete a PVC when it is in use by a Pod (i.e. active Pod which is pointing to it)?

    It won't delete the PVC right away; it sets the status of the PVC to "Terminating" and waits until the Pod is deleted. You can check the PVC's status by "describe"-ing it. The PVC is deleted as soon as the Pod which was using it gets deleted.
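
    For example, a sketch of what "describe"-ing a hypothetical PVC named data-pvc might show while a Pod is still using it; the kubernetes.io/pvc-protection finalizer is what delays the deletion:

      kubectl describe pvc data-pvc
      # Name:          data-pvc
      # Status:        Terminating
      # Finalizers:    [kubernetes.io/pvc-protection]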

  3. Would it be possible to recover data from a PVC that was deleted by mistake?

    It is possible only if the "Reclaim Policy" of the PV used by the deleted PVC was "Retain". With the "Retain" policy, when the PersistentVolumeClaim (PVC) is deleted, the PersistentVolume still exists (the associated storage asset still exists in its external infrastructure, such as AWS EBS, GCE PD, Azure Disk, etc.) and the volume is considered "released". But it is not yet available for another claim, because the previous claimant's data remains on the volume.
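
    A minimal PV sketch with the "Retain" reclaim policy (the PV name and the AWS EBS volume ID are hypothetical, for illustration only):

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: data-pv                           # hypothetical name
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain   # keep the storage asset after the PVC is deleted
        awsElasticBlockStore:
          volumeID: vol-0123456789abcdef0       # hypothetical EBS volume
          fsType: ext4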

  4. How do you manually recover data from a deleted PVC (whose PV has "Reclaim Policy"="Retain")?

    Delete the PV that the PVC was bound to, and create a new one with the same storage asset definition. All the existing data will still be there, because the storage asset itself was never removed.
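
    A command-level sketch of that recovery, assuming a hypothetical PV named data-pv whose definition is saved to data-pv.yaml:

      # Save the existing PV definition (it still points at the storage asset)
      kubectl get pv data-pv -o yaml > data-pv.yaml
      # Delete the released PV object; the storage asset itself is kept ("Retain" policy)
      kubectl delete pv data-pv
      # Remove the stale claimRef from data-pv.yaml, then recreate the PV and a matching PVC
      kubectl apply -f data-pv.yaml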

  5. What option would you go with, if you want to remove the volume and do not need any data on it?

    Choose the Reclaim Policy "Delete". For volume plugins that support this feature, it removes the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure.
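
    A sketch of switching an existing PV (hypothetical name) to the "Delete" reclaim policy:

      kubectl patch pv data-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'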

  6. While creating a PV, how can you ensure it is reserved only for the specific PVC that you intend to create?

    Specify the relevant PersistentVolumeClaim in the claimRef field of the PV so that other PVCs cannot bind to it. This is useful if you want to consume PersistentVolumes that have their reclaim policy set to Retain, including cases where you are reusing an existing PV.
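
    A minimal PV sketch with a pre-populated claimRef (hypothetical PVC name and namespace), so that only that PVC can bind to it:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: reserved-pv            # hypothetical name
      spec:
        storageClassName: ""
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteOnce
        claimRef:
          name: app-data             # only this PVC ...
          namespace: default         # ... in this namespace can bind to the PV
        hostPath:
          path: /data/reserved       # hypothetical backing storage, for illustration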

  7. How do you expand a storage volume of NFS type?

    First of all, the storage provisioner being used must support the expand operation. If it does, the steps to be performed are as follows (a command sketch follows the example link below):

    1. The NFS StorageClass and the backend StorageClass must support volume expansion, which can be configured by setting "allowVolumeExpansion: true" in the StorageClass definition.

    2. Resize the NFS volume by editing the backend PVC's spec.resources.requests.storage to the newly desired size, which must be greater than the original size.

    3. Wait until the expansion of the backend PVC succeeds, i.e. .status.capacity.storage matches spec.resources.requests.storage.

    4. Update the NFS PV capacity manually, if necessary.

    5. Update the NFS PVC capacity.

    Example: https://github.com/openebs/dynamic-nfs-provisioner/blob/develop/docs/tutorial/nfs-volume-resize.md.
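
    A minimal command sketch of steps 1-3, with hypothetical StorageClass and backend PVC names (in practice, take the real names from the tutorial above):

      # Step 1: allow expansion on the StorageClass(es)
      kubectl patch storageclass nfs-sc -p '{"allowVolumeExpansion": true}'
      # Step 2: request a bigger size on the backend PVC
      kubectl patch pvc nfs-backend-pvc -n openebs --type merge -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
      # Step 3: wait until status.capacity.storage reports the new size
      kubectl get pvc nfs-backend-pvc -n openebs -o jsonpath='{.status.capacity.storage}'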

  8. How do you recover a PVC if the "expand" operation against the underlying storage has failed?

    If expanding the underlying storage fails, the cluster administrator can manually recover the PersistentVolumeClaim (PVC) state and cancel the resize requests; otherwise, the resize requests are continuously retried by the controller without administrator intervention. The recovery steps are (see the command sketch after the list):

    • Mark the PersistentVolume (PV) that is bound to the PersistentVolumeClaim (PVC) with the Retain reclaim policy.
    • Delete the PVC. Since the PV has the Retain reclaim policy, we will not lose any data when we recreate the PVC.
    • Delete the claimRef entry from the PV spec, so that a new PVC can bind to it. This should make the PV Available.
    • Re-create the PVC with a smaller size than the PV and set the volumeName field of the PVC to the name of the PV. This should bind the new PVC to the existing PV.
    • Don't forget to restore the reclaim policy of the PV.
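
    A command sketch of those steps, assuming a hypothetical PV named data-pv bound to a PVC named data-pvc (to be re-created from data-pvc.yaml):

      # 1. Protect the data: set the reclaim policy to Retain
      kubectl patch pv data-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
      # 2. Delete the stuck PVC (the data is kept because of the Retain policy)
      kubectl delete pvc data-pvc
      # 3. Clear the claimRef so that a new PVC can bind; the PV should become Available
      kubectl patch pv data-pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
      # 4. Re-create the PVC with the original (smaller) size and volumeName: data-pv
      kubectl apply -f data-pvc.yaml
      # 5. Restore the original reclaim policy of the PV, if it was different
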
  9. At what level do a PV's volume access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany & ReadWriteOncePod) apply? Node or Pod?

    While PVs with ReadWriteMany & ReadOnlyMany can be mounted and used by multiple nodes, ReadWriteOnce allows the volume to be mounted by a single node only, which may still run multiple Pods pointing to the PV. So these three modes apply at the node level.

    ReadWriteOncePod works at the Pod level: the volume can be mounted and used by only a single Pod in the whole cluster.

  10. What are the use cases for the access modes "ReadWriteOnce" & "ReadWriteMany"?

    ReadWriteOnce suits block-storage-backed volumes (e.g. AWS EBS, GCE PD, Azure Disk) used by workloads that run on a single node, such as a database. ReadWriteMany suits shared file storage (e.g. NFS, CephFS, Azure Files) where multiple Pods on different nodes need to read and write the same data, such as shared content served by several web Pods.

  11. How do you ensure a particular PVC is read from & written to by only a single Pod in the cluster?

    Setting the access mode of a PV to "ReadWriteOncePod" ensures the volume can be mounted as read-write by a single Pod only. Use this if you want to ensure that only one Pod across the whole cluster can read that PVC or write to it.
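
    A minimal PVC sketch using that access mode (hypothetical names; ReadWriteOncePod requires a CSI driver and a reasonably recent Kubernetes version):

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: single-writer-pvc      # hypothetical name
      spec:
        accessModes:
          - ReadWriteOncePod         # only one Pod in the whole cluster may use the volume
        resources:
          requests:
            storage: 1Gi
        storageClassName: csi-sc     # hypothetical CSI-backed StorageClass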

  12. Can you find out whether a Pod is using a PVC and, if so, what its name is?

    Yes. The "Used By" field in the output of the below command shows the Pod which is currently using the PVC.

    kubectl describe pvc <PVC-NAME>
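
    An illustrative, trimmed example of that output (hypothetical PVC and Pod names):

      kubectl describe pvc mongo-data
      # Name:          mongo-data
      # Status:        Bound
      # Volume:        data-pv
      # Capacity:      10Gi
      # Access Modes:  RWO
      # Used By:       mongo-0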
    
  13. What will happen if you try to create a PV pointing to a non-existent path (in hostPath)?

    It does NOT throw any errors and the PV gets created. The error only surfaces when a Pod using the PV/PVC is created, for example, if the path cannot be created, like the below:

"Error: failed to generate container "1285929e2fa2f3f99054364fd2961d95aac35c66ce01433c6fed1e6b e8302920" spec: failed to generate spec: failed to mkdir "/data/mongo": mkdir /data: read-only file system"
