# How to have only one disk offline in Kubernetes

*(cniackz/public GitHub Wiki)*

Steps:

1. Get the name of the Pod: `main-storage-pool-0-3`

```
$ k get pod -n ns-3
NAME                        READY   STATUS    RESTARTS         AGE
main-storage-pool-0-0       1/1     Running   0                15h
main-storage-pool-0-1       1/1     Running   0                15h
main-storage-pool-0-2       1/1     Running   0                15h
main-storage-pool-0-3       1/1     Running   0                15h
main-storage-prometheus-0   2/2     Running   0                139d
ubuntu                      1/1     Running   10 (5d18h ago)   75d
```
2. Get the Claim: `data0-main-storage-pool-0-3`

```
$ k get pvc -n ns-3 | grep main-storage-pool-0-3
data0-main-storage-pool-0-3                         Bound    pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6   1Ti        RWO            directpv-min-io   140d
data1-main-storage-pool-0-3                         Bound    pvc-824317ad-aadb-4050-b7b0-139f087d831c   1Ti        RWO            directpv-min-io   140d
data10-main-storage-pool-0-3                        Bound    pvc-f1e3cb4d-0a32-496c-b436-23d5a2b5410e   1Ti        RWO            directpv-min-io   140d
data11-main-storage-pool-0-3                        Bound    pvc-44c7d93e-002e-4a92-aeb0-33fff016dcb1   1Ti        RWO            directpv-min-io   140d
data2-main-storage-pool-0-3                         Bound    pvc-32b5e625-e63c-4e9a-97a3-5cf12646b0d7   1Ti        RWO            directpv-min-io   140d
data3-main-storage-pool-0-3                         Bound    pvc-fdd43320-b816-4e1e-a576-97151e1968a6   1Ti        RWO            directpv-min-io   15h
data4-main-storage-pool-0-3                         Bound    pvc-a6ac71b1-8128-4936-b8a8-0f989bd3c968   1Ti        RWO            directpv-min-io   140d
data5-main-storage-pool-0-3                         Bound    pvc-a92d429d-2908-4666-938f-a65637d202ef   1Ti        RWO            directpv-min-io   140d
data6-main-storage-pool-0-3                         Bound    pvc-da263c62-6690-4f75-8e33-7e35e84a3198   1Ti        RWO            directpv-min-io   140d
data7-main-storage-pool-0-3                         Bound    pvc-3157904c-8c8c-46cb-8905-8aba0e66e531   1Ti        RWO            directpv-min-io   140d
data8-main-storage-pool-0-3                         Bound    pvc-670e1c7b-98c6-4023-abcb-bf3baa00c518   1Ti        RWO            directpv-min-io   140d
data9-main-storage-pool-0-3                         Bound    pvc-3cb66135-28ef-4eb4-9793-1aeccad42463   1Ti        RWO            directpv-min-io   140d
```
3. Get the Volume: `pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6`

```
$ k get pv | grep data0-main-storage-pool-0-3
pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6   1Ti        RWO            Delete           Bound    ns-3/data0-main-storage-pool-0-3   directpv-min-io   140d
```
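The PVC-to-PV mapping can also be read straight from the claim's `.spec.volumeName`, with no grep over the full PV list. Below is a minimal sketch: the real cluster command is shown in the comment, and the `sed` filter runs offline against a sample fragment of the claim spec copied from the listing above.

```shell
# On the cluster, the direct lookup would be:
#   kubectl get pvc data0-main-storage-pool-0-3 -n ns-3 -o jsonpath='{.spec.volumeName}'
# Offline illustration against a sample fragment of the claim spec:
pvc_spec='{"spec":{"volumeName":"pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6"}}'
echo "$pvc_spec" | sed -n 's/.*"volumeName":"\([^"]*\)".*/\1/p'
# prints pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6
```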
4. Get the Drive: `76e5427b-9b44-4151-85ce-2077e18a1d15`

```
minio@minio-k8s17:~$ kubectl directpv volumes list --pod-name main-storage-pool-0-3 -o wide
 VOLUME                                    CAPACITY  NODE         DRIVE     PODNAME                PODNAMESPACE    DRIVENAME                            
 pvc-3157904c-8c8c-46cb-8905-8aba0e66e531  1.0 TiB   minio-k8s17  nvme1n1   main-storage-pool-0-3  ns-3            9c83fe3e-5534-42f9-83be-651862a3bd84 
 pvc-32b5e625-e63c-4e9a-97a3-5cf12646b0d7  1.0 TiB   minio-k8s17  nvme11n1  main-storage-pool-0-3  ns-3            625752e7-c3e9-4909-9079-228959c09b76 
 pvc-3cb66135-28ef-4eb4-9793-1aeccad42463  1.0 TiB   minio-k8s17  nvme7n1   main-storage-pool-0-3  ns-3            7167cb35-544f-4a1b-bb64-d7a76ce7f4ad 
 pvc-44c7d93e-002e-4a92-aeb0-33fff016dcb1  1.0 TiB   minio-k8s17  nvme6n1   main-storage-pool-0-3  ns-3            1ce3e9a5-0848-499a-9595-82476b667ee6 
 pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6  1.0 TiB   minio-k8s17  nvme10n1  main-storage-pool-0-3  ns-3            76e5427b-9b44-4151-85ce-2077e18a1d15 
 pvc-670e1c7b-98c6-4023-abcb-bf3baa00c518  1.0 TiB   minio-k8s17  nvme2n1   main-storage-pool-0-3  ns-3            edd38fe2-4207-4d29-b3cf-affa1f8614fc 
 pvc-824317ad-aadb-4050-b7b0-139f087d831c  1.0 TiB   minio-k8s17  nvme12n1  main-storage-pool-0-3  ns-3            87e3e38d-7cc9-453c-b142-3e20f1ed6d66 
 pvc-a6ac71b1-8128-4936-b8a8-0f989bd3c968  1.0 TiB   minio-k8s17  nvme9n1   main-storage-pool-0-3  ns-3            40277f70-0984-439f-81c4-2a91d00dab73 
 pvc-a92d429d-2908-4666-938f-a65637d202ef  1.0 TiB   minio-k8s17  nvme5n1   main-storage-pool-0-3  ns-3            2267f00b-cffc-4b60-a8b4-eccfa14ad58d 
 pvc-da263c62-6690-4f75-8e33-7e35e84a3198  1.0 TiB   minio-k8s17  nvme8n1   main-storage-pool-0-3  ns-3            416c99c0-c946-4104-99b1-814ad7372786 
 pvc-f1e3cb4d-0a32-496c-b436-23d5a2b5410e  1.0 TiB   minio-k8s17  nvme4n1   main-storage-pool-0-3  ns-3            73df468a-0b01-4717-9cce-37785c015c7c 
 pvc-fdd43320-b816-4e1e-a576-97151e1968a6  1.0 TiB   minio-k8s17  nvme1n1   main-storage-pool-0-3  ns-3            9c83fe3e-5534-42f9-83be-651862a3bd84 
```
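Instead of scanning the table by eye, the drive ID for one volume can be pulled out with `awk`: the first column is the volume, the last is the drive. The sketch below runs on one row copied from the listing above; on the cluster you would pipe the full `kubectl directpv volumes list --pod-name main-storage-pool-0-3 -o wide` output through the same filter.

```shell
# Print the DRIVENAME (last column) of the row whose VOLUME (first column) matches.
row=' pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6  1.0 TiB   minio-k8s17  nvme10n1  main-storage-pool-0-3  ns-3            76e5427b-9b44-4151-85ce-2077e18a1d15 '
echo "$row" | awk -v vol='pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6' '$1 == vol { print $NF }'
# prints 76e5427b-9b44-4151-85ce-2077e18a1d15
```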
5. Get the mount point: `/var/lib/direct-csi/mnt/76e5427b-9b44-4151-85ce-2077e18a1d15`

```
minio@minio-k8s17:~$ lsblk | grep 76e5427b-9b44-4151-85ce-2077e18a1d15
                                      /var/lib/direct-csi/mnt/76e5427b-9b44-4151-85ce-2077e18a1d15
```
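DirectPV (legacy `direct-csi`) mounts each drive under a fixed prefix, so the mount point can also be derived directly from the drive ID found in the DirectPV listing, without grepping `lsblk`. The prefix below matches the path shown above.

```shell
drive='76e5427b-9b44-4151-85ce-2077e18a1d15'    # DRIVENAME from the DirectPV listing
mount_point="/var/lib/direct-csi/mnt/${drive}"  # fixed direct-csi mount prefix
echo "$mount_point"
# prints /var/lib/direct-csi/mnt/76e5427b-9b44-4151-85ce-2077e18a1d15
```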
6. Delete the `.minio.sys` folder from the mount point in a loop, so that healing can never restore it:

```
while true
do
  # remove .minio.sys again and again so the healer can never bring the disk back
  rm -rf /var/lib/direct-csi/mnt/76e5427b-9b44-4151-85ce-2077e18a1d15/pvc-4a2be4a2-fcfa-46d6-ba7e-ae5bfebc73f6/.minio.sys
  sleep 1  # brief pause to avoid a busy loop
done
```
7. Look at the metrics and check for bugs: with the loop running, exactly one disk should stay offline.
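One way to confirm the disk shows up as offline is to scrape MinIO's Prometheus metrics. A sketch, assuming the v2 cluster metrics endpoint (`/minio/v2/metrics/cluster`) and the `minio_cluster_drive_offline_total` metric; metric names and the tenant's service name vary across MinIO versions, so treat both as assumptions to verify against your deployment.

```shell
# On the cluster (the service name below is hypothetical):
#   kubectl -n ns-3 port-forward svc/main-storage-console 9000:9000 &
#   curl -s http://localhost:9000/minio/v2/metrics/cluster | grep drive_offline
# Offline illustration with a sample scrape line:
scrape='minio_cluster_drive_offline_total 1'
echo "$scrape" | awk '/drive_offline_total/ { print $2 }'
# prints 1
```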