How to test PR 652 directpv

Objective:

Steps to validate DirectPV PR 652 (the migrate functionality for legacy DirectCSI objects)

Steps:

  1. Destroy the previous minikube cluster:
minikube stop
minikube delete
  2. Unmount any stale DirectCSI mount points if needed (see the lookup example below the commands):
sudo umount /var/lib/direct-csi/mnt/d04392dc-08cd-436f-aba7-c69c1ecbb304
sudo umount /var/lib/direct-csi/mnt/037d1212-ace0-42f3-9e28-ad548859921e
sudo umount /var/lib/direct-csi/mnt/8c84e520-14f8-488f-9430-978d5bbb33ae
sudo umount /var/lib/direct-csi/mnt/c7289816-26c6-4613-b07c-d1feeaceb7fc
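The mount point UUIDs above are from the original test machine and will differ on yours. To find whatever is actually still mounted under /var/lib/direct-csi (a generic lookup, not part of the original steps), you can run:

# List any leftover DirectCSI mounts; unmount each path that shows up
findmnt --list | grep direct-csi
# or equivalently
mount | grep /var/lib/direct-csi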
  3. Clean up the previous LV setup:
sudo lvremove vg0 -y
sudo vgremove vg0 -y
sudo pvremove /dev/loop<n> /dev/loop<n> /dev/loop<n> /dev/loop<n> # replace each <n> with the loop devices created earlier
sudo losetup --detach-all
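To confirm the cleanup worked, the loop device and volume group listings should come back empty (a quick sanity check, not part of the original steps):

sudo losetup --list   # no /tmp/disk-*.img entries should remain
sudo vgs              # vg0 should no longer be listed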
  4. Create the loopback disks and LVs:
sudo truncate --size=1G /tmp/disk-{1..4}.img
for disk in /tmp/disk-{1..4}.img; do sudo losetup --find $disk; done
devices=( $(for disk in /tmp/disk-{1..4}.img; do sudo losetup --noheadings --output NAME --associated $disk; done) )
sudo pvcreate "${devices[@]}"
vgname="vg0"
sudo vgcreate "$vgname" "${devices[@]}"
for lvname in lv-{0..3}; do sudo lvcreate --name="$lvname" --size=800MiB "$vgname"; done
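Optionally verify that the volume group and the four 800 MiB logical volumes exist before continuing (assuming the names used above):

sudo vgs vg0          # volume group summary
sudo lvs vg0          # should show lv-0 .. lv-3 at ~800 MiB each
lsblk | grep vg0      # LVs appear as device-mapper entries vg0-lv--0 .. vg0-lv--3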
  5. Start minikube:
minikube start --driver=none
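The none driver runs the Kubernetes components directly on the host, so it needs root privileges and a local kubectl. A quick check that the single-node cluster is up:

minikube status
kubectl get nodes -o wide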
  6. Install the old version of DirectPV (legacy DirectCSI) via krew:
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
kubectl krew install directpv
kubectl directpv install

You should see the old DirectCSI version being installed:

ccelis@asus:~$ kubectl directpv install
W1121 19:14:57.270293  454791 installer.go:76] running on experimental version of kubernetes v1.24. This version is not officially supported yet.
I1121 19:14:57.274831  454791 ns.go:55] 'direct.csi.min.io' namespace created
I1121 19:14:57.284560  454791 rbac.go:352] 'direct.csi.min.io' rbac created
W1121 19:14:57.289548  454791 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
I1121 19:14:57.290944  454791 psp.go:128] 'direct.csi.min.io' podsecuritypolicy created
I1121 19:15:00.108506  454791 conversion_secret.go:168] 'direct.csi.min.io' conversion webhook secrets created
I1121 19:15:00.146238  454791 crd.go:135] crds successfully registered
I1121 19:15:00.153232  454791 csidriver.go:148] 'direct.csi.min.io' csidriver created
I1121 19:15:00.164867  454791 storageclass.go:48] 'direct.csi.min.io' storageclass created
I1121 19:15:00.170487  454791 service.go:39] 'direct.csi.min.io' service created
I1121 19:15:00.176947  454791 daemonset.go:41] 'direct.csi.min.io' daemonset created
I1121 19:15:00.191058  454791 deployment.go:294] 'direct.csi.min.io' deployment created
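Before formatting, you can optionally confirm that the DirectCSI pods are running. The namespace depends on the installed version, so a namespace-agnostic grep is used here:

kubectl get pods --all-namespaces | grep -i direct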
  7. Format all drives:
kubectl directpv drives format --all
  8. List all drives until they are Ready (see the watch example below):
kubectl directpv drives list
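Formatting takes a little while per drive; re-run the list command, or wrap it in watch, until all four loopback-backed drives show up as Ready:

watch -n 5 kubectl directpv drives list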
  9. Install MinIO using the manifest below (the apply commands follow it):
kind: Service
apiVersion: v1
metadata:
  name: minio
  labels:
    app: minio
spec:
  selector:
    app: minio
  ports:
    - name: minio
      port: 9000

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  labels:
    app: minio
spec:
  serviceName: "minio"
  replicas: 4
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
        direct.csi.min.io/organization: minio
        direct.csi.min.io/app: minio-example
        direct.csi.min.io/tenant: tenant-1
    spec:
      containers:
      - name: minio
        image: minio/minio
        env:
        - name: MINIO_ACCESS_KEY
          value: minio
        - name: MINIO_SECRET_KEY
          value: minio123
        volumeMounts:
        - name: minio-data-1
          mountPath: /data1
        - name: minio-data-2
          mountPath: /data2
        - name: minio-data-3
          mountPath: /data3
        - name: minio-data-4
          mountPath: /data4
        args:
        - "server"
        - "http://minio-{0...3}.minio.default.svc.cluster.local:9000/data{1...4}"
  volumeClaimTemplates: # This is the specification in which you reference the StorageClass
  - metadata:
      name: minio-data-1
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi    
      storageClassName: directpv-min-io # This field references the existing StorageClass
  - metadata:
      name: minio-data-2
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi  
      storageClassName: directpv-min-io # This field references the existing StorageClass
  - metadata:
      name: minio-data-3
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
      storageClassName: directpv-min-io # This field references the existing StorageClass
  - metadata:
      name: minio-data-4
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi 
      storageClassName: directpv-min-io # This field references the existing StorageClass
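Save the manifest above to a file (for example minio.yaml; the name is arbitrary), apply it, and confirm that the four pods start and their PVCs are bound:

kubectl apply -f minio.yaml
kubectl get pods -l app=minio
kubectl get pvc | grep minio-data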
  10. Build and install the new DirectPV branch (devel) and run migrate to generate the YAMLs:
cd $GOPATH/src/github.com/minio/directpv
go build -v ./...
./build.sh
docker build -t quay.io/cniackz4/directpv:thunov172211am .
./kubectl-directpv --kubeconfig /home/ccelis/.kube/config install \
--image directpv:thunov172211am --org cniackz4 --registry quay.io
./kubectl-directpv --kubeconfig /home/ccelis/.kube/config migrate
  11. You should see one YAML file per migrated object:
ccelis@asus:~/go/src/github.com/minio/directpv$ ls volume-*
volume-10.yaml  volume-12.yaml  volume-14.yaml  volume-16.yaml  volume-2.yaml  volume-4.yaml  volume-6.yaml  volume-8.yaml
volume-11.yaml  volume-13.yaml  volume-15.yaml  volume-1.yaml   volume-3.yaml  volume-5.yaml  volume-7.yaml  volume-9.yaml
ccelis@asus:~/go/src/github.com/minio/directpv$ ls drive-*
drive-1.yaml  drive-2.yaml  drive-3.yaml  drive-4.yaml
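To spot-check the migration output, inspect one of the generated files and review its contents before relying on it:

cat drive-1.yaml
cat volume-1.yaml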

Note:

  • If you test against a fresh installation of a new version (such as devel), no legacy objects will be found, so no YAML files will be generated.
  • It is important to follow the process in order: migration applies only to legacy (DirectCSI) users and should not affect users who started on a newer version.