How to add a pool and expand the capacity

Objective:

To add a pool and expand capacity

Diagram:

(diagram image omitted)

Steps:

  1. Create a cluster with 8 worker nodes. In this example the following kind config file is used (the command to create the cluster appears after the config):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 6443
nodes:
  - role: control-plane
    extraPortMappings:
    - containerPort: 30080
      hostPort: 30080
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30081
      hostPort: 30081
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30082
      hostPort: 30082
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30083
      hostPort: 30083
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30084
      hostPort: 30084
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30085
      hostPort: 30085
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30086
      hostPort: 30086
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30087
      hostPort: 30087
      listenAddress: "127.0.0.1"
      protocol: TCP
  - role: worker
    extraPortMappings:
    - containerPort: 30088
      hostPort: 30088
      listenAddress: "127.0.0.1"
      protocol: TCP
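Assuming the config above is saved as kind-config.yaml (the filename is just an example), the cluster can then be created and checked with:

# Create the one control-plane + eight worker cluster defined above
kind create cluster --config kind-config.yaml

# All nine nodes should register and become Ready
kubectl get nodes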
  • Label the nodes: pool=zero for the first 4 worker nodes and pool=one for the rest.
kubectl label nodes kind-worker  pool=zero
kubectl label nodes kind-worker2 pool=zero
kubectl label nodes kind-worker3 pool=zero
kubectl label nodes kind-worker4 pool=zero
kubectl label nodes kind-worker5 pool=one
kubectl label nodes kind-worker6 pool=one
kubectl label nodes kind-worker7 pool=one
kubectl label nodes kind-worker8 pool=one
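To confirm the labels landed on the right nodes, you can show the pool label as a column next to each node:

# The -L flag adds the value of the "pool" label as an extra column
kubectl get nodes -L pool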
  2. Install the MinIO Operator:
k apply -k github.com/minio/operator/resources/\?ref\=v5.0.3
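Before creating the tenant, it is worth checking that the Operator pods are running; with this kustomization the resources normally land in the minio-operator namespace:

# Wait until the Operator pods report Running
kubectl get pods -n minio-operator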
  3. Install a Tenant with a single pool and assign its pool-0 to the pool=zero nodes via a nodeSelector (a fuller sketch of the pool definition follows this snippet):
  pools:
  - name: pool-0
    nodeSelector:
      pool: zero
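For reference, a fuller pool-0 entry might look like the sketch below; servers, volumesPerServer, and the volumeClaimTemplate are assumptions here that simply mirror the values used for pool-1 later in this guide, so adjust them to your tenant.

  pools:
  - name: pool-0
    nodeSelector:
      pool: zero
    servers: 4
    volumesPerServer: 2
    volumeClaimTemplate:
      metadata:
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "2147483648"
        storageClassName: standard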
  4. Expand the tenant by adding a new pool entry (pool-1, pinned to the pool=one nodes) under spec.pools:
k edit tenant -n tenant-lite
  - affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: v1.min.io/tenant
              operator: In
              values:
              - myminio
            - key: v1.min.io/pool
              operator: In
              values:
              - pool-1
          topologyKey: kubernetes.io/hostname
    name: pool-1
    nodeSelector:
      pool: one
    resources: {}
    runtimeClassName: ""
    servers: 4
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "2147483648"
        storageClassName: standard
      status: {}
    volumesPerServer: 2

You should see confirmation that the edit was applied to the tenant:

$ kubectl edit tenant -n tenant-lite
tenant.minio.min.io/myminio edited
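As a quick sanity check (a sketch using jsonpath; the tenant is named myminio, as shown in the output above), you can list the pool names now present in the spec:

# Should print: pool-0 pool-1
kubectl get tenant myminio -n tenant-lite -o jsonpath='{.spec.pools[*].name}'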
  5. Finally, verify that the pods from both pools are running on the expected nodes:
$ k get pods -n tenant-lite -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
myminio-pool-0-0   1/1     Running   0          12h   10.244.7.5    kind-worker    <none>           <none>
myminio-pool-0-1   1/1     Running   0          12h   10.244.5.5    kind-worker3   <none>           <none>
myminio-pool-0-2   1/1     Running   0          12h   10.244.4.10   kind-worker2   <none>           <none>
myminio-pool-0-3   1/1     Running   0          12h   10.244.8.13   kind-worker4   <none>           <none>
myminio-pool-1-0   1/1     Running   0          12h   10.244.3.10   kind-worker8   <none>           <none>
myminio-pool-1-1   1/1     Running   0          12h   10.244.6.15   kind-worker6   <none>           <none>
myminio-pool-1-2   1/1     Running   0          12h   10.244.2.7    kind-worker5   <none>           <none>
myminio-pool-1-3   1/1     Running   0          12h   10.244.1.10   kind-worker7   <none>           <none>
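Optionally, confirm the new pool got its storage: with servers: 4 and volumesPerServer: 2, pool-1 alone should add 8 data PVCs (the total also depends on how pool-0 was sized):

# List the PersistentVolumeClaims backing both pools
kubectl get pvc -n tenant-lite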