Helm Charts Level 2 - q-uest/notes-doc-k8s-docker-jenkins-all-else GitHub Wiki

Testing how a change made directly to a deployment/pod (not through Helm) affects the next Helm release

Background:

In a Kubernetes environment where Helm was used to roll out a release, a user mistakenly added one more container to the pod directly through kubectl. This test shows how Helm deals with that change when rolling out the next release.

  • Create/Install Helm Chart

helm create rel1
helm install rel1 ./rel1
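For reference, the chart scaffolded by `helm create` ships with defaults that match the objects shown below, an nginx image and a ClusterIP service on port 80 (abridged sketch of `rel1/values.yaml`; field layout follows the standard scaffold):

```yaml
# rel1/values.yaml (abridged defaults from the helm create scaffold)
replicaCount: 1
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""          # empty tag falls back to the chart's appVersion (here 1.16.0)
service:
  type: ClusterIP
  port: 80
```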

  • Check the Kubernetes objects:
kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/rel1-5968fcc6dd-nz7m8   1/1     Running   0          45s

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/rel1   ClusterIP   10.109.46.217   <none>        80/TCP    45s

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rel1   1/1     1            1           45s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/rel1-5968fcc6dd   1         1         1       45s

  • Add another container to the rendered deployment.yaml of the release and apply it directly with kubectl, bypassing Helm:
        - name: testhelm
          securityContext:
            {}
          image: "ubuntu:latest"
          imagePullPolicy: IfNotPresent
          command: ["sleep"]
          args: ["2000000"]
kubectl apply -f deployment.yaml
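In context, the `containers:` list of the applied manifest now carries two entries: the chart's own container plus the hand-added one. Note that the file fed to `kubectl apply` must be a rendered manifest, since raw Go-template expressions from the chart would not be valid kubectl input. An abridged sketch, with the `rel1` entry filled in from the scaffold defaults:

```yaml
      containers:
        - name: rel1                 # chart-managed container (from the scaffold)
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
        - name: testhelm             # added by hand, outside Helm
          securityContext:
            {}
          image: "ubuntu:latest"
          imagePullPolicy: IfNotPresent
          command: ["sleep"]
          args: ["2000000"]
```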

  • Check the objects now:
kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/rel1-66b75ffcb4-gnpp4   2/2     Running   0          5m38s

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/rel1   ClusterIP   10.109.46.217   <none>        80/TCP    9m46s

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rel1   1/1     1            1           9m46s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/rel1-5968fcc6dd   0         0         0       9m46s
replicaset.apps/rel1-66b75ffcb4   1         1         1       5m38s

The READY column (2/2) shows the pod now runs two containers. Because the pod template changed, a new ReplicaSet (rel1-66b75ffcb4) was created and the original one scaled down to 0.

Describing the pod (`kubectl describe pod rel1-66b75ffcb4-gnpp4`) shows the added second container:

Containers:
  rel1:
    Container ID:   docker://e0549469e9e5afe5b5468e2443e134bc7d1c21f99312bc5c7adcc68c0854104f
    Image:          nginx:1.16.0
    Image ID:       docker-pullable://nginx@sha256:3e373fd5b8d41baeddc24be311c5c6929425c04cabf893b874ac09b72a798010
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 14 Feb 2022 10:21:24 +0530
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ww26m (ro)
  testhelm:
    Container ID:  docker://a2d04bf3b9a584072a7047a2c33029aa93e8cb541c52bb61b04477b832c7b3df
    Image:         ubuntu:latest
    Image ID:      docker-pullable://ubuntu@sha256:669e010b58baf5beb2836b253c1fd5768333f0d1dbcb834f7c07a4dc93f474be
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
    Args:
      2000000
    State:          Running
      Started:      Mon, 14 Feb 2022 10:21:24 +0530
    Ready:          True
  • Change/Upgrade the Helm release:

Update the service port in values.yaml from 80 to 90 and upgrade the release:

helm upgrade rel1 ./rel1
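The values.yaml edit for the upgrade is a one-line change (assuming the default scaffold layout shown earlier):

```yaml
# rel1/values.yaml (only the service section shown)
service:
  type: ClusterIP
  port: 90   # was 80
```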
  • Check whether the manually added second container (testhelm) is still present, and the overall status:
kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/rel1-66b75ffcb4-gnpp4   2/2     Running   0          20m

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/rel1   ClusterIP   10.109.46.217   <none>        90/TCP    24m

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rel1   1/1     1            1           24m

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/rel1-5968fcc6dd   0         0         0       24m
replicaset.apps/rel1-66b75ffcb4   1         1         1       20m

Note: the ReplicaSet from the previous revision is still listed, scaled down to 0 pods. The pod itself was not recreated (same name, AGE 20m, READY 2/2), since only the Service changed in this upgrade.

The Service port is updated to 90 now.

  • Describe the pod (`kubectl describe pod rel1-66b75ffcb4-gnpp4`) to see whether the directly added second container (testhelm) is still there:
Containers:
  rel1:
    Container ID:   docker://e0549469e9e5afe5b5468e2443e134bc7d1c21f99312bc5c7adcc68c0854104f
    Image:          nginx:1.16.0
    Image ID:       docker-pullable://nginx@sha256:3e373fd5b8d41baeddc24be311c5c6929425c04cabf893b874ac09b72a798010
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 14 Feb 2022 10:21:24 +0530
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ww26m (ro)
  testhelm:
    Container ID:  docker://a2d04bf3b9a584072a7047a2c33029aa93e8cb541c52bb61b04477b832c7b3df
    Image:         ubuntu:latest
    Image ID:      docker-pullable://ubuntu@sha256:669e010b58baf5beb2836b253c1fd5768333f0d1dbcb834f7c07a4dc93f474be
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
    Args:
      2000000
    State:          Running
      Started:      Mon, 14 Feb 2022 10:21:24 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ww26m (ro)
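Result: the hand-added testhelm container survived the Helm upgrade. This is the expected Helm 3 behaviour: on upgrade, Helm computes a three-way strategic merge between the previously rendered manifest, the newly rendered manifest, and the live object, so a container that appears in neither rendered manifest is left untouched. One way to see Helm's view of the release is `helm get manifest rel1`, whose Deployment section contains only the chart-managed container (abridged sketch; fields follow the default `helm create` scaffold):

```yaml
# Abridged Deployment from `helm get manifest rel1`.
# The hand-added "testhelm" container is absent here: Helm never knew
# about it, so the three-way merge on upgrade does not remove it
# from the live pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rel1
spec:
  template:
    spec:
      containers:
        - name: rel1
          image: "nginx:1.16.0"
          ports:
            - name: http
              containerPort: 80
```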
