Q A's

#PV#PVC#PERSISTENT VOLUME#PERSISTENT VOLUME CLAIMS#

QA's related to Storage/PV/PVC's

#secret#

  1. A key's value of a Secret object has just been updated/changed. Can you get this updated value from a Pod which is already using the Secret object?

    If the Secret is consumed as environment variables, no: the Pod needs to be recreated after updating the Secret object. If the Secret is mounted as a volume, the kubelet eventually refreshes the projected files (subPath mounts excepted).

    #pod behaviour#restart pod#

  2. Can you restart a failed Pod in Kubernetes?

    Not directly.

    Delete and recreate it if its YAML file is available.

    If the pod was not created by a ReplicaSet or a Deployment object and its spec is not available, use the method below:

    • Extract the spec (YAML) and pipe it to the replace command to recreate the pod:
    kubectl get pod/sonarqube-ui-test -o yaml | kubectl replace --force -f -
    
  3. What are init containers in Kubernetes?

    https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Init containers are started to complete certain prerequisite conditions before the regular/app containers in a Pod are started.

    One or more init containers run sequentially; each init container must succeed before the next can run. When all of the init containers have run to completion, the kubelet starts the application containers of the pod and runs them as usual. Example use cases of init containers (see the sketch after this list):

    1. Starting the application containers only after confirming that the required database and application services are up.
    2. Cloning the static web-application files from an SCM like GitHub into a shared volume before starting the web-server container that serves them.
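
    A minimal sketch of use case 1, assuming a hypothetical Service name (mydb); the init container blocks until the name resolves in DNS:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
    spec:
      initContainers:
      - name: wait-for-db              # must succeed before the app container starts
        image: busybox
        command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
      containers:
      - name: app
        image: nginx
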
  4. Are existing labels assigned to a Pod modifiable? Yes, they are modifiable with the "--overwrite" option, as in the below example.

    kubectl label po kubia-manual-v2 env=debug --overwrite

  5. What is canary release?

    A canary release is when you deploy a new version of an application next to the stable version, and only let a small fraction of users hit the new version to see how it behaves before rolling it out to all users. This prevents a bad release from being exposed to too many users.

  6. How do you list pods that do not have a particular label, for example env, included?

    kubectl get po -l '!env'

  7. How do you list all the pods categorized by 2 different values of a label?

    kubectl get po -l 'env in (prod,devel)'
    
  8. How do you schedule a pod on specific nodes?

    Nodes can be labeled just like Pods. Add a label to the nodes concerned and specify it in the "nodeSelector" clause while creating the pod, as shown below.
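
    For example (the label key/value gpu=true is an assumption here), label the node first:

    kubectl label node node1 gpu=true

    Then reference the label in the pod spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kubia-gpu
    spec:
      nodeSelector:
        gpu: "true"          # the pod is scheduled only on nodes carrying this label
      containers:
      - name: kubia
        image: luksa/kubia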

  9. How do you schedule a pod to a specific node?

    Each node has a unique label with the key kubernetes.io/hostname and its value set to the actual hostname of the node. But pointing the nodeSelector at a specific node this way may leave the pod unschedulable if that node is offline. Hence, it is recommended to use label selectors that logically group the nodes.

  10. Can annotations be used in place of labels? No. Annotations are similar to labels but are not meant for that; they can hold much larger pieces of information and are primarily meant to be used by tools.

  11. What is the need for namespaces?

    Namespaces are used to split complex systems with numerous components into smaller distinct groups. They can also be used to separate resources in a multi-tenant environment, splitting resources into production, development, and QA environments, or in any other way needed.

  12. What are those different mechanisms that Kubernetes adopt to do liveness probe against a container?

    a) HTTP GET: performs an HTTP GET request on the container's IP address, port and path. If a response is received and its status code does not represent an error, the probe is considered successful.
    b) TCP socket: if a TCP connection to the specified port of the container is established successfully, the probe is successful.
    c) Exec: executes an arbitrary command inside the container and checks the command's exit status code. If it is 0, the probe is successful; otherwise it is considered a failure.

  13. What are the additional properties that can be used for liveness probe?

    The main ones are DELAY, TIMEOUT, PERIOD and FAILURE, as displayed by "kubectl describe", plus initialDelaySeconds. DELAY=0 means probing begins immediately after the container is started; set initialDelaySeconds to delay the first probe, so probing does not start as soon as the container does. TIMEOUT=1 means the container must return a response within 1 second, otherwise the probe counts as a failure. PERIOD=10 means the probe runs every 10 seconds. FAILURE=3 means the container is restarted after 3 consecutive failures.
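
    A sketch mapping those properties to the actual pod-spec fields (the path, port and values are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-demo
    spec:
      containers:
      - name: app
        image: luksa/kubia
        livenessProbe:
          httpGet:                     # the HTTP GET mechanism from the previous question
            path: /healthz
            port: 8080
          initialDelaySeconds: 15      # DELAY: wait 15s before the first probe
          timeoutSeconds: 1            # TIMEOUT: the response must arrive within 1s
          periodSeconds: 10            # PERIOD: probe every 10s
          failureThreshold: 3          # FAILURE: restart after 3 consecutive failures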

  14. How do you obtain the log of a crashed container?

    kubectl logs <POD> --previous

  15. What happens to the existing Pods when you change the label selector such that those Pods fall out of the scope of a replication controller?

    The replication controller stops caring about those pods; they keep running but are no longer managed. The controller then creates and manages new pods that match the changed label selector.

  16. What are the use cases for Daemon set?

    A log collector and a resource monitor on every node, or any other node-level agent (kube-proxy, for example) that must run on each node.

  17. What do you do if you want to run a batch job for every 30 minutes?

    Create a CronJob (kind: CronJob), providing the schedule in the spec using the standard cron syntax familiar from Linux; see the sketch below.
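
    A minimal sketch, with the job name and command being assumptions:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: batch-every-30min
    spec:
      schedule: "*/30 * * * *"         # standard cron syntax: every 30 minutes
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: batch
                image: busybox
                command: ['sh', '-c', 'echo running the batch job']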

  18. Is it possible to make a service to support multiple port numbers?

    Yes, it is. A service can be configured with multiple ports, each forwarding to a port the Pod listens on.

  19. What is the advantage of defining names for ports in a service?

    It enables you to change the port number in a Pod without having to change the service spec.

    Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: kubia
    spec:
      ports:
      - name: http
        port: 80
        targetPort: http
      - name: https
        port: 443
        targetPort: https
    
  20. How do you use your service with FQDN to access your application, instead of using Service's Cluster-IP?

    Use the format <SERVICE_NAME>.<NAMESPACE>.svc.cluster.local; the domain suffixes can be looked up inside a container in /etc/resolv.conf. Example: kubia.default.svc.cluster.local, used in a command as below:

    kubectl exec -it kubiars-ltpmv -- curl  http://kubia.default.svc.cluster.local 
    
  21. Why does a curl command work with the service name and domain suffix combo, while ssh or ping to the service do not?

    Because the service's cluster IP is a virtual IP: it only has meaning when combined with the service port.

  22. What happens when a new Pod/ReplicaSet is created with the same selector that an existing service is supporting? For example, while a service was configured to support a ReplicaSet of 2 replicas with the selector label "myname: kubia", another ReplicaSet of 2 replicas with the same label is created.

    The service goes by the selector label and starts supporting the new ReplicaSet too. It includes the endpoints of both ReplicaSets and eventually has 4 endpoints. Hence, we should exercise caution when assigning selector labels to objects.

  23. How will the front-end pods connect/reach to the backend database pod?

    The backend database pod is exposed by a service which the frontend pods use. The frontend pods obtain the IP address and port of the backend service via environment variables or DNS lookups. The DNS is configured in the /etc/resolv.conf file inside the pod's containers.

  24. How does a service redirect incoming connections to the corresponding pods?

    Although the pod selector is configured in the service spec, it's not used directly when redirecting incoming connections. Instead, the selector is used to build a list of IPs and ports, which is then stored in the endpoints resource. When a client connects to the service, the service proxy selects one of those IP and port pairs and redirects to the corresponding server.

  25. Is it possible to create a service without pod selector?

    Yes, in that case, you will need to create the Endpoints resource and include the list of endpoints the service would need to support.

    A service without pod selector:

    
      apiVersion: v1  
      kind: Service  
      metadata:  
        name: external-service  
      spec:  
        ports:  
        - port: 80 
    

    The Endpoints resource, having the same name as the service, would look as below:

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-service
    subsets:
    - addresses:
      - ip: 11.11.11.11
      - ip: 22.22.22.22
      ports:
      - port: 80
    
    
  26. What are the ways to expose your services, for example your web application, to the external world?

    1. NodePort 2) LoadBalancer 3) Ingress resource.

    A LoadBalancer service is nothing but a NodePort service with a load balancer.

  27. How does the NodePort service work?

    Each node in the cluster opens the same port and redirects traffic arriving on it to the underlying service.

    There is no guarantee that NodePort/LoadBalancer services direct incoming connections to pods running on the same node. Is there a way to prefer pods on the same node, instead of making another hop to pods on different nodes?

    Including "externalTrafficPolicy: Local" in the service definition will do; see the sketch below. When set, the service proxy chooses a locally running pod for an external connection opened through the service's node port. But if there are no local pods, the connection will hang. Therefore, you need to ensure that the load balancer forwards connections only to nodes that have at least one such pod.
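
    A sketch of such a service (the name, ports and selector are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: kubia-nodeport
    spec:
      type: NodePort
      externalTrafficPolicy: Local     # use only pods on the node that received the connection
      selector:
        app: kubia
      ports:
      - port: 80
        targetPort: 8080
        nodePort: 30123                # the port opened on every node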

  28. What are the drawbacks of setting "externalTrafficPolicy: Local"?

    1. Connections are now distributed on a per-node basis rather than per pod. The load is spread evenly across the pods only if every node in the cluster runs an equal number of them.
    2. Also note that, unlike the default "Cluster" policy (where the extra hop SNATs the connection and obscures the client's address), "Local" preserves the client's source IP, which matters for applications that log source IPs; this is actually an advantage rather than a drawback.
  29. What is the use of a readiness probe?

    When a container is started, Kubernetes can be configured to wait for a configurable amount of time to pass before performing the first readiness check. After that, it invokes the probe periodically and acts based on the result. If a pod reports that it is not ready, it is removed from the service; when the pod becomes ready again, it is re-added.

  30. What is the difference between readiness probe and liveness probe?

    If a pod's readiness probe fails, the pod is removed from the Endpoints object so that it does not receive any requests. If a liveness probe fails, the kubelet restarts the unhealthy container.

  31. Why are readiness probes important?

    A readiness probe ensures that clients only talk to healthy pods and never notice any issues with the system.

    Imagine that a group of pods (for example, pods running application servers) depends on a service provided by another pod (a backend database, for example). If at any point one of the frontend pods experiences connectivity issues with the database, it may be wise for its readiness probe to signal to Kubernetes that the pod is not ready to serve requests at that time. If the other pods are not experiencing the same issues, they continue to serve normally.

  32. Are readiness probes recommended to configure always?

    Yes. Even a readiness probe as simple as sending an HTTP request to the application's base URL is recommended.

  33. Where you will need to use headless services?

    1. The client needs to connect to all the pods. 2) Each pod needs to connect to all the other pods.
  34. What is the difference between a regular service and a headless service?

    A regular service returns its cluster IP, so the client eventually connects to a single pod, whereas a DNS lookup for a headless service returns the IPs of all its pods. Configuration-wise, the only addition for a headless service is setting the clusterIP field to None.
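
    A sketch of a headless service (the name and selector are assumptions); the only difference from a regular service is the clusterIP field:

    apiVersion: v1
    kind: Service
    metadata:
      name: kubia-headless
    spec:
      clusterIP: None                  # headless: a DNS lookup returns the pod IPs
      selector:
        app: kubia
      ports:
      - port: 80
        targetPort: 8080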

  35. What volume types are available in kubernetes?

    emptyDir - A simple empty directory used for storing transient data. Created on the node where a pod is created on and gets deleted when the pod is deleted. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container.

    hostPath - used for mounting directories from the worker node's filesystem into the pod.

    NFS - An NFS share mounted into the pod.

    Cloud storage: gcePersistentDisk (Google), awsElasticBlockStore (AWS), azureDisk (Microsoft)

    configMap, secret, downwardAPI: special types of volumes used to expose certain Kubernetes resources and cluster information to the pod.

    persistentVolumeClaim : A way to use pre-provisioned or dynamically provisioned persistent storage.

  36. What are sidecar containers?

    A sidecar container is a container that augments the operation of the main container of the pod.

    Use case:

    A static website whose contents are stored in a GitHub repo: every time the pod starts, it should clone the repo and then start the web server to serve the contents. Instead of having both pieces of logic in a single container, have an init/sidecar container do the cloning into a volume shared with the web-server container, as sketched below.
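
    A sketch of that use case, assuming a hypothetical repo URL; an init container clones into an emptyDir volume shared with the web server:

    apiVersion: v1
    kind: Pod
    metadata:
      name: static-site
    spec:
      volumes:
      - name: html
        emptyDir: {}                   # shared scratch space, lives as long as the pod
      initContainers:
      - name: git-clone                # fills the shared volume before nginx starts
        image: alpine/git
        args: ['clone', 'https://github.com/example/site.git', '/html']
        volumeMounts:
        - name: html
          mountPath: /html
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html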

  37. What options can be set for persistentVolumeReclaimPolicy?

    The available options are Retain, Recycle, and Delete.

    The policy comes into the picture when a persistent volume claim is deleted. If you want to retain the underlying volume of the claim, set it to "Retain"; to scrub it and make it available for a different claim, set it to "Recycle" (now deprecated); and if you want the volume itself deleted, set it to "Delete".

    With the "Retain" policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.

  38. What access modes are available for a persistent volume?

    RWO - ReadWriteOnce - only a single node can mount the volume for reading and writing. ROX - ReadOnlyMany - multiple nodes can mount the volume for reading. RWX - ReadWriteMany - multiple nodes can mount the volume for both reading and writing.


    minikube start --extra-config=apiserver.cloud-provider=aws --extra-config=controller-manager.cloud-provider=aws

    Docker

  39. What is the difference between the below Dockerfile commands?

    • ENTRYPOINT node app.js
    • ENTRYPOINT ["node", "app.js"]

    The first one runs the given command inside a shell, while the second one runs the executable directly. The first way of running the command is called the shell form and the second one the exec form.

    The extra shell process is unnecessary, and with the shell form the application does not run as PID 1 and so does not receive termination signals directly; hence the exec form is recommended.

  40. In Kubernetes, how do you override the given ENTRYPOINT & CMD of the container/image?

    Set the properties command and args in the container specification to override ENTRYPOINT and CMD respectively.

    The command and args fields cannot be updated after the pod is created.
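
    For example (the image and values are assumptions), overriding both:

    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep"]             # overrides the image's ENTRYPOINT
        args: ["3600"]                 # overrides the image's CMD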

  41. What is the use of configMaps?

    It decouples the configuration information from the pod definition, so that the same pod definition can be shared between different environments, with an environment-specific ConfigMap defined for each environment involved.

  42. What will happen if the referenced configMap in the pod does not exist while creating it?

    The container referencing the non-existing configMap will fail to start, but the other containers, if any, will start normally. If you then create the missing configMap, the failed container is started without requiring you to recreate the pod.

  43. What happens if a given key in the configMap is invalid (for example, "CONFIG_FOO-BAR": having "-" as part of an environment-variable name is not accepted as valid)?

    It skips the key but records an event noting that it skipped it.

  44. How do you ensure that Kubernetes pulls only the latest updated image for pods?

    1. Set the "imagePullPolicy" property to "Always". 2) Change the tag every time the image is updated. 3) Tag the image "latest" (which makes the pull policy default to Always).
  45. What strategies are available to follow for Deployments?

    1. Rolling update 2) Recreate
  46. What's the use case for Recreate Strategy?

    When your application does not support running multiple versions in parallel and requires the old version to be stopped completely before the new one is started.

  47. What's the use case for RollingUpdate strategy for deployments?

    Use it to ensure the availability and maintain the same performance level of the application, provided your application supports both the old and the new versions running in parallel.

  48. Is it possible to update the existing image in a pod?

    Yes, use the below command to modify a container of any resource type, such as a replica set, a deployment and so on:

    kubectl set image <RESOURCE_TYPE>/<RESOURCE_NAME> <container_name>=<image>:<new_tag>

    Example:

    kubectl set image deployment kubia nodejs=luksa/kubia:v2 
    

    If it is a Deployment object, it triggers a rollout, and the existing pods running the older version are replaced with new ones one by one.

  49. Is there a way to control the speed of rollouts?

    Set "minReadySeconds" to the Deployment object. It is used to slow down the rollouts so that you can prevent deploying malfunctioning versions by pausing or blocking the progression of rollouts.

  50. What kinds of applications can a ReplicaSet not support?

    All replicas of a pod in a ReplicaSet share the same volumes. A ReplicaSet therefore cannot support an application whose pods each need their own volumes (and a stable identity). For such requirements, StatefulSets are used.

  51. Why does StatefulSets scale down only one pod at a time?

    Certain stateful applications do not handle rapid scale-down nicely. For example, if a distributed data store is configured to keep 2 copies of each piece of data and both of those nodes go down at the same time, any newly written data that has not yet reached both replicas is lost.

  52. Why do StatefulSets not allow scale-down operations while any of the existing instances is unhealthy? For the same reason as above.

  53. What will the scheduler check while scheduling a pod which has resource requests?

    It decides whether to schedule the pod on a particular node based on the sum of the resources requested by all the pods already deployed on that node, not on the current actual resource usage.

  54. What will be the values of "resource requests", when you set only "resource limits" for containers in a pod?

    The resource requests are set to the same values as the given resource limits, as in the example below.
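
    For example (values assumed), specifying only the limits block below results in identical requests being set implicitly:

    resources:
      requests:
        cpu: 200m                      # used by the scheduler when placing the pod
        memory: 128Mi
      limits:
        cpu: 200m                      # hard cap enforced at runtime
        memory: 128Mi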

  55. What happens when a process running in a container/pod exceeds the memory limit set?

    The container is OOM-killed, and the kubelet restarts it if the restart policy is set to Always or OnFailure. It restarts it every time the condition recurs; after a few such restarts, Kubernetes restarts it with increasing delays between the restarts.

    ++stateful set++stateful pod++stateful pod not moving to another node++ failover++stateful pod failover++ Node failure++

  56. Why does a StatefulSet pod not shift/fail over to another node when the node it is running on fails?

    This is by design. When a node goes "down", the master does not know whether it was a safe down (deliberate shutdown) or a network partition.
    Thus the PVC provisioned on that node remains there, and the master marks the pods on that node as Unknown.

    By default, Kubernetes always tries to create the pod on the same node where its PVC is provisioned, which is why the pod always comes up on the same node when deleted.

    The PVC goes onto another node only when you cordon the node, drain it, and delete it from the cluster. Now the master knows this node no longer exists in the cluster, so it moves the PVC to another node and the pod comes up there.

            ++deleted node++deleted node back++getting deleted node back++ 
    
  57. Is it possible to get the deleted node back?

    Yes. As long as the kubelet on the node keeps running, the node re-registers itself with the API server, and “kubectl uncordon <node_name>” makes it schedulable again.

         ##important##must-know##statefulset failover++ 
    
  58. Why are StatefulSet Pods not failing over to another node and waiting indefinitely?

    Excerpts from - https://mgarod.medium.com/the-curious-case-of-failing-over-in-kubernetes-fcd16bc9a94d

    After a node becomes Unknown or NotReady and pods are ready to be evicted, the statefulset pods too will become Unknown … and they will stay that way “forever” (i.e. until the node is able to re-establish communication with the master). For example, when datastore-statefulset-1 enters an Unknown state, the master will not spin up a replacement as it would for a deployment. Remember that the pod and node cannot communicate with the master, and statefulsets have a guarantee that there will be at most one pod per index. The Kubernetes master cannot verify the existence of datastore-statefulset-1, therefore it does not have enough information to be certain that spinning up a new pod would not violate the “at most one” guarantee. This is not a bug; it is a feature that the designers of Kubernetes built to protect the integrity of state and associated stateful resources like hard disks.

    If you are certain that your stateful resources will not be affected by a statefulset pod being replaced like a deployment pod, there is a modification you can make to have it act as such. Simply set the terminationGracePeriodSeconds of the statefulset pod to 0. With this change, the Kubernetes master is ensured that the statefulset pod will be forcefully killed when the connection is re-established, and therefore it need not await any kind of status check regarding the pod. Now if the statefulset pod is rescheduled while the node is partitioned and the previous pod is in the Unknown state, the master would not be violating the “at most one” guarantee. But remember that those partitioned pods may still be running! From the official documentation regarding nodes:

    In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the apiserver is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.

  59. Where does Kubernetes keep track of everything relating to Pods?

    The kubelet keeps everything related to a Pod under the below directories/files on the host where the Pod is running:

    main path: /var/lib/kubelet/pods/<pod_id>

    drwxr-x--- 5 root root 4096 Apr 8 06:04 containers

    -rw-r--r-- 1 root root 250 Apr 8 06:04 etc-hosts

    drwxr-x--- 3 root root 4096 Apr 8 03:14 plugins

    drwxr-x--- 3 root root 4096 Apr 8 06:04 volume-subpaths

    drwxr-x--- 7 root root 4096 Apr 8 03:14 volumes

  60. How do you see the contents of emptyDir volume of a Pod on the node?

    -- Get PodID

    kubectl get pods -n <namespace> <pod-name> -o jsonpath='{.metadata.uid}'
    

    e.g:

    kubectl get pod jenkins-0 -o jsonpath='{.metadata.uid}'

    -- The path where ALL of a Pod's volumes (Well, except "hostPath" volume type) are mounted on to the host:

    Path: /var/lib/kubelet/pods/<pod_id>/volumes
    
        kubernetes.io~configmap
        kubernetes.io~empty-dir
        kubernetes.io~nfs
        kubernetes.io~projected
        kubernetes.io~secret
    
  61. What will happen to those running pods when you drain node after cordoning it?

    -- When you try to drain a node, you will get a warning like the below,

     kubectl drain k8s-node2
    
     node/k8s-node2 already cordoned
     error: unable to drain node "k8s-node2", aborting command...
    
     There are pending nodes to be drained:
      k8s-node2
     cannot delete Pods with local storage (use --delete-emptydir-data to override): jenkins/jenkins-0, 
    
     jenkins/sonarqube-postgresql-0, jenkins/sonarqube-sonarqube-b5fc958c-bpd4k
     cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-2mtlk, kube-system/kube-proxy-w2m72  
    
    

    -- If you still want to proceed despite the above warning:

    kubectl drain k8s-node2  --delete-emptydir-data --ignore-daemonsets
    

    The node is drained and pods are switched over to a different node now.

  62. What is the difference between executing "cordon" & "drain" on a node?

    "cordon": Prevents new pods getting scheduled on to the node.

    "drain": Evicts ALL those running Pods & creates them back (except the standalone pods & one created by Daemon sets) on other available nodes.

  63. How do you put a node back into service which was cordoned and drained before?

    The current status of a node, post cordoning and draining:

     NAME         STATUS                     ROLES                  AGE     VERSION
     k8s-node2    Ready,SchedulingDisabled   <none>                 7d15h   v1.23.5
    
    kubectl uncordon k8s-node2
    

    The current status of the node:

    kubectl get node k8s-node2
    
    NAME        STATUS   ROLES    AGE     VERSION
    k8s-node2   Ready    <none>   7d16h   v1.23.5
    
  64. When will you have to drain & evict pods on a node?

    Before shutting down a node for maintenance or for purposes such as an upgrade, it is necessary to safely evict the Pods running on the node. The ‘kubectl drain’ command comes in handy in this situation.

    Check for more info on cordon/drain: https://networkandcode.wordpress.com/2019/10/10/kubernetes-nodes-drain-or-cordon-nodes/

  65. The scenario is such that you do not have the manifest file of a deployment available (say, a restart of a mongodb pod/deployment is required) so you can not delete the existing and re-create the deployment. How do you handle it?

    kubectl rollout restart deployment/mongo
    

    This command performs a rolling restart: new pods are created and the old ones are terminated, while the Deployment object itself stays in place.

    kubectl rollout status deployment/mongo
    Waiting for deployment "mongo" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "mongo" rollout to finish: 1 old replicas are pending termination...
    deployment "mongo" successfully rolled out
    
  66. Can you specify your own IP address to a service?

    You can specify your own cluster IP address as part of a Service creation request. To do this, set the .spec.clusterIP field. This is useful, for example, if you already have an existing DNS entry that you wish to reuse, or legacy systems that are configured for a specific IP address and are difficult to re-configure.

    The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range that is configured for the API server. If you try to create a Service with an invalid clusterIP address value, the API server will return a 422 HTTP status code to indicate that there's a problem.

  67. What will happen if you try to create a PV pointed to a non-existing path (in hostPath)?

    It does NOT throw any errors; the PV gets created (the path is not validated at creation time). The error is thrown only when a pod tries to use the PV/PVC, for example when the path cannot be created, like the below:

    "Error: failed to generate container "1285929e2fa2f3f99054364fd2961d95aac35c66ce01433c6fed1e6be8302920" spec: failed to generate spec: failed to mkdir "/data/mongo": mkdir /data: read-only file system"

  68. What does the Pod's status "Completed" say?

    It means the container(s) in the pod have run to completion successfully and there are no more processes to run.

    e.g.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox-deployment
      labels:
        app: busybox
    spec:
      replicas: 1
      strategy:
        type: RollingUpdate
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: busybox
            imagePullPolicy: IfNotPresent
            command: ['sh', '-c', 'echo Container 1 is Running ; sleep 10']
    
    

    The above Pod shows as "Completed" once the busybox container in it has executed the given commands (echo & sleep).

    NAME                                      READY   STATUS      RESTARTS   AGE
    busybox-deployment-757bdd75f5-mv4qh       0/1     Completed   0          13s
    
  69. Would it be possible to update a Pod's spec without re-creating it?

    What is the Patch command used for? And, how?

    Use kubectl patch to update an API object in place.

    "Patch" command can be used to update an existing Pod's spec or update an existing service from ClusterIP to LoadBalancer , for example.

    Note that these could be accomplished by editing the Pod/Service also.

    ** Update Service Type:**

    kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
    

    ** Add another container to an existing Pod **

    Deployment.yml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: patch-demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: patch-demo-ctr
            image: nginx
    

    ** patch-file.yaml:**

    spec:
      template:
        spec:
          containers:
          - name: patch-demo-ctr-2
            image: redis
    

    Patch the deployment:

    kubectl patch deployment patch-demo --patch-file patch-file.yaml
    

    Pods before patch (only a single container shows up under the READY column):

    NAME                        READY     STATUS    RESTARTS   AGE
    patch-demo-28633765-670qr   1/1       Running   0          23s
    patch-demo-28633765-j5qs3   1/1       Running   0          23s
    

    Pods after patch (2 containers show up under the READY column now):

    NAME                          READY     STATUS    RESTARTS   AGE
    patch-demo-1081991389-2wrn5   2/2       Running   0          1m
    patch-demo-1081991389-jmg7b   2/2       Running   0          1m
    
  70. How do you debug/fix an issue with a Pod which is getting restarted by the Liveness probe configured with it?

    For debugging, increase the liveness probe's initialDelaySeconds or remove the probe temporarily, and find the problem by going through the pod logs.

  71. How to arrive at what values to set for Resource Limits/Requests?

    The right values depend on the application type, the load it handles, the heap memory it uses, and so on. Observe how the container's memory/CPU usage fluctuates under load testing and performance analysis, and set the requests and limits based on those observations.

  72. What is the difference between Liveness probe & Readiness Probe in terms of their responsive/corrective action when they get failure response from the configured probe?

    On failure, the readiness probe takes the Pod out of the service, while the liveness probe tries to fix the issue by restarting the container.

  73. When do you get to see "CrashLoopBackOff" as a status of a Pod?

    When a container in a Pod (for example, one running a plain Alpine image with no daemon/command keeping it alive) is restarted multiple times and still keeps stopping/failing, the status of the pod is set to "CrashLoopBackOff", and the restarts are retried with increasing back-off delays.

  74. What does the Pod status "ImagePullBackOff" say?

    It means Kubernetes is not able to pull the image; the issue lies with the given image reference (name, tag, or registry access).

  75. What order do you follow while creating objects for a deployment?

    1. ConfigMap 2) Secrets 3) Services 4) Deployment
  76. A Pod is deriving a variable's value from a configMap object. When the variable's value is changed in the configMap, will it affect the value in the running pod?

    No, it does not change the value of an environment variable referred to in the Pod. (Files from a configMap mounted as a volume are eventually refreshed, though.)

  77. Is having labels mandatory for all objects? No, it is not mandatory, but specifying them is good practice.

  78. Are ServiceAccounts namespaced object?

    Service accounts are namespaced resources, meaning they can only be used within one Kubernetes namespace, and the names of service accounts must be unique within a namespace but not across different namespaces.

  79. When will you need to create ServiceAccounts?

    When you want to assign a Pod additional permissions to perform operations it cannot do with the "default" ServiceAccount, create a new ServiceAccount with the required permissions granted via roles (Roles or/and ClusterRoles, whatever is required) and assign it to the Pod, as sketched below.
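
    A sketch (all names and the namespace are assumptions) of a ServiceAccount granted read access to pods via a Role and a RoleBinding:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: pod-reader-sa
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: default
    subjects:
    - kind: ServiceAccount
      name: pod-reader-sa
      namespace: default
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

    The Pod then references it with "serviceAccountName: pod-reader-sa" in its spec.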

  80. Why or where do you need to use ResourceQuota?

    When a cluster is shared between different teams, e.g. an application team and a database team, there is a need for resource quotas so that one team cannot over-use the resources and deprive the other team of them. The ResourceQuota object sets aggregate resource requests/limits at the level of the namespace each team uses; see the sketch below.
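
    A sketch (the namespace and values are assumptions):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-quota
      namespace: app-team
    spec:
      hard:
        requests.cpu: "4"              # sum of CPU requests of all pods in the namespace
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi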

  81. A LimitRange with minimum.cpu of "200m" & maximum.cpu="500m" is enforced (along with memory) in a namespace. What will happen if you try to create a Pod whose requests.cpu is "100m"?

    This pod requests a CPU value lower than the minimum set in the enforced LimitRange. The purpose of the LimitRange here is to make sure every pod requests at least 200m of CPU (and, per the memory constraint, at least 250Mi of memory). Here we get the below error:

    Error from server (Forbidden): error when creating "new1.yaml": pods "nginx" is forbidden: [minimum memory usage per Container is 250Mi, but request is 100Mi, minimum cpu usage per Container is 200m, but request is 100m]

  82. What is the use of KUBE-PUBLIC namespace?

    The kube-public namespace is readable by all clients, including unauthenticated ones; it is mostly reserved for cluster usage, such as the cluster-info ConfigMap used during cluster bootstrap. It does not allow anybody to create objects without authorization.

    ++jenkins++jenkins github++github jenkins++

  83. How does Jenkins handle the GIT repo in its workspace?

    Jenkins finds the latest commitID of the branch it is being executed against by using,

    git rev-parse origin/main^{commit}
    

    In the above, it is getting the commitID of "main" branch.

    Next, it checks out to that CommitID it found in the previous step,

    git checkout -f <CommitID>
    
  84. Is a liveness probe needed? When should you use one?

    A liveness probe is not necessary if the application running in a container is built to crash the container itself when a problem or error occurs. In that case the kubelet takes the appropriate action anyway: it restarts the container based on the pod's restartPolicy.

  85. What Deployment Strategy that Helm will follow while upgrading a release? Rolling Update or Recreate?

    It uses the strategy defined in the deployment manifest file. Technically the update strategy defined in the deployment manifest is applied every time the PodSpec changes, no matter whether it changes through helm or kubectl or something else. And only if the PodSpec changes.

  86. What will happen to existing files at the path where a ConfigMap is mounted?

    When a ConfigMap is mounted as a volume onto a given mount path of the pod, any existing files in that path are hidden by the mount. This is an issue faced in real-world use of ConfigMaps this way, and the corrective action is to use "subPath".

  87. What is the necessity of using "subPath" while mounting a ConfigMap volume?

    Mounting with "subPath" projects a single ConfigMap key as one file into the existing directory instead of shadowing the whole directory, so the other files in the path stay intact; see the sketch below. (Note that subPath-mounted files are not refreshed when the ConfigMap changes.)
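
    A sketch of a subPath mount (the names are assumptions); only the single file is projected, leaving the directory's other files intact:

    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d/app.conf   # a single file, not the whole directory
          subPath: app.conf                        # the key inside the ConfigMap
      volumes:
      - name: config
        configMap:
          name: nginx-config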

  88. Is it possible to configure (the Source Code Management part of) Jenkins to check and build multiple branches if any changes are detected in them? And how does Jenkins handle it?

    Yes, it is possible. Specify all the branches that need to be built under the "Branch Specifier" in a FreeStyle Jenkins job. When the job gets triggered (manually or by other means) and if it finds changes with multiple branches, it will create multiple builds as required, one for each branch. For example, if it detects changes in 2 different branches, there will be 2 separate builds created.

    There will be a detached/headless checkout created every time in the Jenkins workspace for each build of the job, containing the latest changes. In this scenario, if the same job is triggered again while there are no new changes in the repo (after the changes to the 2 branches in the said example), the headless checkout remains unchanged and stays the same as in the last build (whichever of the 2 was triggered last).

  89. How does Jenkins handle the GIT repositories at its workspace?

    In order to determine whether Jenkins needs to update/build a branch which is specified in a job, it needs to get the current CommitID of the branch and compare it with the CommitID of the previous build. If there is any change, it will have to sync it with the repository and build it. In case of no change, it will still be pointing to the same branch as the previous build and you will find no difference with the current build.

    i) Jenkins gets the latest commitID of a branch it is executing the job against by using,
    
    git rev-parse origin/main^{commit}
    

    The above command can be found in the Jenkins job's log only when a specific branch name is given for "Branches to build"; in this case it was "main". When multiple branches are specified for this parameter, the command does not appear in the log.

    ii) It gets the CommitID of the previous build from "build.xml" at $JENKINS_HOME/jobs/<JOB_NAME>/builds/<LAST-BUILD#> (the <JOB_NAME> path element is an assumption for the elided part).

    iii) Next, it checks out the latest CommitID found in the previous steps,

    git checkout -f <CommitID>
    

    If you execute “git branch -a” from the workspace of the job, you will see a detached HEAD pointing to the CommitID found before:

    >>  git branch -a
    * (HEAD detached at c32cc89)
    remotes/origin/develop
    remotes/origin/gh-pages
    remotes/origin/main
    remotes/origin/wavefront
    
  90. What will happen when you build a job the first time whose "Branches to build" is configured with multiple branches?

    It creates a build for each branch, as there is no previous build against which to check whether those branches were updated. From then on, it builds only the updated branches whenever the job is triggered.

  91. Where/when would you use "Rebase" instead of "Merge"?

    Say you have your own feature branch where you have been updating code. In the meantime, your team members have added their changes to, say, the "develop" branch. If you want to preserve the commit history of "develop" on your feature branch too, go for "rebase": your changes are re-applied on top of the changes in "develop", so your branch now shows develop's history first and then your commits.

    If you "merge" instead, the changes are kept in the order they took place on the different branches, so the commit histories of the branches get interleaved.
