kubernetes pods - ghdrako/doc_snipets GitHub Wiki

  • Labels let you group Pods and associate them with other objects in powerful ways.
  • Annotations let you add experimental features and integrations with 3rd-party tools and services.
  • Probes let you test the health and status of Pods and the apps they run. This enables advanced scheduling, updates, and more.
  • Affinity and anti-affinity rules give you control over where in a cluster Pods are allowed to run.
  • Termination control lets you gracefully terminate Pods and the applications they run.
  • Security policies let you enforce security features.
  • Resource requests and limits let you specify minimum and maximum values for things like CPU, memory, and disk I/O.
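As a sketch of how several of these features appear in practice, the following hypothetical Pod manifest combines labels, an annotation, a liveness probe, and resource requests and limits. The names, image, and annotation key are illustrative, not from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                      # illustrative name
  labels:
    app: web                     # labels group Pods and couple them to other objects
  annotations:
    example.com/owner: "team-a"  # annotations carry metadata for 3rd-party tools
spec:
  containers:
  - name: web-ctr
    image: nginx:1.25            # illustrative image
    livenessProbe:               # the kubelet restarts the container if this probe fails
      httpGet:
        path: /
        port: 80
    resources:
      requests:                  # minimum resources the scheduler must find on a node
        cpu: 100m
        memory: 128Mi
      limits:                    # maximum resources the container may consume
        cpu: 250m
        memory: 256Mi
```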

Pods assist in scheduling

Every container in a Pod is guaranteed to be scheduled to the same worker node. Labels, affinity and anti-affinity rules, as well as resource requests and limits give you fine-grained control over which worker nodes Pods can run on.

Pods enable resource sharing

Pods provide a shared execution environment for one or more containers. This shared execution environment includes things such as:

  • Shared filesystem
  • Shared network stack (IP address, routing table, ports…)
  • Shared memory
  • Shared volumes

This means that if a Pod has two containers, both will share the Pod’s IP address and can access any of the Pod’s volumes to share data.

Static Pods vs controllers

There are two ways to deploy Pods:

  1. Directly via a Pod manifest
  2. Indirectly via a controller

Static Pods are defined in manifest files in a directory watched by the worker node’s kubelet (for example /etc/kubernetes/manifests) and are only monitored and managed by that kubelet, which is limited to attempting restarts on the local worker node. If the worker node they’re running on fails, there’s no control-plane process watching them and capable of starting a replacement on a different node. The same limitation applies to Pods posted directly to the API server without a controller.

Pods deployed via controllers have all the benefits of being monitored and managed by a highly-available controller running on the control-plane. The local kubelet can still attempt local restarts, but if restart attempts fail, or the node itself fails, the observing controller can start a replacement Pod on a different worker node.
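To illustrate the difference, the following sketch wraps an ordinary Pod spec in a Deployment (the names and image are illustrative). The controller, not the local kubelet, is responsible for keeping the desired number of replicas running, including replacing Pods from failed nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3            # the controller keeps 3 Pods running, rescheduling on node failure
  selector:
    matchLabels:
      app: web           # must match the Pod template's labels
  template:              # an ordinary Pod spec embedded in the controller
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-ctr
        image: nginx:1.25
```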

Deploying Pods

The process of deploying a Pod to Kubernetes is as follows:

  1. Define it in a YAML manifest file
  2. Post the YAML to the API server
  3. The API server authenticates and authorizes the request
  4. The configuration (YAML) is validated
  5. The scheduler assigns the Pod to a healthy worker node with enough available resources
  6. The local kubelet monitors it

If the Pod is deployed via a controller, the configuration will be added to the cluster store as part of overall desired state and a controller will monitor it.

The anatomy of a Pod

A Pod is an execution environment shared by one or more containers:

  • net namespace: IP address, port range, routing table…
  • pid namespace: isolated process tree
  • mnt namespace: filesystems and volumes…
  • UTS namespace: Hostname
  • IPC namespace: Unix domain sockets and shared memory

Pods and shared networking

Every Pod has its own network namespace. This means every Pod has its own IP address, its own range of TCP and UDP ports, and its own routing table. If it’s a single-container Pod, the container has full access to the IP, port range, and routing table. If it’s a multi-container Pod, all containers share the IP, port range, and routing table.

Container-to-container communication within the same Pod happens via the Pod’s localhost adapter and a port number.
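A minimal sketch of localhost communication between two containers in the same Pod. The image names are illustrative; the helper container simply polls the web container over the shared network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net
spec:
  containers:
  - name: web
    image: nginx:1.25          # listens on port 80 inside the shared namespace
    ports:
    - containerPort: 80
  - name: helper
    image: curlimages/curl     # illustrative; any image with curl and a shell
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/ > /dev/null; sleep 5; done"]
```

Because the network namespace is shared, the two containers must not try to bind the same port.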

The pod network

Every Pod gets its own unique IP address that’s fully routable on an internal Kubernetes network called the pod network. This is typically implemented as a flat overlay network that allows every Pod to talk directly to every other Pod, even if the worker nodes are on different underlay networks.

In a default out-of-the-box cluster, the pod network is wide open from a security perspective. You should use Kubernetes Network Policies to lock down access.

Atomic deployment of Pods

Pod deployment is an atomic operation. This means it’s all-or-nothing – deployment either succeeds or it fails. You’ll never have a scenario where a partially deployed Pod is servicing requests. Only after all a Pod’s containers and resources are running and ready will it start servicing requests.

Pod lifecycle

  1. Define it in a declarative YAML object.
  2. This gets posted to the API server and the Pod enters the pending phase.
  3. It’s then scheduled to a healthy worker node with enough resources and the local kubelet instructs the container runtime to pull all required images and start all containers.
  4. Once all containers are pulled and running, the Pod enters the running phase.
  5. If it’s a short-lived Pod, as soon as all containers terminate successfully the Pod enters the succeeded phase.
  6. If it’s a long-lived Pod, it remains indefinitely in the running phase.

Short-lived and long-lived Pods

Pods can run all different types of applications. Some, such as web servers, are intended to be long-lived and should remain in the running phase indefinitely. If any containers in a long-lived Pod fail, the local kubelet may attempt to restart them. Whether it does is governed by the container’s restart policy, which is defined in the Pod config. Options are Always, OnFailure, and Never. Always is the default restart policy and appropriate for most long-lived Pods.

Other workload types, such as batch jobs, are designed to be short-lived and only run until a task completes. Once all containers in a short-lived Pod terminate successfully, the Pod terminates and its phase is set to succeeded. Appropriate container restart policies for short-lived Pods are usually Never or OnFailure.

Kubernetes has several controllers for different types of long-lived and short-lived workloads.

  • Deployments, StatefulSets, and DaemonSets are examples of controllers designed for long-lived Pods.
  • Jobs and CronJobs are examples designed for short-lived Pods.
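As a sketch, a Job’s Pod template sets a restart policy appropriate for short-lived work. The workload here, computing digits of pi, is just an illustration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job
spec:
  template:
    spec:
      restartPolicy: OnFailure   # a Job's Pod template only accepts Never or OnFailure
      containers:
      - name: pi
        image: perl:5.36
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
```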

Pod immutability

Pods are immutable objects. This means you can’t modify them after they’re deployed.

Multi-container Pods

Kubernetes offers several well-defined multi-container Pod patterns:

  • Sidecar pattern
  • Adapter pattern
  • Ambassador pattern
  • Init pattern
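For example, the init pattern uses init containers, which must run to completion before the app containers start. A hypothetical sketch that blocks startup until a service name resolves (db-service is an assumed name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:              # init pattern: runs to completion before app containers start
  - name: wait-for-db
    image: busybox:1.36
    command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"]
  containers:
  - name: app
    image: nginx:1.25          # only starts after wait-for-db succeeds
```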

Pod manifest files

Straight away you can see four top-level fields:

  • kind
  • apiVersion
  • metadata
  • spec

  • The .kind field tells Kubernetes the type of object being defined.

  • The .apiVersion field tells Kubernetes the schema version to use when creating the resource. The normal format for apiVersion is <api-group>/<version>. Pods are in the core API group, which omits the API group name, so we describe them in YAML files as just v1. StorageClass objects are defined in the v1 schema of the storage.k8s.io API group and are described in YAML files as storage.k8s.io/v1.

  • .metadata section is where you attach things such as names, labels, annotations,and a Namespace. Names help you identify the object in the cluster, and labels let you create loose couplings with other objects. Annotations can help integrate with 3rd-party tools and services.

  • The .spec section is where you define the Pod’s containers. In a multi-container Pod, you define the additional containers in the .spec section as well.
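Putting the four top-level fields together, a minimal hypothetical single-container Pod manifest might look like this:

```yaml
apiVersion: v1            # core API group: no group prefix, just the version
kind: Pod                 # the type of object being defined
metadata:
  name: hello-pod         # illustrative name
  labels:
    app: hello
spec:
  containers:             # container definitions live under .spec
  - name: hello-ctr
    image: nginx:1.25     # illustrative image
```

You could post this to the API server with kubectl apply -f pod.yml.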
