Kubernetes

Cluster

# View cluster info
kubectl cluster-info

# View nodes in a cluster
kubectl get nodes -o wide

Context

# View current context
kubectl config current-context

# Use a context
kubectl config use-context <context>

A Context comprises (see the example after this list):

  • Cluster
  • Namespace
  • User
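A context bundling all three can be created or updated with kubectl config set-context; the names below are placeholders:

kubectl config set-context dev-context \
--cluster=dev-cluster \
--namespace=dev \
--user=dev-user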

Pod management

# List pods
kubectl get pods

# List pods not in Completed status
kubectl get pods --field-selector=status.phase!=Succeeded

# List pods in Completed status
kubectl get pods --field-selector=status.phase==Succeeded

# List node name for a pod
kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName

# View Logs
kubectl logs --follow <pod-name> -c <container-name>

# List environment variables in a pod with one container
kubectl exec <pod-name> -- env

# Start a bash session inside a single-container pod
kubectl exec -ti <pod-name> -- bash

# Launch a shell inside a pod
kubectl exec <pod-name> -c <container-name> -ti -- <command>
kubectl exec <pod-name> -c <container-name> -ti -- sh

# Force delete a pod
kubectl delete pods <pod_name> --grace-period=0 --force -n <namespace>

# List all container images in a pod
# https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
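Following the linked page, a per-pod variant and a cluster-wide variant could look like this (the pod name is a placeholder):

kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].image}'

# Cluster-wide, with a count per image
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
| tr -s '[[:space:]]' '\n' | sort | uniq -c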

# Forward a local port to a pod port
kubectl port-forward <pod-name> <local-port>:<pod-port>

Scale

# Scale a Deployment
kubectl scale deployment <deployment-name> --replicas <number>

# Scale a ReplicaSet
kubectl scale rs <rs-name> --replicas <number>

Rolling restart of a Deployment or DaemonSet

kubectl rollout restart deployment <deployment-name>
k -n <ns-name> rollout restart ds <ds-name>
k -n <ns-name> rollout status ds <ds-name>

Resource management

# Create resources described in a manifest file/folder
kubectl apply -f <folder> --recursive

# Delete resources described in a manifest file/folder
kubectl delete -f <folder> --recursive

# Export a Deployment as YAML (--export was removed in recent kubectl versions)
kubectl get deployment <deployment_name> -o yaml

# Delete all resources in a namespace
kubectl delete all --all -n <namespace>

Role

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <namespace>
  name: <role-name>
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
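A Role grants nothing until it is bound to a subject. A minimal RoleBinding sketch, assuming the Role above and a placeholder service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: <namespace>
  name: <role-name>-binding
subjects:
- kind: ServiceAccount
  name: <service-account-name>   # placeholder subject
  namespace: <namespace>
roleRef:
  kind: Role
  name: <role-name>
  apiGroup: rbac.authorization.k8s.io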

Create resources imperatively

# Create a Deployment
kubectl create deployment <deployment-name> \
--image=<image-name> \
--replicas=3

# Create a Pod
# (the --generator and --replicas flags of "kubectl run" were removed in recent
# kubectl versions; current versions create a single Pod)
kubectl run <pod-name> \
--image=<image-name> \
--labels app=frontend \
--expose \
--port=<port-number>

# Create a ReplicationController with the legacy run/v1 generator
# (removed in recent kubectl versions; prefer a Deployment, which manages a ReplicaSet)
kubectl run <rs-name> \
--generator=run/v1 \
--image=<image-name> \
--replicas=3 \
--labels app=frontend \
--expose \
--port=<port-number>

# Create a Service
kubectl expose pod \
<pod-name> \
--name <service-name> \
--port <container-port> \
--dry-run=client \
-o yaml

kubectl expose pod \
redis \
--name redis-service \
--port=6379 \
--dry-run=client \
-o yaml

# service type: clusterip, nodeport, loadbalancer or externalname
kubectl create service \
<service-type> \
<service-name> \
--tcp <service-port>:<container-port> \
--dry-run=client \
-o yaml

kubectl create service \
clusterip \
my-clusterip-service \
--tcp=6379:6379 \
--dry-run=client \
-o yaml

Rollout

# Rollout history
kubectl rollout history \
deployment <deployment-name>

# Rollout status
kubectl rollout status \
deployment <deployment-name>

# Rollout undo
kubectl rollout undo \
deployment <deployment-name>

Set image

kubectl set image \
deployment <deployment-name> \
<container-name>=<image-name>

Configmap

kubectl create configmap <configmap-name> --from-env-file <path-to-env-file>

kubectl create configmap <configmap-name> --from-env-file <path-to-env-file> -o yaml --dry-run=client | kubectl replace -f -
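A common way to consume such a ConfigMap is to inject all of its keys as environment variables. A minimal sketch, assuming a ConfigMap named app-config created as above:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo            # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config          # hypothetical ConfigMap name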

Pod

# Capture the pod name (assumes a single pod in the current namespace)
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

# Pod API (requires "kubectl proxy" running locally on port 8001)
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

Commands

k get nodes -o json | jq -r '[.items[] | {name: .metadata.name}] | .[].name'

k get nodes -o jsonpath='{.items[*].metadata.name}' | tr  " " "\n"

k get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

k get pods -o json | jq '.items[] | [{name: .metadata.name, images: [.spec.containers[].image]}]'

k exec -ti mybusybox-578566d4df-lx4bt -- wget -O- http://mynginx.fargate1:80

# assumes the common alias do='--dry-run=client -o yaml'; sed rewrites "command" to "args" in the generated manifest so the image entrypoint is kept
k create deploy mybusybox --image busybox $do -- sleep 1d | sed 's/command/args/g' | k apply -f -

export oj='-o json'
k get nodes $oj | jq '.items[] | {name: .metadata.name, taints: .spec.taints}'

k api-resources --api-group=appmesh.k8s.aws

Concepts

  • Pods are fungible resources.
  • Each Pod gets its own IP address; however, applications can't rely on this address for routing, because it changes when the Pod is recreated.
  • A Service is an abstraction that defines a logical set of Pods and a policy by which to access them (see the minimal Service manifest after this list).
  • The set of Pods targeted by a Service is determined by a selector defined in the Service spec.
  • Every node in a Kubernetes cluster runs kube-proxy, which implements a form of virtual IP for Services of type other than ExternalName.
  • Why doesn't Kubernetes use round-robin DNS for Services?
    • DNS implementations have a history of not respecting TTL on DNS records.
    • Applications can look up DNS records initially and cache them for all subsequent requests.
    • Low or zero TTL on DNS records could impose high load on DNS server, impacting performance.
  • When a Pod is run on a node, the kubelet adds a set of environment variables for each active Service.
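A minimal Service manifest illustrating the selector-to-Pod relationship (name, label and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: frontend              # placeholder name
spec:
  selector:
    app: frontend             # selects Pods labelled app=frontend
  ports:
  - port: 80                  # port exposed by the Service
    targetPort: 8080          # container port on the selected Pods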

Volumes

  • In the simplest terms, a Volume is a directory, possibly with some data inside, that is available to a Container at a mount point.
  • A Kubernetes Volume has an explicit lifetime: it is tied to the lifetime of the Pod that encloses it.
  • In a Pod specification (see the sketch after this list),
    • declare a Volume at .spec.volumes
    • mount it in a Container at .spec.containers[].volumeMounts
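A minimal sketch showing both fields, using an emptyDir Volume (names and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo           # placeholder name
spec:
  volumes:                    # .spec.volumes
  - name: cache
    emptyDir: {}
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:             # .spec.containers[].volumeMounts
    - name: cache
      mountPath: /cache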

Components

Component            Description
kube-apiserver       REST server that coordinates cluster management
etcd cluster         Distributed key-value store for cluster state
kube-scheduler       Schedules Pods for placement on nodes
Controller Manager   Runs the Node Controller, Replication Controller, and other controllers
kubelet              Agent that runs on each worker node
kube-proxy           Sets up network routing on nodes, typically using iptables

Ingress Resource

  • Provide externally reachable URLs for Services deployed in the cluster
  • Load balancing
  • TLS termination
  • Routing (path based)
  • Name based virtual hosting

Ingress Rules

  • Host (optional)
  • List of paths
  • Backend: Service name + port (see the sketch after this list)
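A minimal Ingress sketch covering host, path and backend (hostname, path and Service name are placeholders; an ingress controller must be installed for it to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # placeholder name
spec:
  rules:
  - host: example.com            # optional host
    http:
      paths:
      - path: /app               # path-based routing
        pathType: Prefix
        backend:
          service:
            name: frontend       # backend Service name
            port:
              number: 80         # backend Service port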

Links

Misc

k -n quorum get secrets -o name \
| cut -f 2 -d '/' \
| grep goquorum-node \
| xargs -I {} kubectl -n quorum delete secret {}