kubernetes - ghdrako/doc_snipets GitHub Wiki
- https://github.com/kubernetes
- https://kubernetes.io/docs/home/
- https://github.com/learnk8s/free-kubernetes
- https://kubernetes.io/blog/
- https://www.youtube.com/watch?v=HlAXp0-M6SY&t=718s - Tetris presentation
Master node:
- API server: Exposes the Kubernetes API. It is the frontend of the control plane. For example, kubectl translates your commands into API requests to the kube-apiserver and converts the responses back into kubectl output.
- Controller manager: Runs multiple controllers that are responsible for the overall health of the cluster. For example, the Deployment controller maintains the desired state of Pods within the cluster.
- etcd: A database that hosts the cluster state information.
- Scheduler: Responsible for placing the Pods across the nodes to balance resource consumption.
Worker node:
- kubelet: This reads the Pod specification and makes sure that the right containers run in the Pods. It interacts directly with the master node.
- kube-proxy: This is a network proxy running on each node. It enables the usage of services (we will learn about services shortly).
- Container runtime: This is responsible for running containers.
Kubernetes objects:
- Pods
- ReplicaSets
- Replication controllers
- Deployments
- Namespaces
Types of services
- ClusterIP
- NodePort
- LoadBalancer
- ExternalName
Object
- name - All objects are identified by name, which must be unique within a Kubernetes namespace.
- uid - All objects are assigned a unique identifier (UID) by Kubernetes, unique for the entire life of the cluster.
- labels - help identify and organize objects, or subsets of objects, for example:
...
labels:
  app: nginx
  env: test
  stack: frontend
...
kubectl get pods --selector=app=nginx
Pods are ephemeral
Pod phases:
- Pending - when images are being pulled from the repository, the pod will be in the pending phase.
- Running - the Pod is bound to a node and at least one container is running or starting
- Succeeded - all containers terminated successfully and will not be restarted
- Failed - at least one container terminated in failure
- Unknown - the Pod state cannot be retrieved, for example due to a communication error between the master and the kubelet
- CrashLoopBackOff - means that one of the containers in the pod exited unexpectedly, even after it was restarted at least once. Usually, CrashLoopBackOff means that the pod isn't configured correctly.
- ImagePullBackOff - a problem downloading the image: the image or tag doesn't exist, or the repository requires authentication.
Container states in a Pod:
- waiting
- running
- terminated
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
Use any of the following methods to choose where Kubernetes schedules specific Pods:
- nodeSelector field matching against node labels
- Affinity and anti-affinity
- nodeName field
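As a minimal sketch of the simplest of these methods, a Pod using nodeSelector to match a node label (the `disktype=ssd` label is an assumption for illustration):

```yaml
# Hypothetical example: schedule this Pod only onto nodes labeled disktype=ssd,
# e.g. after running: kubectl label nodes <node-name> disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd       # Pod stays Pending until a node with this label exists
  containers:
  - name: nginx
    image: nginx:latest
```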
As an application owner, you can create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.
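A sketch of such a PDB, assuming the replicated application's Pods carry the `app: nginx` label:

```yaml
# Hypothetical example: keep at least 2 Pods of the application available
# during voluntary disruptions (node drains, upgrades, etc.).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2        # alternatively use maxUnavailable, e.g. "25%"
  selector:
    matchLabels:
      app: nginx
```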
In general a Pod has the following DNS resolution:
pod-ip-address.my-namespace.pod.cluster-domain.example.
For example, if a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for your cluster is cluster.local, then the Pod has a DNS name:
172-17-0-3.default.pod.cluster.local.
Any Pods exposed by a Service have the following DNS resolution available:
pod-ip-address.service-name.my-namespace.svc.cluster-domain.example.
Each Pod has its DNS configured based on resolv.conf:
/etc/resolv.conf
Pods are assigned a DNS name: the Pod IP address with the dots replaced by dashes
curl http://<dash-separated-ip>.<namespace>.<resource type, e.g. pod or svc>.<cluster domain>
curl http://10-1-1-19.default.pod.cluster.local:8888
- Deployment
- StatefulSet
- DaemonSet
- Job
- Roll out updates to the Pods - Pods in the old ReplicaSet are shut down while new Pods are started in the new ReplicaSet
- Roll back Pods to the previous revision
- Scale or autoscale Pods
- well-suited for stateless applications
- States
  - progressing state
  - complete state
  - failed state
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
kubectl scale --replicas=0 deployments/conference-frontend-deployment
Deployments support updating images to a new version through a rolling update mechanism. When a Deployment is updated with a new version, it creates a new ReplicaSet and slowly increases the number of replicas in the new ReplicaSet as it decreases the replicas in the old ReplicaSet.
kubectl edit deployment hello # change image version to trigger rolling update
kubectl get replicaset
kubectl rollout history deployment/hello
kubectl rollout pause deployment/hello # Pause a rolling update
kubectl rollout status deployment/hello # current state of the rollout
kubectl rollout resume deployment/hello # resume rolling updates
kubectl rollout status deployment/hello
kubectl rollout undo deployment/hello # roll back to the previous version
kubectl rollout history deployment/hello # Verify the roll back in the history
kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{"\t"}{.spec.containers[0].image}{"\n"}{end}' # verify that all the Pods have rolled back to their previous versions
When you want to test a new deployment in production with a subset of your users, use a canary deployment. Canary deployments allow you to release a change to a small subset of your users to mitigate risk associated with new releases.
Create a canary deployment
A canary deployment consists of a separate deployment with your new version and a service that targets both your normal, stable deployment as well as your canary deployment.
kubectl create -f deployments/hello-canary.yaml # create canary deployment with new version of image
kubectl get deployments
On the hello service, the selector uses the app: hello label, which matches Pods in both the prod deployment and the canary deployment. However, because the canary deployment has fewer Pods, it is visible to fewer users.
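The referenced deployments/hello-canary.yaml file is not reproduced here; a minimal sketch of what such a canary Deployment might look like (image tag and the `track` label are assumptions):

```yaml
# Hypothetical canary Deployment: shares the app: hello label with the stable
# deployment, so the existing hello Service routes some traffic to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-canary
spec:
  replicas: 1                # small replica count => only a fraction of traffic
  selector:
    matchLabels:
      app: hello
      track: canary
  template:
    metadata:
      labels:
        app: hello           # matched by the Service selector
        track: canary        # extra label distinguishing canary Pods
    spec:
      containers:
      - name: hello
        image: hello:2.0.0   # the new version under test (hypothetical tag)
```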
Blue-green deployment: modify the load balancer to point to the new version only after it has been fully deployed.
Kubernetes achieves this by creating two separate deployments; one for the old "blue" version and one for the new "green" version. Use your existing hello deployment for the "blue" version. The deployments will be accessed via a Service which will act as the router. Once the new "green" version is up and running, you'll switch over to using that version by updating the Service.
A major downside of blue-green deployments is that you will need to have at least 2x the resources in your cluster necessary to host your application.
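One way to sketch the switch-over is a Service whose selector includes a version label; updating the selector re-routes all traffic at once (the version labels and ports here are assumptions):

```yaml
# Hypothetical router Service for blue-green: changing the version label in the
# selector switches traffic from the "blue" Pods to the "green" Pods.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    version: "2.0.0"   # was "1.0.0" (blue); update to point at the green Deployment
  ports:
  - port: 80
    targetPort: 8080
```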
Namespaces provide scope for naming resources such as pods, deployments, and controllers. Namespaces also let you implement resource quotas across the cluster. These quotas define limits for resource consumption within a namespace.
There are three initial namespaces in a cluster.
- default - the default namespace, for objects with no other namespace specified.
- kube-system - for objects created by the Kubernetes system itself. When you use the kubectl command, items in the kube-system namespace are excluded by default, but you can choose to view its contents explicitly.
- kube-public - for objects that are publicly readable to all users.
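As a sketch of the quota mechanism mentioned above, a hypothetical ResourceQuota limiting consumption within one namespace (namespace name and limits are assumptions):

```yaml
# Hypothetical example: cap the total resources that objects in the
# team-a namespace may request.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a          # hypothetical namespace
spec:
  hard:
    pods: "10"               # at most 10 Pods in the namespace
    requests.cpu: "4"        # total CPU requests across the namespace
    requests.memory: 8Gi     # total memory requests across the namespace
```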
A Service is a static IP address that represents a service or function in your infrastructure. It is a network abstraction over a set of Pods, together with a policy for accessing those Pods. Pods are selected using a label selector. Whenever a Service is created, Kubernetes creates Endpoints resources pointing at the selected Pods. By default, the master assigns a virtual IP address, also known as a ClusterIP, to the Service from internal IP tables. With GKE, this is assigned from the cluster's VPC network.
Overall, a Service provides durable endpoints for Pods. These endpoints can be accessed by exposing the Service internally, within a cluster, or externally to the outside world.