Deep-Dive into K8s Architecture
KUBE API SERVER The primary management component in K8s. kubectl reaches out to the API server, which authenticates and validates the request, then retrieves the data from the etcd cluster and responds with the requested information.
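A quick way to see kubectl talking to the API server is to raise the client verbosity; the exact request URLs and response codes depend on the cluster, so treat this as an illustrative sketch:

```bash
# Show the HTTP calls kubectl makes to the kube-apiserver
# (-v=6 prints request URLs and response codes; higher levels print headers/bodies)
kubectl get pods -v=6

# The raw API can also be reached through the built-in proxy
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
```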
FLOW of CREATION of a POD
1. The request is authenticated first and validated.
2. The API server creates a POD object without assigning it to a node, updates that info in the etcd cluster, and informs the user that the POD has been created.
3. The scheduler continuously monitors the API server, notices there is a POD with no node assigned, identifies the right NODE to place the POD on, and communicates that back to the API server; the API server updates the info in the etcd cluster.
4. The API server passes the info to the kubelet on the appropriate worker node.
5. The kubelet creates the POD on the node and instructs the container runtime (via the CRI) to deploy the app image.
6. Once done, the kubelet reports the status back to the API server, and the API server updates the info in the etcd cluster.
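One way to watch this flow on a live cluster is to create a pod and then read its events, which record the scheduling, image pull, and container start steps; the pod name and image below are only examples:

```bash
# Create a simple pod (nginx is just an example image)
kubectl run web --image=nginx

# The events show the sequence: Scheduled -> Pulling -> Pulled -> Created -> Started
kubectl describe pod web | sed -n '/Events:/,$p'

# Pods the scheduler has not yet placed still have an empty spec.nodeName
kubectl get pods --field-selector=spec.nodeName= -o wide
```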
kube-apiserver setup using kubeadm: kubeadm deploys the API server as a POD in the kube-system namespace. In a non-kubeadm setup, we can inspect it by viewing the kube-apiserver service.
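A sketch of both inspection paths, assuming the usual default locations (the manifest and unit-file paths can differ between setups):

```bash
# kubeadm cluster: the API server runs as a static pod in kube-system
kubectl get pods -n kube-system | grep kube-apiserver
# its definition typically lives in the static pod manifest directory
cat /etc/kubernetes/manifests/kube-apiserver.yaml

# non-kubeadm cluster: it usually runs as a systemd service
cat /etc/systemd/system/kube-apiserver.service
# the effective options can also be read from the running process
ps -aux | grep kube-apiserver
```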
KUBE CONTROLLER MANAGER A controller is a process that continuously monitors the state of various components in the cluster. Node-Controller: Monitors the status of nodes and takes the actions needed to keep the apps running, working through the API server. This controller checks the status of nodes every 5 seconds (NODE MONITOR PERIOD). If it stops receiving the heartbeat from a node, the node is unreachable, but it waits 40 seconds (NODE MONITOR GRACE PERIOD) before marking it unreachable. After a node is marked unreachable, it gives it 5 minutes (POD EVICTION TIMEOUT) to come back up. If it doesn't, it removes the PODs assigned to the unreachable node and provisions them on healthy nodes, provided the pods are part of a ReplicaSet.
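These three timings map to kube-controller-manager flags; on a kubeadm cluster they can be checked (or overridden) in its static pod manifest. The values in the comments are the upstream defaults:

```bash
# Inspect the controller manager's options on a kubeadm cluster
grep -E "node-monitor|pod-eviction" /etc/kubernetes/manifests/kube-controller-manager.yaml

# Relevant flags and their defaults:
#   --node-monitor-period=5s           how often node status is checked
#   --node-monitor-grace-period=40s    how long to wait before marking a node unreachable
#   --pod-eviction-timeout=5m0s        how long to wait before evicting pods from it
```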
REPLICATION CONTROLLER Responsible for monitoring the status of ReplicaSets and ensuring the desired number of pods in each ReplicaSet is maintained.
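The desired vs. actual counts are visible directly from kubectl; the deployment name below is just a placeholder:

```bash
# Desired, current and ready replica counts for every ReplicaSet
kubectl get replicasets

# Scale an example deployment and watch its ReplicaSet converge to the new desired count
kubectl scale deployment web --replicas=3
kubectl get rs -w
```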
ALL THE CONTROLLERS are packaged into a single process, the KUBE CONTROLLER MANAGER.
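A rough way to confirm this, assuming the kube-controller-manager binary is reachable on the node (in kubeadm clusters it lives inside the static pod's container):

```bash
# The controller manager runs as a single pod/process that hosts all the controllers
kubectl get pods -n kube-system | grep kube-controller-manager

# Its --controllers flag (default "*", i.e. all controllers) lists what it can run
kube-controller-manager --help | grep controllers
```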
KUBE-SCHEDULER The scheduler decides which node each POD is placed on, based on specific criteria. It looks at each POD and finds the best NODE for it, going through two phases. FILTER NODES: The scheduler filters out nodes that do not fit the POD's profile, rejecting nodes that do not have enough compute resources. RANK NODES: The scheduler then ranks the remaining nodes using a priority function, for example by calculating the amount of free resources that would be left on each node after placing the POD.
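A simple way to see the filter phase in action is to request more resources than any node can provide and read the scheduler's event; the pod name and request sizes below are arbitrary examples:

```bash
# Pod with intentionally huge resource requests - no node will pass the filter phase
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: big
spec:
  containers:
  - name: big
    image: nginx
    resources:
      requests:
        cpu: "100"
        memory: "500Gi"
EOF

# The pod stays Pending; the scheduler's event explains why, e.g. "Insufficient cpu"
kubectl describe pod big | grep -A 5 Events
```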
KUBELET
The kubelet on a worker node registers the node with the K8s cluster. When it receives a request to load a POD, it asks the container runtime engine (e.g. Docker) to pull the image and run an instance. The kubelet then continuously monitors the status of the POD and reports it back to the kube-apiserver on a timely basis.
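The kubelet runs as a service on each worker node rather than as a pod; a sketch of inspecting it, assuming a systemd-based node:

```bash
# The kubelet runs as a systemd service on each node
systemctl status kubelet
ps -aux | grep kubelet

# Nodes it has registered (and their status) as seen from the API server
kubectl get nodes -o wide
```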
KUBE-PROXY Every POD can reach every other POD; this is accomplished by deploying a POD networking solution to the cluster. A POD network is an internal virtual network that spans all nodes and to which all PODs connect. Say a web app runs on the first node and a DB on the second node: the app can talk to the DB POD via its IP, but there is no guarantee that this IP will always stay the same. The better way to access the DB is through a Service. The Service also gets an IP, and whenever a POD tries to reach the Service using its IP or name, the Service forwards the request to the DB POD. The Service cannot join the POD network, because it is not a container and has no interfaces; it is just a virtual component in the cluster. This is where kube-proxy comes in. It is a process that runs on each node in the K8s cluster. Its job is to look for new Services, and every time a Service is created, it creates the appropriate rules on each node to forward traffic destined for that Service to the backend PODs. One way it does this is with iptables rules.
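kube-proxy is typically deployed as a DaemonSet, and in iptables mode its Service rules can be listed on any node; the deployment and service names below are just examples:

```bash
# kube-proxy runs on every node, usually as a DaemonSet in kube-system
kubectl get daemonset kube-proxy -n kube-system

# Expose an example db deployment as a Service with a cluster-internal IP
kubectl expose deployment db --port=3306 --name=db-service

# In iptables mode, kube-proxy programs NAT rules mapping the Service IP to backend pod IPs
sudo iptables -t nat -L KUBE-SERVICES -n | grep db-service
```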