# Kubernetes and its Advantages
Simply put, container orchestration is the management of containers at scale: automated deployment, auto-scaling, load balancing, service discovery, resource management, and so on. In other words, a container orchestrator provides a platform that manages the connectivity between containers and automatically scales them up or down based on load.
Kubernetes is a container orchestration technology. With Kubernetes, we can orchestrate the deployment and management of hundreds to thousands of containers in a clustered environment within a matter of seconds. Our application becomes fault-tolerant and highly available, because multiple instances run on different nodes, so the failure of one instance does not bring the application down.
Let's understand some basic terms that come hand-in-hand with Kubernetes.
- Node (known as a minion in the past) - a worker machine (either physical or virtual) where Kubernetes is installed and where containers are launched by Kubernetes.
- Cluster - a collection of nodes grouped together to achieve load sharing and high availability.
- Master - responsible for managing the cluster, i.e. membership information, workload management, etc. The master manages the other nodes in the cluster and performs the orchestration.
Let's say you have a system where you installed Kubernetes. This means you installed components such as the API server, the etcd service, the kubelet service, a container runtime, controllers, and a scheduler.
- API server - the front end of Kubernetes. Users, CLIs, etc. talk to the API server to interact with the cluster.
- etcd - a distributed, reliable key-value store that holds all data used to manage the cluster. When you install Kubernetes on multiple nodes with multiple masters, etcd stores information about all the nodes in the cluster in a distributed manner, and also implements locks to make sure there are no conflicts between masters.
- Scheduler - distributes work, or containers, across multiple nodes and assigns freshly built containers to nodes.
- Controllers - the brain of the Kubernetes cluster. They notice when nodes, containers, or endpoints go down and decide when to bring containers back up.
- Container runtime - the underlying software used to run containers (Docker in our case); it handles the container lifecycle, creating and managing container processes.
- kubelet - the agent that runs on each node in the cluster and makes sure that containers are running on the node as expected.
So in a cluster we have master nodes and worker nodes. The master node manages the workload, and the worker nodes are where the containers are hosted (each worker runs a container runtime and the kubelet agent, which talks to the master, reports the health of the worker node, and carries out actions requested by the master). The master runs the kube-apiserver along with the controller manager and the scheduler.
We have a command-line utility called kubectl (also called "kube control") that is used to deploy and manage applications on a Kubernetes cluster, gather cluster information, check the status of nodes, and so on. Some basic use cases:
```
kubectl cluster-info   # view information about the cluster
kubectl get nodes      # list all nodes that are part of the cluster
kubectl get pods       # list the pods in the current namespace
```
# SETTING UP KUBERNETES
Minikube (single-instance setup) and kubeadm (multi-node setup) can be used to set up Kubernetes on our laptops or virtual machines. Cloud service providers like AWS, GCP, and Azure offer hosted Kubernetes solutions. For learning you can use the site https://labs.play-with-k8s.com/

Minikube bundles all the different Kubernetes components into a single image with a pre-configured single-node Kubernetes cluster, so we can get started in a matter of minutes. Minikube also provides an executable command-line utility (minikube) that will automatically download and deploy Kubernetes onto a virtualization platform (like Oracle VirtualBox, VMware Fusion, or KVM). The only requirement is that you have a hypervisor installed on your machine, along with kubectl and the Minikube executable.
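As a sketch of the Minikube workflow (the driver flag shown assumes VirtualBox; other hypervisors use other drivers):

```shell
# Start a pre-configured single-node cluster inside a VirtualBox VM
minikube start --driver=virtualbox

# Verify that the single node is up and ready
kubectl get nodes

# Tear the cluster down when you are finished experimenting
minikube delete
```

These commands require Minikube, kubectl, and a hypervisor to already be installed, so they are meant as a reference for the flow rather than something to paste blindly.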
For a multi-node Kubernetes cluster setup, you can use the kubeadm tool. Install a container runtime (like Docker) along with the kubeadm tool on all nodes. Initialize the master server, where the required components are installed, configure the pod network (a special network between the Kubernetes master and worker nodes), then join the worker nodes to the master node.
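The kubeadm steps above can be sketched roughly as follows (a minimal outline; the pod-network CIDR and the add-on manifest depend on which network plugin you choose, and the join token and hash are printed by `kubeadm init`):

```shell
# On the master: initialize the control-plane components
kubeadm init --pod-network-cidr=10.244.0.0/16

# Still on the master: install a pod network add-on (Flannel, Calico, etc.)
kubectl apply -f <pod-network-manifest.yaml>

# On each worker: join the cluster using the values printed by `kubeadm init`
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

The angle-bracket placeholders must be filled in from your own environment; this is an outline of the flow, not a copy-paste recipe.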
We are now ready to deploy our application on Kubernetes. Since we are using the Docker container runtime, our application needs to be developed, built into Docker images, and made available in a registry so Kubernetes can pull it down.
# HOW DOES THE KUBERNETES APPLICATION DEPLOYMENT WORK?
As we understood before, applications are deployed on worker nodes. But rather than being deployed directly on a worker node as bare containers, the containers are encapsulated in a Pod. A Pod is a single instance of an application: within this encapsulated environment, you will have the Docker containers of your application (built from your image and pulled from the registry).
To get pod information, run `kubectl get pods`.
# PODS
Pods can be created using a YAML-based configuration file. YAML is used to represent configuration data as key-value pairs, arrays/lists, and dictionaries/maps.
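For illustration, the three YAML structures mentioned above (key-value pairs, lists, and maps) look like this (the names used here are made up for the example):

```yaml
# Key-value pairs
name: myapp
replicas: 3

# A list (array)
ports:
  - 80
  - 443

# A dictionary (map), which can nest further key-values
metadata:
  name: myapp-pod
  labels:
    app: myapp
```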
A Kubernetes pod-definition YAML file must have these four top-level required fields:
```yaml
apiVersion:
kind:
metadata:
spec:
```
- apiVersion - the version of the Kubernetes API we will use to create the object (e.g. v1). It is a string.
- kind - the type of object we are trying to create (e.g. Pod, Service, ReplicaSet, Deployment). It is a string.
- metadata - data about the object (such as its name and labels). It is a dictionary.
- spec - a dictionary where we add the object's properties, such as its list of containers (each with a name and an image).
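Putting the four fields together, a minimal pod definition might look like the following (the pod name `myapp-pod` and its labels are made-up examples; the image is the public nginx image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx-container
      image: nginx
```

You can then create the pod with `kubectl create -f pod-definition.yaml` and verify it with `kubectl get pods`.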