03 DevOps && Kubernetes notes - lukes8/wiki-notes GitHub Wiki

MicroK8s, custom Linux VM on Windows via Multipass, troubleshooting

[Multipass](https://multipass.run/) is the fastest way to create a complete Ubuntu virtual machine on Linux, Windows or macOS, and it’s a great base for using MicroK8s.

How can a service URL be accessed from outside the host? See step 5 of the tutorial below:
https://ubuntu.com/tutorials/getting-started-with-kubernetes-ha?&_ga=2.30574521.706180682.1699442280-582855309.1699442280#5-deploy-a-sample-containerised-application

Why can't the node join the cluster? The 503 below means both VMs resolved to the same (NAT) IP address, so the joining node appears to be contacting itself - each node needs its own distinct IP.
ubuntu@microk8s-vm:~$ microk8s join 10.0.2.15:25000/0c6f060d16200c6be7ceaaea0bc32f88/b3d6ede72b4c
Contacting cluster at 10.0.2.15
Connection failed. The joining node has the same IP (10.0.2.15) as the node we contact. (503).

snap install microk8s --classic --channel=1.23/stable

Issue thread
https://github.com/canonical/microk8s/issues/3225

After installing Multipass, launch a VM:
multipass launch --name microk8s-vm --mem 4G --disk 40G

multipass list

multipass shell microk8s-vm
Shut down the VM:

multipass stop microk8s-vm
Delete and clean up the VM:

multipass delete microk8s-vm
multipass purge

Ingress, NodePort, pod

The fundamental unit of Kubernetes is the pod,
and pods run on nodes.
A node is just a virtual (or physical) server.

kubectl get svc

Ingress
When we want something on top of the individual services (e.g. NodePort services, see kubectl get svc)
to route each request to the appropriate microservice, we use an Ingress.
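A minimal sketch of such an Ingress (the service name and port are assumptions based on the currency-exchange example used elsewhere in these notes):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  rules:
  - http:
      paths:
      - path: /currency-exchange      # route by path prefix
        pathType: Prefix
        backend:
          service:
            name: currency-exchange   # assumed Service name
            port:
              number: 8000
```

An Ingress controller must be running for the rules to take effect - in MicroK8s, for example, via microk8s enable ingress.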

Centralized configuration as ConfigMap


apiVersion: v1
data:
  CURRENCY_EXCHANGE_SERVICE_HOST: http://currency-exchange
kind: ConfigMap
metadata:
  name: currency-conversion-config-map
  namespace: default
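A sketch of how a Deployment's container could consume this ConfigMap as environment variables (envFrom injects every key in the map; the container name and image follow the currency-conversion naming used in these notes):

```yaml
spec:
  containers:
  - name: currency-conversion
    image: in28min/currency-conversion-devops
    envFrom:
    - configMapRef:
        name: currency-conversion-config-map   # the ConfigMap defined above
```

The container then sees CURRENCY_EXCHANGE_SERVICE_HOST as a normal environment variable, with no value hardcoded in the Deployment.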

Load balancing, service discovery, DNS

There are two important challenges with microservices
that Kubernetes solves out of the box:
one is service discovery, the other is load balancing.
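As a sketch: every Service gets a stable cluster DNS name, and traffic sent to it is load-balanced across the matching pods (the names and port numbers here follow the currency-exchange example used later in these notes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: currency-exchange    # becomes the DNS name currency-exchange.default.svc.cluster.local
spec:
  selector:
    app: currency-exchange   # requests are load-balanced across pods with this label
  ports:
  - port: 8000
    targetPort: 8000
```

Other pods in the same namespace can simply call http://currency-exchange:8000 - no hardcoded IPs needed.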

readinessProbe, livenessProbe, health checks

readinessProbe - checks whether the container is ready to receive requests; with the settings below the check runs every 10 seconds and tolerates up to 5 consecutive failures, after which the pod is removed from the Service's endpoints (it is not restarted)
livenessProbe - checks whether the container is running in a healthy state (no failures, deadlocks etc.); with the same settings, the container is restarted after 5 consecutive failed checks

template:
    metadata:
      labels:
        app: currency-exchange
    spec:
      containers:
      - name: currency-exchange
        image: in28min/currency-exchange-devops
        imagePullPolicy: IfNotPresent
        ports:
        - name: liveness-port
          containerPort: 8000
        resources: #CHANGE
          requests:
            cpu: 100m
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 1024Mi #256Mi 
        readinessProbe:
          httpGet:
            path: /
            port: liveness-port
          failureThreshold: 5
          periodSeconds: 10
          initialDelaySeconds: 60
        livenessProbe:
          httpGet:
            path: /
            port: liveness-port
          failureThreshold: 5
          periodSeconds: 10
          initialDelaySeconds: 60
      restartPolicy: Always
      terminationGracePeriodSeconds: 30

Cluster, sorting with JSONPath, commands: top, cluster-info etc.

Prints CPU/memory statistics for the nodes in the cluster (say we have 3 nodes):
kubectl top node
kubectl cluster-info 
kubectl cluster-info dump
kubectl get svc --all-namespaces --sort-by=.metadata.name
kubectl get svc --all-namespaces --sort-by=.spec.type

Why define your desired state in YAML?

One reason Kubernetes is so popular is that it is declarative: you describe the desired state in YAML.

We can use one YAML file to describe our Deployment, Service etc.

Let's say, instead of this imperative sequence:
kubectl create deployment hello-world-rest-api --image=in28min/hello-world-rest-api:0.0.1.RELEASE
kubectl expose deployment hello-world-rest-api --type=LoadBalancer --port=8080
kubectl scale deployment hello-world-rest-api --replicas=3
kubectl delete pod hello-world-rest-api-58ff5dd898-62l9d
kubectl autoscale deployment hello-world-rest-api --max=10 --cpu-percent=70
kubectl edit deployment hello-world-rest-api #minReadySeconds: 15
kubectl set image deployment hello-world-rest-api hello-world-rest-api=in28min/hello-world-rest-api:0.0.2.RELEASE

we can simply use this:
deployment.yaml (which also contains the Service spec after a --- separator)
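A hedged sketch of what such a deployment.yaml could look like for the hello-world-rest-api example above (the values mirror the imperative commands: 3 replicas, minReadySeconds 15, a LoadBalancer Service on port 8080):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-rest-api
spec:
  replicas: 3
  minReadySeconds: 15
  selector:
    matchLabels:
      app: hello-world-rest-api
  template:
    metadata:
      labels:
        app: hello-world-rest-api
    spec:
      containers:
      - name: hello-world-rest-api
        image: in28min/hello-world-rest-api:0.0.2.RELEASE
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-rest-api
spec:
  type: LoadBalancer          # exposes the pods behind an external load balancer
  selector:
    app: hello-world-rest-api
  ports:
  - port: 8080
    targetPort: 8080
```

Apply everything in one go with kubectl apply -f deployment.yaml.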

Get events about k8s and what happens behind the scenes, rollout

kubectl get deployments -o wide
kubectl get events --sort-by=.metadata.creationTimestamp | less
kubectl get svc -o wide
kubectl get all -o wide
kubectl rollout history deployment/<app1>
kubectl logs -f <pod1>

Useful links, Udemy wiki about k8s

https://github.com/lukes8/devops-master-class/tree/master/kubernetes


Containers, docker

Kubernetes can run not only Docker containers but any container runtime compatible with the OCI, the Open Container Initiative specification.

Load balancing, service, expose, pods

Pods serve the requests for a given Service (for example a Service of type LoadBalancer, see GCP).

The load balancer distributes the traffic across the pods backing the Service - e.g. across 3 pods.

K8s deployment strategies, rolling update

There are a variety of deployment strategies
for releasing a new version of an application.
I might want to send 50% of traffic to v1
and 50% to v2 (a canary-style split),
or do a rolling update:
first create one instance of v2, test it,
and once it's fine, reduce the number of v1 instances;
then create another v2 instance,
and once it's up and running,
reduce v1 again - and so on.
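The rolling-update behavior described above is configured on the Deployment's strategy field; a minimal sketch (the exact numbers are illustrative assumptions):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 extra new-version pod is created at a time
      maxUnavailable: 0   # an old pod is only removed once a new one is ready
```

Note that a weighted 50/50 split between v1 and v2 is not something a plain Deployment does; that typically needs two Deployments behind one Service, or an Ingress/service mesh with traffic weights.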

https://github.com/lukes8/devops-master-class/tree/master

GKE - Google Kubernetes Engine

Kubernetes

https://www.kubermatic.com/blog/keeping-the-state-of-apps-1-introduction-to-volume-and-volumemounts/

KubeOne - https://github.com/kubermatic/kubeone#getting-started
