Kubernetes Official Tutorial
- [x] Learn Kubernetes Basics
- [-] Configuration
- [-] Security
- [x] Stateless Applications
- [ ] Stateful Applications
- [ ] Services
https://kubernetes.io/docs/tutorials/
"Scaling out a Deployment" means running multiple instances of the same application.
The purpose of this is to distribute the workload to multiple Nodes. The Kubernetes Component Service
is built with a load-balancer to ensure work gets equally distributed between all the Nodes. Scaling schedules Nodes to be allocated and then Pods to be deployed in those Nodes. One of the key features of Kubernetes is automating scaling. Auto-scaling is available through Kubernetes but the tutorial below is how to scale out a deployment manually.
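For comparison with the manual steps below, a minimal sketch of the automated route (this assumes CPU metrics are available, e.g. via the minikube metrics-server addon; the thresholds are illustrative):

# create a HorizontalPodAutoscaler that keeps average CPU near 50%,
# scaling the Deployment between 1 and 10 replicas
$ kubectl autoscale deployment kubernetes-bootcamp --cpu-percent=50 --min=1 --max=10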
# start minikube
$ minikube start
# optional run web server for dashboard
$ minikube dashboard
# create your `Deployment` from an image
$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
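If you want to see the declarative manifest this imperative command corresponds to, kubectl can print it without creating anything:

# optional: print the generated Deployment manifest instead of applying it
$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --dry-run=client -o yaml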
# Check it's deployed
$ kubectl get deployments
# create a service at port :8080 on this deployment
$ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
# check the service is created
$ kubectl get services
# Create 4 replicas of your deployment
$ kubectl scale deployments/kubernetes-bootcamp --replicas=4
# Check your deployments
$ kubectl get deployments
# Check 4 replica Pods have been created
$ kubectl get pods -o wide
# Check the Deployment event log to ensure these replicas were created (there should be 4 Pods with different IP addresses)
$ kubectl describe deployments/kubernetes-bootcamp
# scale down
$ kubectl scale deployments/kubernetes-bootcamp --replicas=2
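To watch the scale-down happen, one option is to stream Pod status changes:

# watch the extra Pods transition to Terminating in real time (Ctrl+C to exit)
$ kubectl get pods -w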

# describe the service to get more information on the exact endpoints for the replicas
$ kubectl describe services/kubernetes-bootcamp
# create an environment variable holding the node port assigned to the service
$ export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')"
# curl the application through the Minikube node IP and the node port
$ curl http://"$(minikube ip):$NODE_PORT"
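To see the Service's load-balancing in action, a quick sketch: the bootcamp image echoes the name of the Pod serving each request, so repeated requests should show traffic spread across the replicas.

# hit the service several times; responses should come from different Pods
$ for i in 1 2 3 4 5; do curl http://"$(minikube ip):$NODE_PORT"; done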

Source: https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/
"Rolling Updates" are performed by Kubernetes to ensure high availability, meaning the application is almost always available with no downtime. Users expect your applications to be accessible invariably. But how can developers upgrade or roll back versions if users also need constant access? Developers can stop the application, update it, then restart/deploy. If we want to give the user access continuously, we need to do 'rolling updates.' How do we do this?
- Pod1 contains our old application; it continues to serve users.
- Pod2 is created on the same Node. It holds our new application, but it is not yet exposed to users.
- Once Pod2 is running the new application and is stable, it signals to the Service that it is ready to be exposed.
- The Service breaks its connection to Pod1 and connects to Pod2. Users now reach Pod2 instead of Pod1.
- Pod1 is then terminated. (The strategy sketch below shows the knobs that control the pace of this replacement.)
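The pace of a rolling update is controlled by the Deployment's spec.strategy.rollingUpdate settings. A minimal sketch, assuming the default RollingUpdate strategy type (the values here are illustrative, not from the tutorial):

# never take a Pod down before its replacement is ready (maxUnavailable=0),
# and create at most one extra Pod during the update (maxSurge=1)
$ kubectl patch deployment kubernetes-bootcamp -p '{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":0,"maxSurge":1}}}}'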

# usage: kubectl set image deployments/<your-deployment> <your-container-name>=<your-new-image:tag>
$ kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
# check if a new pod is being created and if the old one is being terminated
$ kubectl get pods
# check rollout status
$ kubectl rollout status deployments/kubernetes-bootcamp
# check the current image version on the app (Containers > Container-Name > Image )
$ kubectl describe pods
# undo the rollout to revert to the previously deployed image
$ kubectl rollout undo deployments/kubernetes-bootcamp
# verify with the steps above
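Rollouts are versioned, so you can also inspect and target specific revisions:

# list the recorded revisions of this Deployment
$ kubectl rollout history deployments/kubernetes-bootcamp
# revert to a specific revision rather than just the previous one
$ kubectl rollout undo deployments/kubernetes-bootcamp --to-revision=1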
Source: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
Stateless vs. Stateful Applications: "The key difference between stateful and stateless applications is that stateless applications don't 'store' data, whereas stateful applications require backing storage."
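To make "backing storage" concrete, a minimal sketch of the claim a stateful app would mount (the name demo-pvc is hypothetical, and this assumes the cluster has a default StorageClass):

# request 1Gi of persistent storage; a stateful Pod would reference this
# claim in its volumes so data survives Pod restarts
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF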
https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
- Find our Service's external ingress address
- Find our Service's port
- curl http://<external-ip>:<port>

# create a project (namespace) for our sandbox
oc new-project jz-test
# create a deployment
kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
# check deployment and replicaSets have been created
kubectl get deployments hello-world
kubectl describe deployments hello-world
kubectl get replicasets
kubectl describe replicasets
# create a service
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
# check service is created
kubectl get services my-service
# Find the Service's Ingress and Port
# this will output the `LoadBalancer Ingress` and `Port` of the service
kubectl describe services my-service
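If you'd rather script this than read the describe output, a sketch using jsonpath (which field is populated, .ip or .hostname, depends on the cloud provider):

# external address of the load balancer (may be under .ip on some providers)
kubectl get services my-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# service port
kubectl get services my-service -o jsonpath='{.spec.ports[0].port}'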
# curl external IP address
# replace <external-ip> with `LoadBalancer Ingress` value (e.g. a8939abdfb07f405e8e1cb118a6ab8e6-1537860839.us-east-2.elb.amazonaws.com )
# replace <port> the `Port` value
# curl http://a8939abdfb07f405e8e1cb118a6ab8e6-1537860839.us-east-2.elb.amazonaws.com:8080
curl http://<external-ip>:<port>
# Clean up
kubectl delete services my-service
kubectl delete deployment hello-world
Ingress: an API object that manages external access to the services in a cluster, typically HTTP. (Source: Kubernetes documentation)
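For illustration only, a minimal sketch of such an object routing HTTP traffic to the my-service Service above (hello-world-ingress is a hypothetical name, and this assumes an ingress controller is installed in the cluster):

# route all HTTP paths on the controller's address to my-service:8080
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
EOF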