K8s Essentials - degutos/wikis GitHub Wiki

K8s Essentials

How to Install a Kubernetes Cluster [Ubuntu]

  1. Install Docker on all three nodes. Run the following on each node:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
  2. Verify that Docker is up and running with:
sudo systemctl status docker

Make sure the Docker service status is active (running)!

  3. Install kubeadm, kubelet, and kubectl. Run this on all three nodes:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
sudo apt-mark hold kubelet kubeadm kubectl

Note: Swap must be disabled on all three nodes:

sudo vim /etc/fstab
# comment out the swap line
sudo swapoff -a
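
The fstab edit can also be done non-interactively with `sed`. A sketch, shown here against a sample copy so it can be tried safely first; on the real nodes the same `sed` expression would target `/etc/fstab` (with `sudo`):

```shell
# Build a small sample fstab to demonstrate the edit on.
cat > /tmp/fstab.sample << 'EOF'
UUID=1234-abcd /         ext4 defaults 0 1
/swapfile      none      swap sw       0 0
EOF

# Prefix any line containing the standalone word "swap" with '#'.
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.sample

# Show the commented-out swap entry.
grep '^#' /tmp/fstab.sample
```

On a node you would run `sudo sed -i '/\bswap\b/ s/^/#/' /etc/fstab` followed by `sudo swapoff -a`, which has the same effect as editing the file by hand.
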
  4. Bootstrap the cluster. On the Kube master node, run:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

That command may take a few minutes to complete.

When it is done, set up the local kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Take note that kubeadm init printed a long kubeadm join command to the screen; you will need it when joining the worker nodes.
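
If the join command has scrolled away or the token has expired, it can be regenerated later on the master. A sketch, wrapped in a function so it can be pasted anywhere (`kubeadm token create --print-join-command` issues a fresh token and prints the full join command, including the discovery CA cert hash):

```shell
# Regenerate the kubeadm join command on the master node.
regenerate_join_command() {
  sudo kubeadm token create --print-join-command
}
```

Run `regenerate_join_command` on the master and copy its output to the workers.
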

Run the following command on the Kube master node to verify the cluster is up and running:

kubectl version

This command should return both a Client Version and a Server Version.

  5. Join the two Kube worker nodes to the cluster. Copy the kubeadm join command that kubeadm init printed earlier, with its token and hash, and run it on both worker nodes with sudo in front:
sudo kubeadm join $some_ip:6443 --token $some_token --discovery-token-ca-cert-hash $some_hash

Now, on the Kube master node, make sure your nodes joined the cluster successfully:

kubectl get nodes

Verify that all three of your nodes are listed. It will look something like this:

NAME            STATUS     ROLES    AGE   VERSION
ip-10-0-1-101   NotReady   master   30s   v1.12.2
ip-10-0-1-102   NotReady   <none>   8s    v1.12.2
ip-10-0-1-103   NotReady   <none>   5s    v1.12.2

Note that the nodes are expected to be in the NotReady state for now.

  6. Set up cluster networking (this guide uses the Weave Net plugin).

Turn on iptables bridge calls on all three nodes:

echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
  7. Next, install the pod network plugin. Run this only on the Kube master node:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Note: The command above installs the Weave Net pod network plugin.

Then check your nodes again to see whether they are Ready:

kubectl get nodes

After a short time, all three nodes should be in the Ready state. If they are not all Ready the first time you run kubectl get nodes, wait a few moments and try again. It should look something like this:

NAME            STATUS   ROLES    AGE   VERSION
ip-10-0-1-101   Ready    master   85s   v1.12.2
ip-10-0-1-102   Ready    <none>   63s   v1.12.2
ip-10-0-1-103   Ready    <none>   60s   v1.12.2
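
If the nodes stay NotReady, the cluster add-on pods are the first place to look. A small sketch (run on the master; the helper function just wraps the standard kubectl call):

```shell
# List the kube-system pods; the weave-net and coredns pods must all
# reach the Running state before the nodes report Ready.
check_system_pods() {
  kubectl get pods -n kube-system -o wide
}
```
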

Deploying a Simple Service to Kubernetes

Create a deployment for the store-products service with four replicas.

  1. Log in to the Kube master node.
  2. Create the deployment with four replicas:
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-products
  labels:
    app: store-products
spec:
  replicas: 4
  selector:
    matchLabels:
      app: store-products
  template:
    metadata:
      labels:
        app: store-products
    spec:
      containers:
      - name: store-products
        image: linuxacademycontent/store-products:1.0.0
        ports:
        - containerPort: 80
EOF
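
Before creating the service, it is worth confirming that the deployment rolled out. A sketch using standard kubectl subcommands (run on the master):

```shell
# Wait until all four replicas are available, then list the pods the
# deployment created (matched by the app=store-products label).
verify_store_products() {
  kubectl rollout status deployment/store-products
  kubectl get pods -l app=store-products
}
```
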

Create a store-products service and verify that you can access it from the busybox testing pod.

  1. Create a service for the store-products pods:
cat << EOF | kubectl apply -f -
kind: Service
apiVersion: v1
metadata:
  name: store-products
spec:
  selector:
    app: store-products
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
  2. Make sure the service is up in the cluster:
kubectl get svc store-products

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
store-products   ClusterIP   10.104.11.230   <none>        80/TCP    59s

Use kubectl exec to query the store-products service from the busybox testing pod.

kubectl exec busybox -- curl -s store-products
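
The busybox testing pod is assumed to already exist in the cluster. If it does not, a minimal sketch of one follows (the image tag is an assumption; any image that bundles curl works):

```shell
# Write a minimal pod manifest; apply it with kubectl afterwards.
cat << 'EOF' > /tmp/busybox-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: radial/busyboxplus:curl   # assumed curl-capable image
    command: ["sleep", "3600"]
EOF
# then: kubectl apply -f /tmp/busybox-pod.yaml
```
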

Deploy Stan's Robot Shop app to the cluster.

Clone the Git repo that contains the pre-made descriptors:

cd ~/
git clone https://github.com/linuxacademy/robot-shop.git

Since this application has many components, it is a good idea to create a separate namespace for the app:

kubectl create namespace robot-shop

Deploy the app to the cluster:

kubectl -n robot-shop create -f ~/robot-shop/K8s/descriptors/

Check the status of the application's pods:

kubectl get pods -n robot-shop

You should be able to reach the robot shop app from your browser using the Kube master node's public IP:

http://$kube_master_public_ip:30080
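
A quick reachability check can be done from any machine that can reach the master ($kube_master_public_ip remains a placeholder for your master's public IP; 30080 is the NodePort from the descriptors):

```shell
# Print only the HTTP status code returned by the shop's front end;
# 200 means it is serving.
check_robot_shop() {
  curl -s -o /dev/null -w '%{http_code}\n' "http://${kube_master_public_ip}:30080"
}
```
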

Scale up the MongoDB deployment to two replicas instead of just one.

Edit the deployment descriptor:

kubectl edit deployment mongodb -n robot-shop

You should see some YAML describing the deployment object.

Under spec:, look for the line that says replicas: 1 and change it to replicas: 2. Save and exit. Check the status of the deployment with:

kubectl get deployment mongodb -n robot-shop

After a few moments, the number of available replicas should be 2.
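
The same change can be made without opening an editor, using kubectl's declarative scale subcommand; a sketch (run on the master):

```shell
# Equivalent to changing replicas: 1 to replicas: 2 in the deployment
# spec, then watching the rollout complete.
scale_mongodb() {
  kubectl scale deployment mongodb --replicas=2 -n robot-shop
  kubectl rollout status deployment/mongodb -n robot-shop
}
```
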

Done!
