Kubernetes - CloudCommandos/JohnChan GitHub Wiki

Kubernetes Overview

What a Container Orchestrator does that Docker Engine doesn't

A container orchestrator can manage containers across multiple nodes, whereas an instance of Docker Engine can only manage containers residing on its own host. A container orchestrator handles higher-level tasks such as service discovery, load balancing, container scheduling, and network policies, while Docker Engine executes the actual operations on the containers themselves.
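As an illustration, the scheduling and replication concerns an orchestrator takes on can be expressed declaratively. A minimal sketch of a Kubernetes Deployment (the name, image, and replica count below are arbitrary examples, not part of this guide's workload):

```yaml
# Hypothetical example: ask the orchestrator to keep 3 nginx replicas
# running somewhere in the cluster. Node placement, rescheduling on
# failure, and endpoint bookkeeping are handled by Kubernetes rather
# than by hand on each Docker host.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
      - name: web
        image: nginx:latest
```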

Orchestrators in the Wild

Orchestrator             Description
Kubernetes               Supports up to 5,000 nodes, 150,000 total pods, 300,000 total containers, and 110 pods per node
Docker Swarm             Can handle clusters of up to 1,000 nodes and up to 30,000 containers
Marathon                 Container orchestration framework for Apache Mesos
Amazon ECS               Amazon's managed container orchestration service
Azure Container Service  Microsoft Azure's container hosting service (superseded by AKS)
HashiCorp Nomad          General-purpose workload orchestrator from HashiCorp

How K8s Controllers Work

A Kubernetes cluster consists of at least one master node and a number of worker nodes. The master node is where the cluster's controllers reside. By default the master node does not host any workload containers; that is the job of the worker nodes. The master node communicates with the worker nodes through a node agent, the kubelet, running on each worker node. Pod specifications are passed from the master node to the kubelets, and the kubelets instruct Docker accordingly to manage the containers of the pods.
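The pod specifications handed to a kubelet are ordinary Pod manifests. A minimal sketch (the name and image here are illustrative, not part of this guide's workload):

```yaml
# Hypothetical minimal Pod spec. The scheduler on the master assigns it
# to a worker node; that node's kubelet then tells the container
# runtime (Docker here) to pull the image and start the container.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    ports:
    - containerPort: 80
```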


Kubernetes Workload - WordPress and MariaDB

Set up Ceph Cluster

A persistent volume of type "local" is active on one worker node at a time, so data cannot be synchronized across pods on different nodes, nor preserved during node migration, without a third-party storage solution. Many options are available, but for this task we will use Ceph. You will need at least 3 OSDs for Ceph to report a healthy status. Make sure that all your nodes can be accessed via ssh from the admin-node.

On your Ceph Admin Node:
Install Ceph-Deploy. Change 'jewel' to your Ceph release.

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt install ceph-deploy

Create Ceph-Cluster directory

mkdir my-ceph
cd my-ceph

Create the Ceph cluster

ceph-deploy new admin-node

Edit the ceph.conf file and add the Ceph public network

nano ceph.conf

public network = 10.142.10.0/24
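For reference, after `ceph-deploy new` the resulting ceph.conf might look roughly like the sketch below. The fsid is generated for your cluster, and the mon_host address shown is an assumption based on the network used in this guide; the public network line is the one you add by hand.

```
[global]
fsid = <generated-uuid>
mon_initial_members = admin-node
mon_host = 10.142.10.1
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# added manually:
public network = 10.142.10.0/24
```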

Install Ceph packages on your Ceph cluster nodes

ceph-deploy install admin-node ceph-node-2 ceph-node-3

Deploy initial monitors and create Ceph keys

ceph-deploy mon create-initial

Copy configuration files and keys to the nodes

ceph-deploy admin admin-node ceph-node-2 ceph-node-3

Create OSDs. Make sure that the disks are at least 6GB.

ceph-deploy osd create admin-node:/dev/sdb
ceph-deploy osd create ceph-node-2:/dev/sdb
ceph-deploy osd create ceph-node-3:/dev/sdb

Create Meta-data server for CephFS

ceph-deploy mds create admin-node

Add Monitors

ceph-deploy mon add ceph-node-2
ceph-deploy mon add ceph-node-3

Check the Ceph Cluster

ceph quorum_status --format json-pretty

Create CephFS

Create two storage pools

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128

Create CephFS using the two pools

ceph fs new cephfs cephfs_metadata cephfs_data

Check that CephFS is up

ceph mds stat

#e10: 1/1/1 up {0=admin-node=up:active}

Obtain the ceph admin key from the admin-node. Copy only the key.

ceph auth get client.admin
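The output is a keyring entry of roughly the following shape; copy only the value after `key =` (the value below is a made-up placeholder):

```
[client.admin]
        key = <base64-key-placeholder>
        caps mds = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
```

Alternatively, `ceph auth get-key client.admin` prints just the bare key with no surrounding keyring text.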

If you are using the root user to run kubectl commands, set:

export KUBECONFIG=/etc/kubernetes/admin.conf

Store the Ceph admin key as a Kubernetes secret

kubectl create secret generic cephfs-pass --from-literal=key=YOUR_KEY
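Kubernetes stores secret values base64-encoded. If you prefer declaring the secret in a manifest rather than using `--from-literal`, you can encode the key yourself; a sketch using a made-up placeholder value:

```shell
# Placeholder value for illustration only; substitute the real key
# copied from `ceph auth get client.admin`.
CEPH_KEY='secret'
# Secret .data fields must be base64-encoded:
printf '%s' "$CEPH_KEY" | base64   # prints: c2VjcmV0
```

The encoded value would then go under `data.key` in a `kind: Secret` manifest named cephfs-pass.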

Deploy Ingress for port 80 and 443 traffic handling

Work in a new directory

mkdir ~/kubeproj
cd ~/kubeproj

Create Ingress deployment file deployIngress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: subdomain1.commandocloudlet.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: wordpress
          servicePort: 80
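The rule above only routes HTTP. To have the Ingress terminate HTTPS on port 443 as well, you would add a tls section; a hedged sketch, assuming a certificate already stored in a TLS secret named `subdomain1-tls` (a name chosen here for illustration):

```yaml
# Fragment to merge into the Ingress spec above. Assumes a
# kubernetes.io/tls secret was created beforehand, e.g.:
#   kubectl create secret tls subdomain1-tls --cert=tls.crt --key=tls.key
tls:
- hosts:
  - subdomain1.commandocloudlet.com
  secretName: subdomain1-tls
```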

Deploy Ingress

cd ~/kubeproj
kubectl create -f deployIngress.yml

We will create a NodePort service listening on ports 80/443 for our Ingress Controller. The default NodePort range is 30000-32767, which excludes ports 80 and 443, so edit the kube-apiserver.yaml manifest to overwrite the default range.

nano /etc/kubernetes/manifests/kube-apiserver.yaml

#add into spec -> containers -> command section
- --service-node-port-range=80-32767

Create the Nginx Ingress Controller deployment file deployIngressController.yml. The Nginx Ingress Controller routes traffic arriving on ports 80/443 to the service endpoints defined by your Ingress resources.

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      nodePort: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---

Deploy Ingress Controller

cd ~/kubeproj
kubectl create -f deployIngressController.yml

Deploy WordPress and MariaDB in Separate Pods

Store your MariaDB and WordPress passwords

kubectl create secret generic mariadb-pass --from-literal=password=YOUR_PASSWORD
kubectl create secret generic wordpress-pass --from-literal=password=YOUR_PASSWORD
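Equivalently, each secret can be declared in a manifest. A sketch for mariadb-pass, where the data value is the base64 encoding of the placeholder password "secret" (encode your real password with `printf '%s' PASSWORD | base64`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-pass
type: Opaque
data:
  password: c2VjcmV0   # base64 of the placeholder "secret"
```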

Create MariaDB Deployment File deployMariaDB.yml

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mariadb
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mariadb
  clusterIP: None
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wordpress-mariadb-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: wordpress
      tier: mariadb
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
    - podSelector:
        matchLabels:
          app: wordpress
          tier: frontend
    ports:
    - protocol: TCP
      port: 3306
  egress:
  - to:
    - ipBlock:
        cidr: 10.244.0.0/16
    ports:
    - protocol: TCP
      port: 3306
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mariadb
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
      tier: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mariadb
    spec:
      containers:
      - image: mariadb:latest
        name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-pass
              key: password
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-persistent-storage
        cephfs:
          monitors:
            - 10.142.10.1:6789
            - 10.142.10.2:6789
            - 10.142.10.3:6789
          user: admin
          secretRef:
            name: cephfs-pass
          #secretFile: "/etc/ceph/user.secret"
          readOnly: false
          path: "/"

Create WordPress Deployment File deployWordPress.yml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mariadb
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        cephfs:
          monitors:
          - 10.142.10.1:6789
          - 10.142.10.2:6789
          - 10.142.10.3:6789
          user: admin
          secretRef:
            name: cephfs-pass
          readOnly: false
          path: "/"

Deploy MariaDB and WordPress

cd ~/kubeproj
kubectl create -f deployMariaDB.yml
kubectl create -f deployWordPress.yml

You can now access your WordPress website via your sub-domain/public IP, e.g. http://subdomain1.commandocloudlet.com.

Helpful Links:
How does Kubernetes work?
Installing Kubeadm
Creating a Single Master Cluster on Kubernetes
Creating MySQL and WordPress with Persistent Volume
Ceph-deploy Installation
Setting up CephFS
Kube-deploy issues with CephFS
Install Calico for Policy and Flannel for Networking
Nginx Ingress Controller
