U1.46 Ubuntu Quick Start (QS): Kubernetes the hard way on premises. - chempkovsky/CS2WPF-and-CS2XAMARIN GitHub Wiki

Reading

We start with

  • A DHCP server already available in the virtual environment (for example, the DHCP server built into a router or modem)
  • Go to the page Ubuntu 20.04.3 LTS (Focal Fossa)
  • Download ubuntu-20.04.3-live-server-amd64.iso
  • Deploy five virtual machines with default settings (i.e. the OpenSSH server is enabled); a name-resolution sketch for the hosts below follows this list
    • u2004s01 192.168.100.41
    • u2004s02 192.168.100.42
    • u2004s03 192.168.100.43
    • u2004s04 192.168.100.44
    • u2004s05 192.168.100.45
  • Sudo-enabled User
    • yury
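If the DHCP environment does not already resolve these host names, one optional sketch (an assumption, not part of the original setup) is to map them in /etc/hosts on every VM; the hardwaycluster.local entry for the virtual IP is equally optional:

# Assumption: no local DNS for the VM names; run once on every virtual machine.
cat <<EOF | sudo tee -a /etc/hosts
192.168.100.41 u2004s01
192.168.100.42 u2004s02
192.168.100.43 u2004s03
192.168.100.44 u2004s04
192.168.100.45 u2004s05
192.168.100.50 hardwaycluster.local
EOF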

Roles of virtual machines

Cluster name

  • We plan to name the cluster as follows
    • hardwaycluster.local

Virtual IP

  • 192.168.100.50

Default Kubernetes API server port

  • 6443

Pod network CIDR

  • 10.32.0.0/16

Service subnet

  • 10.96.0.0/12

Create load balancer
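The load balancer configuration itself is not reproduced here; for the rest of this page only the virtual IP 192.168.100.50 and port 6443 matter, because every kubeconfig generated below points at https://192.168.100.50:6443. As a hedged illustration only (not the setup verified by the author), keepalived could float the virtual IP between the two future API-server hosts; since kube-apiserver is later started with --bind-address=0.0.0.0, it answers on the virtual IP as soon as the address lands on the host. The package choice and the interface name ens33 are assumptions.

# Assumption: run on u2004s01 and u2004s02; replace ens33 with the real NIC name.
sudo apt-get install -y keepalived

cat <<EOF | sudo tee /etc/keepalived/keepalived.conf
vrrp_instance K8S_API_VIP {
    state MASTER                 # BACKUP on the second host
    interface ens33              # assumption: adjust to the host's interface
    virtual_router_id 51
    priority 100                 # use a lower priority on the second host
    advert_int 1
    virtual_ipaddress {
        192.168.100.50/24
    }
}
EOF

sudo systemctl enable keepalived
sudo systemctl restart keepalived

A dedicated proxy (for example HAProxy on a separate VM) would serve the same purpose; only the virtual IP and port used below must stay the same.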

Prepare u2004s01, u2004s02, u2004s03: set a password for root

  • Step 1:
    • run the following commands to set the root password
sudo -i
passwd
  • Step 2:

    • edit the file with sudo nano /etc/ssh/sshd_config
      • set: PermitRootLogin yes
  • Step 3:

    • run the command
sudo service ssh restart

Prepare u2004s02 and u2004s03 to disable swapping

  • Note: There is no need to disable swapping on u2004s01, since the kubelet will not run on that VM.

Step 1: Check swapping status

yury@u2004s01:~$ free
              total        used        free      shared  buff/cache   available
Mem:        2998648     2236376      393964        1028      368308      606276
Swap:       2998268           0     2998268

yury@u2004s01:~$ sudo swapon --show
NAME      TYPE SIZE USED PRIO
/swap.img file 2.9G   0B   -2

Step 2: To disable the swap immediately run the command

sudo swapoff -a

Step 3: To disable the swap permanently

  • run the command
 sudo nano /etc/fstab
  • comment out the swap entry (alternatively, see the sed one-liner at the end of this step)
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-XzKkjySl3CbvTCZVfDRwjBQzDpNBEUbgdviK8oCFW8xx90qmcJNecIqMWed95Zri / ext4>
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/732e23ee-6ed0-4575-8104-943235f32cc2 /boot ext4 defaults 0 1
# /swap.img     none    swap    sw      0       0
  • reboot the host (or virtual machine)
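Alternatively, instead of editing the file by hand, the swap entry can be commented out with a one-liner (a sketch that assumes the entry references swap.img, as in the listing above):

# Comment out the swap.img line in /etc/fstab, then turn swap off for the running system.
sudo sed -i '/swap.img/ s/^/#/' /etc/fstab
sudo swapoff -a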

Installing the Client Tools

  • Note: The client tools should only be installed on the virtual machines that will be used to administer the cluster.
    • We decided that only u2004s01 would be used to administer the cluster. Thus, cfssl and kubectl should be installed only on the u2004s01 machine.

Install cfssl

sudo apt-get update
sudo apt-get install -y golang-cfssl
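To confirm the toolkit is on the PATH (the cfssljson helper used below is expected to ship in the same Ubuntu package):

# Print the cfssl release and runtime information.
cfssl version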

Install kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
yury@u2004s01:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Working folders

  • for u2004s01
    • create the folder in which all the cluster files will be generated
mkdir gencfg
  • for u2004s01, u2004s02, u2004s03
    • create the folder for the files needed on each virtual machine
mkdir kbcnf

Generate certificates

make gencfg the current folder

cd gencfg

Certificate Authority

  • create file
nano ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
  • create file
nano ca-csr.json
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
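cfssl writes ca.pem, ca-key.pem and ca.csr into the current folder; a quick way to inspect the subject and validity window of the new CA certificate:

# Show the CA subject and its notBefore/notAfter dates.
openssl x509 -in ca.pem -noout -subject -dates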

Admin Client Certificate

  • create file
nano admin-csr.json
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Kubelet Client Certificate for u2004s02

  • create file
nano kubelet-u2004s02-csr.json
{
  "CN": "system:node:u2004s02",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=u2004s02,127.0.0.1,192.168.100.42 -profile=kubernetes kubelet-u2004s02-csr.json | cfssljson -bare kubelet-u2004s02

Kubelet Client Certificate for u2004s03

  • create file
nano kubelet-u2004s03-csr.json
{
  "CN": "system:node:u2004s03",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=u2004s03,127.0.0.1,192.168.100.43 -profile=kubernetes kubelet-u2004s03-csr.json | cfssljson -bare kubelet-u2004s03

Controller Manager Client Certificate

  • create file
nano kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Kube Proxy Client Certificate

  • create file
nano kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Scheduler Client Certificate

  • create file
nano kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

API Server Certificate

  • create file
nano kubernetes-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=10.32.0.1,10.96.0.1,192.168.100.41,192.168.100.42,192.168.100.50,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,kubernetes.svc.cluster.local -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
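Because clients reach the API server through the virtual IP as well as the node addresses and the in-cluster service names, it is worth confirming that all of them ended up in the certificate:

# List the Subject Alternative Names baked into the API server certificate.
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"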

Service Account Key Pair

  • create file
nano service-account-csr.json
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
  • generate certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes service-account-csr.json | cfssljson -bare service-account

Generate Configuration Files

make gencfg the current folder

cd gencfg

kubelet Configuration File for u2004s02

  • generate config
kubectl config set-cluster hardwaycluster.local --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.100.50:6443 --kubeconfig=kubelet_u2004s02.kubeconfig
kubectl config set-credentials system:node:u2004s02 --client-certificate=kubelet-u2004s02.pem --client-key=kubelet-u2004s02-key.pem --embed-certs=true --kubeconfig=kubelet_u2004s02.kubeconfig
kubectl config set-context default --cluster=hardwaycluster.local --user=system:node:u2004s02 --kubeconfig=kubelet_u2004s02.kubeconfig
kubectl config use-context default --kubeconfig=kubelet_u2004s02.kubeconfig

kubelet Configuration File for u2004s03

  • generate config
kubectl config set-cluster hardwaycluster.local --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.100.50:6443 --kubeconfig=kubelet_u2004s03.kubeconfig
kubectl config set-credentials system:node:u2004s03 --client-certificate=kubelet-u2004s03.pem --client-key=kubelet-u2004s03-key.pem --embed-certs=true --kubeconfig=kubelet_u2004s03.kubeconfig
kubectl config set-context default --cluster=hardwaycluster.local --user=system:node:u2004s03 --kubeconfig=kubelet_u2004s03.kubeconfig
kubectl config use-context default --kubeconfig=kubelet_u2004s03.kubeconfig

kube-proxy Configuration

  • generate config
kubectl config set-cluster hardwaycluster.local --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.100.50:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=hardwaycluster.local --user=system:kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

kube-controller-manager Configuration

  • generate config
kubectl config set-cluster hardwaycluster.local --certificate-authority=ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default --cluster=hardwaycluster.local --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

kube-scheduler Configuration

  • generate config
kubectl config set-cluster hardwaycluster.local --certificate-authority=ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default --cluster=hardwaycluster.local --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

admin Configuration

  • generate config
kubectl config set-cluster hardwaycluster.local --certificate-authority=ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
kubectl config set-context default --cluster=hardwaycluster.local --user=admin --kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
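Any of the generated kubeconfig files can be sanity-checked before it is distributed, for example admin.kubeconfig (the same command works for the kubelet and kube-proxy files):

# Show the cluster, user and context recorded in the kubeconfig;
# embedded certificates are printed as DATA+OMITTED / REDACTED.
kubectl config view --kubeconfig=admin.kubeconfig --minify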

Generating the Data Encryption Config and Key

  • for u2004s01

make gencfg the current folder

cd ~/gencfg

run the commands

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
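The aescbc provider expects a 32-byte key; decoding the generated value back from base64 is a quick check before the file is distributed:

# Should print 32 for a valid aescbc key.
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c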

Distribute Certificates

  • for u2004s01

make gencfg the current folder

cd gencfg

copy files

scp ca.pem kubelet-u2004s02-key.pem  kubelet-u2004s02.pem [email protected]:~/kbcnf/
scp ca.pem kubelet-u2004s03-key.pem  kubelet-u2004s03.pem [email protected]:~/kbcnf/
cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem ~/kbcnf/
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem [email protected]:~/kbcnf/

Distribute Configuration Files

  • for u2004s01

make gencfg the current folder

cd gencfg

copy files

scp  kubelet_u2004s02.kubeconfig kube-proxy.kubeconfig [email protected]:~/kbcnf/
scp  kubelet_u2004s03.kubeconfig kube-proxy.kubeconfig [email protected]:~/kbcnf/
cp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ~/kbcnf/
scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig [email protected]:~/kbcnf/

Distribute Encryption Config File

  • for u2004s01

make gencfg the current folder

cd ~/gencfg

copy files

cp encryption-config.yaml ~/kbcnf/
scp encryption-config.yaml [email protected]:~/kbcnf/

Bootstrapping the etcd Cluster

Install Binaries

  • for u2004s01 and u2004s02 run the commands:
wget -q --show-progress --https-only --timestamping "https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz"
tar -xvf  etcd-v3.5.0-linux-amd64.tar.gz
sudo mv etcd-v3.5.0-linux-amd64/etcd* /usr/local/bin/

yury@u2004s01:~$ etcd --version
etcd Version: 3.5.0
Git SHA: 946a5a6f2
Go Version: go1.16.3
Go OS/Arch: linux/amd64

sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
cd ~/kbcnf
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

for u2004s01

sudo nano /etc/systemd/system/etcd.service
insert the following content for u2004s01
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name u2004s01 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.100.41:2380 \
  --listen-peer-urls https://192.168.100.41:2380 \
  --listen-client-urls https://192.168.100.41:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.41:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster u2004s01=https://192.168.100.41:2380,u2004s02=https://192.168.100.42:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

for u2004s02

sudo nano /etc/systemd/system/etcd.service
insert the following content for u2004s02
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name u2004s02 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://192.168.100.42:2380 \
  --listen-peer-urls https://192.168.100.42:2380 \
  --listen-client-urls https://192.168.100.42:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.42:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster u2004s01=https://192.168.100.41:2380,u2004s02=https://192.168.100.42:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start the etcd Server

  • for u2004s01 and u2004s02 run the commands
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Test etcd cluster

  • for u2004s01 or u2004s02 run the command
yury@u2004s02:~/kbcnf$ sudo etcdctl member list --insecure-skip-tls-verify --cert /etc/etcd/kubernetes.pem --key /etc/etcd/kubernetes-key.pem
bc674b955f5d44b5, started, u2004s01, https://192.168.100.41:2380, https://192.168.100.41:2379, false
cef94a65fab26cf2, started, u2004s02, https://192.168.100.42:2380, https://192.168.100.42:2379, false
  • or run the command
yury@u2004s02:~/kbcnf$ sudo etcdctl member list --cacert=/etc/etcd/ca.pem --cert /etc/etcd/kubernetes.pem --key /etc/etcd/kubernetes-key.pem
bc674b955f5d44b5, started, u2004s01, https://192.168.100.41:2380, https://192.168.100.41:2379, false
cef94a65fab26cf2, started, u2004s02, https://192.168.100.42:2380, https://192.168.100.42:2379, false
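A member reported as started is not necessarily serving requests; the endpoint health subcommand checks that each member actually answers:

sudo etcdctl endpoint health \
  --endpoints=https://192.168.100.41:2379,https://192.168.100.42:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem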

Bootstrapping Control Plane Components

Install Control Plane Components Binaries

  • for u2004s01 and u2004s02 run the commands
sudo mkdir -p /etc/kubernetes/config

wget -q --show-progress --https-only --timestamping "https://dl.k8s.io/v1.23.1/kubernetes-server-linux-amd64.tar.gz"
tar -xvf  kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Configure Kubernetes API Server

  • for u2004s01 and u2004s02 run the commands
sudo mkdir -p /var/lib/kubernetes/
cd ~/kbcnf
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml /var/lib/kubernetes/
  • for u2004s01
sudo nano /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=192.168.100.41 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://192.168.100.41:2379,https://192.168.100.42:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --runtime-config='api/all=true' \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-account-issuer=https://192.168.100.50:6443 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • for u2004s02
sudo nano /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=192.168.100.42 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://192.168.100.41:2379,https://192.168.100.42:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --runtime-config='api/all=true' \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-account-issuer=https://192.168.100.50:6443 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Configure the Kubernetes Controller Manager

  • for u2004s01 and u2004s02 run the commands
cd ~/kbcnf
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
  • for u2004s01 and u2004s02
sudo nano /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --bind-address=0.0.0.0 \
  --cluster-cidr=10.32.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --leader-elect=true \
  --root-ca-file=/var/lib/kubernetes/ca.pem \
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-cluster-ip-range=10.96.0.0/12 \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Configure the Kubernetes Scheduler

  • for u2004s01 and u2004s02
cd ~/kbcnf
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
  • for u2004s01 and u2004s02
sudo nano /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
  • for u2004s01 and u2004s02
sudo nano /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start the Controller Services

  • for u2004s01 and u2004s02
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
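Before running the verification below, it can be useful to confirm that all three services stayed up (kube-apiserver needs a few seconds and restarts automatically on failure):

# Check the unit states and, if something looks wrong, the recent API server log.
sudo systemctl --no-pager status kube-apiserver kube-controller-manager kube-scheduler
sudo journalctl -u kube-apiserver --no-pager -n 30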

Enable HTTP Health Checks

  • for u2004s01 and u2004s02
sudo apt-get update
sudo apt-get install -y nginx
cd ~/kbcnf

cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

sudo mv kubernetes.default.svc.cluster.local /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
sudo systemctl restart nginx
sudo systemctl enable nginx

Verification

  • for u2004s01 and u2004s02
yury@u2004s01:~$ cd ~/kbcnf
yury@u2004s01:~/kbcnf$ kubectl cluster-info --kubeconfig admin.kubeconfig
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

yury@u2004s01:~/kbcnf$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 22 Dec 2021 13:53:28 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Audit-Id: 31842d38-7b98-48c8-bf95-c502d6e4a8cc
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff

ok
yury@u2004s02:~$ cd ~/kbcnf
yury@u2004s02:~/kbcnf$ kubectl cluster-info --kubeconfig admin.kubeconfig
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


yury@u2004s02:~/kbcnf$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 22 Dec 2021 13:44:04 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Audit-Id: 0cbafd67-9dbe-464f-9d62-cc632c4df0cf
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff

ok

RBAC for Kubelet Authorization

  • for u2004s01 or u2004s02 (but not for both)
cd ~/kbcnf

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
  • for u2004s01 or u2004s02 (but not for both)
cd ~/kbcnf

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
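To confirm that the role and the binding were accepted by the API server:

kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig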

Bootstrapping Node Components

Install the socat, conntrack and ipset packages

  • for u2004s02 and u2004s03
sudo apt-get update
sudo apt-get -y install socat conntrack ipset

Install cri-tools binary

  • go to the releases page and pick the latest release
    • in our case the latest release is v1.22.0
  • get the link you need
    • in our case it is crictl-v1.22.0-linux-amd64.tar.gz
  • for u2004s02 and u2004s03
wget -q --show-progress --https-only --timestamping https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz
tar -xvf  crictl-v1.22.0-linux-amd64.tar.gz
chmod +x crictl
sudo mv crictl /usr/local/bin/
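By default crictl probes several runtime sockets and prints warnings; a small optional config (written to crictl's default path /etc/crictl.yaml) points it at the containerd socket that is configured later in this section:

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
EOF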

Install runc binary

  • go to the releases page and pick the latest release
    • in our case the latest release is runc 1.0.3
  • get the link you need
    • in our case it is runc.amd64
  • for u2004s02 and u2004s03
wget -q --show-progress --https-only --timestamping https://github.com/opencontainers/runc/releases/download/v1.0.3/runc.amd64
sudo mv runc.amd64 runc
chmod +x runc 
sudo mv runc /usr/local/bin/

Install CNI Plugins binary

  • go to the releases page and pick the latest release
    • in our case the latest release is v1.0.1
  • get the link you need
    • in our case it is cni-plugins-linux-amd64-v1.0.1.tgz
  • for u2004s02 and u2004s03
sudo mkdir -p /etc/cni/net.d /opt/cni/bin
wget -q --show-progress --https-only --timestamping https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz
sudo tar -xvf cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin/

Install containerd binary

  • go to the releases page and pick the latest release
    • in our case the latest release is v1.5.8
  • get the link you need
    • in our case it is containerd-1.5.8-linux-amd64.tar.gz
  • Note: the containerd releases also provide a cri-containerd-cni bundle that packages containerd together with runc and the CNI plugins; here we install those components separately
  • for u2004s02 and u2004s03
mkdir containerd
wget -q --show-progress --https-only --timestamping https://github.com/containerd/containerd/releases/download/v1.5.8/containerd-1.5.8-linux-amd64.tar.gz
tar -xvf containerd-1.5.8-linux-amd64.tar.gz -C containerd
sudo mv containerd/bin/* /bin/

Create folders

  • for u2004s02 and u2004s03
sudo mkdir -p \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

Install the kubectl, kube-proxy and kubelet binaries

  • for u2004s02 and u2004s03
wget -q --show-progress --https-only --timestamping "https://dl.k8s.io/v1.23.1/kubernetes-server-linux-amd64.tar.gz"
tar -xvf  kubernetes-server-linux-amd64.tar.gz
  • for u2004s02
cd ~/kubernetes/server/bin/
chmod +x kube-proxy kubelet 
sudo mv kube-proxy kubelet /usr/local/bin/
  • for u2004s03
cd ~/kubernetes/server/bin/
chmod +x kubectl kube-proxy kubelet 
sudo mv kubectl kube-proxy kubelet /usr/local/bin/

Configure CNI

  • for u2004s02 and u2004s03
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.4.0",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "10.32.0.0/16"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
  • for u2004s02 and u2004s03
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.4.0",
    "name": "lo",
    "type": "loopback"
}
EOF

Configure containerd

  • for u2004s02 and u2004s03
sudo mkdir -p /etc/containerd/
  • for u2004s02 and u2004s03
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
  • for u2004s02 and u2004s03
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubelet for u2004s02

  • for u2004s02
    • sudo mv ca.pem /var/lib/kubernetes/ has already been done on u2004s02 (in the control-plane section)
cd ~/kbcnf
sudo mv kubelet-u2004s02-key.pem kubelet-u2004s02.pem /var/lib/kubelet/
sudo mv kubelet_u2004s02.kubeconfig /var/lib/kubelet/kubeconfig
  • for u2004s02
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.32.0.0/16"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet-u2004s02.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-u2004s02-key.pem"
EOF
  • for u2004s02
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubelet for u2004s03

  • for u2004s03
cd ~/kbcnf
sudo mv ca.pem /var/lib/kubernetes/
sudo mv kubelet-u2004s03-key.pem kubelet-u2004s03.pem /var/lib/kubelet/
sudo mv kubelet_u2004s03.kubeconfig /var/lib/kubelet/kubeconfig
  • for u2004s03
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.32.0.0/16"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet-u2004s03.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-u2004s03-key.pem"
EOF
  • for u2004s03
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Proxy

  • for u2004s02 and u2004s03
cd ~/kbcnf
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
  • for u2004s02 and u2004s03
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.32.0.0/16"
EOF
  • for u2004s02 and u2004s03
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start Services

  • for u2004s02 and u2004s03
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy

Verification

  • for u2004s01 or u2004s02
cd ~/kbcnf
kubectl get nodes --kubeconfig admin.kubeconfig
yury@u2004s01:~/kbcnf$ kubectl get nodes --kubeconfig admin.kubeconfig
NAME       STATUS     ROLES    AGE     VERSION
u2004s02   NotReady   <none>   4m35s   v1.23.1
u2004s03   Ready      <none>   118s    v1.23.1
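When a node stays NotReady, the kubelet and containerd logs on that node usually show what is missing, and kubectl describe node reports the failing condition; a sketch for u2004s02:

# On u2004s01 or u2004s02 (where admin.kubeconfig lives): show the node conditions.
kubectl describe node u2004s02 --kubeconfig admin.kubeconfig

# On u2004s02 itself: check the node services and their recent logs.
sudo systemctl --no-pager status containerd kubelet kube-proxy
sudo journalctl -u kubelet --no-pager -n 50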
  • Troubleshooting:
    • We went through the deployment steps while writing this article in parallel.
    • As a result, one of the steps was missed on u2004s02.
  • After repeating all of the "Bootstrapping Node Components" steps for u2004s02:
yury@u2004s02:~/kbcnf$ kubectl get nodes --kubeconfig admin.kubeconfig
NAME       STATUS   ROLES    AGE   VERSION
u2004s02   Ready    <none>   29m   v1.23.1
u2004s03   Ready    <none>   26m   v1.23.1


yury@u2004s01:~/kbcnf$ kubectl get nodes --kubeconfig admin.kubeconfig
NAME       STATUS   ROLES    AGE   VERSION
u2004s02   Ready    <none>   33m   v1.23.1
u2004s03   Ready    <none>   30m   v1.23.1

Deploying the DNS Cluster Add-on
