U1.48 Ubuntu Quick Start (QS): Kubernetes with kubeadm and Docker on premises. HA cluster.


We start with

  • Pre-installed DHCP in the virtual environment (for example, the DHCP server built into a router/modem)
  • Go to the Ubuntu 20.04.3 LTS (Focal Fossa) download page
  • Download ubuntu-20.04.3-live-server-amd64.iso
  • Deploy five virtual machines with default settings (i.e. the OpenSSH server is ON)
    • u2004s01 192.168.100.41
    • u2004s02 192.168.100.42
    • u2004s03 192.168.100.43
    • u2004s04 192.168.100.44
    • u2004s05 192.168.100.45
  • Sudo-enabled User
    • yury

Roles of virtual machines

  • u2004s01, u2004s02: control-plane nodes
  • u2004s03: worker node
  • u2004s04, u2004s05: load balancers (haproxy + keepalived)

Cluster name

  • We plan to name the cluster as follows (the name is passed to kubeadm init as --service-dns-domain)
    • hardwaycluster.local

Virtual IP

  • 192.168.100.50 (the floating address managed by keepalived on u2004s04/u2004s05)

Load balancer with haproxy and keepalived

Install the haproxy and keepalived packages on u2004s04 and u2004s05

  • for u2004s04 and u2004s05
sudo apt-get update
sudo apt-get install -y haproxy keepalived 

Configure u2004s04

  • for u2004s04
sudo nano /etc/keepalived/keepalived.conf
the content of /etc/keepalived/keepalived.conf:
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.100.50
    }
    track_script {
        check_apiserver
    }
}
  • for u2004s04
sudo nano /etc/keepalived/check_apiserver.sh
the content of /etc/keepalived/check_apiserver.sh:
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
if ip addr | grep -q 192.168.100.50; then
    curl --silent --max-time 2 --insecure https://192.168.100.50:6443/ -o /dev/null || errorExit "Error GET https://192.168.100.50:6443/"
fi
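  • keepalived executes the check script directly, so it should be made executable; a quick manual test is possible as well; until an API server answers on port 6443 a non-zero exit code is expected (this extra check is not part of the original steps):
# keepalived runs the health-check script directly, so it must be executable
sudo chmod +x /etc/keepalived/check_apiserver.sh
# manual test: a non-zero exit code is expected while nothing answers on :6443 yet
sudo /etc/keepalived/check_apiserver.sh; echo "exit code: $?"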
  • for u2004s04
sudo nano /etc/haproxy/haproxy.cfg
the content of /etc/haproxy/haproxy.cfg:
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s


frontend apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend apiserver

backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
    server u2004s01 192.168.100.41:6443 check
    server u2004s02 192.168.100.42:6443 check
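  • before enabling the services in the next step, the configuration can be syntax-checked; haproxy's -c flag parses the file without starting the proxy (an optional check, not in the original steps):
# parse /etc/haproxy/haproxy.cfg and report errors without starting haproxy
sudo haproxy -c -f /etc/haproxy/haproxy.cfg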
  • for u2004s04
sudo systemctl enable haproxy --now
sudo systemctl enable keepalived --now
  • on u2004s04, ip a now shows the additional virtual IP 192.168.100.50/32
the response:
yury@u2004s04:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:64:03:35 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.44/24 brd 192.168.100.255 scope global dynamic eth0
       valid_lft 255684sec preferred_lft 255684sec
    inet 192.168.100.50/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe64:335/64 scope link
       valid_lft forever preferred_lft forever

Configure u2004s05

  • for u2004s05
sudo nano /etc/keepalived/keepalived.conf
the content of /etc/keepalived/keepalived.conf:
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.100.50
    }
    track_script {
        check_apiserver
    }
}
  • for u2004s05
sudo nano /etc/keepalived/check_apiserver.sh
the content of /etc/keepalived/check_apiserver.sh:
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
if ip addr | grep -q 192.168.100.50; then
    curl --silent --max-time 2 --insecure https://192.168.100.50:6443/ -o /dev/null || errorExit "Error GET https://192.168.100.50:6443/"
fi
  • for u2004s05
sudo nano /etc/haproxy/haproxy.cfg
the content of /etc/haproxy/haproxy.cfg:
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s


frontend apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend apiserver

backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
    server u2004s01 192.168.100.41:6443 check
    server u2004s02 192.168.100.42:6443 check
  • for u2004s05
sudo systemctl enable haproxy --now
sudo systemctl enable keepalived --now
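  • an optional sanity check on either load balancer is to confirm that haproxy is listening on the front-end port (sudo lets ss show process names):
# confirm that haproxy has bound *:6443 on this node
sudo ss -tlnp | grep 6443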
  • on u2004s05, ip a does not show the virtual IP 192.168.100.50/32 (u2004s04 is the MASTER and holds it)
the response:
yury@u2004s05:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:64:03:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.45/24 brd 192.168.100.255 scope global dynamic eth0
       valid_lft 255430sec preferred_lft 255430sec
    inet6 fe80::215:5dff:fe64:336/64 scope link
       valid_lft forever preferred_lft forever

Simple test

  • after turning off u2004s04
    • for u2004s05: ip a now returns the additional IP 192.168.100.50/32 (keepalived has promoted the BACKUP node)
the response:
yury@u2004s05:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:64:03:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.45/24 brd 192.168.100.255 scope global dynamic eth0
       valid_lft 255269sec preferred_lft 255269sec
    inet 192.168.100.50/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe64:336/64 scope link
       valid_lft forever preferred_lft forever
  • after restarting u2004s04
    • for u2004s05: ip a no longer returns the IP 192.168.100.50/32 (the virtual IP moves back to u2004s04, which has the higher priority); a way to watch this failover live is sketched below
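  • a hedged way to watch the failover live from u2004s05 while u2004s04 is stopped and started (assuming keepalived logs go to the systemd journal):
# follow keepalived's state transitions (BACKUP -> MASTER and back)
sudo journalctl -u keepalived -f
# in a second terminal, refresh the eth0 addresses every second
watch -n1 "ip -4 addr show eth0"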

Prepare Control Plane virtual machines
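  • Docker, kubeadm, kubelet and kubectl (v1.23.x) are assumed to be already installed on u2004s01, u2004s02 and u2004s03; those installation steps are not repeated here. A minimal sketch of the usual kubeadm prerequisites on each of these three nodes (standard settings, not taken verbatim from this article):
# swap must be off for the kubelet: disable it now and comment it out of /etc/fstab
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# kernel modules and sysctl settings needed for pod networking
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system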

kubeadm init

  • for u2004s01
sudo kubeadm init --service-dns-domain=hardwaycluster.local --pod-network-cidr=10.32.0.0/16 --control-plane-endpoint=192.168.100.50:6443
the response:
yury@u2004s01:~$ sudo kubeadm init --service-dns-domain=hardwaycluster.local --pod-network-cidr=10.32.0.0/16 --control-plane-endpoint=192.168.100.50:6443
[sudo] password for yury:
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.hardwaycluster.local u2004s01] and IPs [10.96.0.1 192.168.100.41 192.168.100.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.643122 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wov12t.2vl0rsbcjwf6glnw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.100.50:6443 --token wov12t.2vl0rsbcjwf6glnw \
        --discovery-token-ca-cert-hash sha256:ab33a0d814fa81e926742f398bb985635b07f15b702070ca854ab625798e70d4 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.50:6443 --token wov12t.2vl0rsbcjwf6glnw \
        --discovery-token-ca-cert-hash sha256:ab33a0d814fa81e926742f398bb985635b07f15b702070ca854ab625798e70d4

kubeadm join control plane

  • for u2004s02
sudo kubeadm join 192.168.100.50:6443 --token wov12t.2vl0rsbcjwf6glnw \
        --discovery-token-ca-cert-hash sha256:ab33a0d814fa81e926742f398bb985635b07f15b702070ca854ab625798e70d4 \
        --control-plane
  • there are errors in the response:
    • we forgot to pass the --upload-certs flag to kubeadm init
    • we forgot to install a Pod network add-on (Calico in our case)
the response:
yury@u2004s02:~$ sudo kubeadm join 192.168.100.50:6443 --token wov12t.2vl0rsbcjwf6glnw         --discovery-token-ca-cert-hash sha256:ab33a0d814fa81e926742f398bb985635b07f15b702070ca854ab625798e70d4         --control-plane
[sudo] password for yury:
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1227 20:00:07.034676    7294 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.

[failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn't load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher
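  • before the second attempt, the state left behind by the first kubeadm init on u2004s01 and by the failed join on u2004s02 presumably has to be wiped; a typical cleanup (not shown in the original steps) on each affected node:
# remove the kubeadm-generated state (/etc/kubernetes, local etcd data, kubelet files) on this node
sudo kubeadm reset -f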

kubeadm init a second time

  • check first that the load-balanced endpoint answers on port 6443
nc -v 192.168.100.50 6443
  • for u2004s01
sudo kubeadm init --service-dns-domain=hardwaycluster.local --pod-network-cidr=10.32.0.0/16 --control-plane-endpoint=192.168.100.50:6443 --upload-certs
the response:
yury@u2004s01:~$ sudo kubeadm init --service-dns-domain=hardwaycluster.local --pod-network-cidr=10.32.0.0/16 --control-plane-endpoint=192.168.100.50:6443 --upload-certs
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.hardwaycluster.local u2004s01] and IPs [10.96.0.1 192.168.100.41 192.168.100.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 53.539222 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
bd4f443ea3678e558ee2a7b9ed730dd5a3ab95cfbba0b0a5ec7d0a8393d04034
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ramicz.oscmgq42bhb1fst2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.100.50:6443 --token ramicz.oscmgq42bhb1fst2 \
        --discovery-token-ca-cert-hash sha256:3f4e43e464c120637d5a591495c65ab66c5108bc754ae730293f868c77b93eb8 \
        --control-plane --certificate-key bd4f443ea3678e558ee2a7b9ed730dd5a3ab95cfbba0b0a5ec7d0a8393d04034

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.50:6443 --token ramicz.oscmgq42bhb1fst2 \
        --discovery-token-ca-cert-hash sha256:3f4e43e464c120637d5a591495c65ab66c5108bc754ae730293f868c77b93eb8

  • for u2004s01
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • for u2004s01
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
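  • the Calico and CoreDNS pods need a short time to become Ready; one way to watch them (press Ctrl-C to stop):
# watch the kube-system pods until calico-node, calico-kube-controllers and coredns are Running
kubectl get pods -n kube-system -w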
  • for u2004s01
yury@u2004s01:~$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
u2004s01   Ready    control-plane,master   5m51s   v1.23.1   192.168.100.41   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   docker://20.10.12

kubeadm join control plane a second time

  • check from u2004s02 that the load-balanced endpoint answers on port 6443
nc -v 192.168.100.50 6443
  • for u2004s02
sudo kubeadm join 192.168.100.50:6443 --token ramicz.oscmgq42bhb1fst2 \
    --discovery-token-ca-cert-hash sha256:3f4e43e464c120637d5a591495c65ab66c5108bc754ae730293f868c77b93eb8 \
     --control-plane --certificate-key bd4f443ea3678e558ee2a7b9ed730dd5a3ab95cfbba0b0a5ec7d0a8393d04034
the response:
yury@u2004s02:~$ sudo kubeadm join 192.168.100.50:6443 --token ramicz.oscmgq42bhb1fst2 \
>     --discovery-token-ca-cert-hash sha256:3f4e43e464c120637d5a591495c65ab66c5108bc754ae730293f868c77b93eb8 \
>      --control-plane --certificate-key bd4f443ea3678e558ee2a7b9ed730dd5a3ab95cfbba0b0a5ec7d0a8393d04034
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1228 10:28:28.122321    5726 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.hardwaycluster.local u2004s02] and IPs [10.96.0.1 192.168.100.42 192.168.100.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost u2004s02] and IPs [192.168.100.42 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost u2004s02] and IPs [192.168.100.42 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node u2004s02 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node u2004s02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
  • for u2004s02
yury@u2004s02:~$  mkdir -p $HOME/.kube
yury@u2004s02:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
yury@u2004s02:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
yury@u2004s02:~$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
u2004s01   Ready    control-plane,master   31m    v1.23.1   192.168.100.41   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   docker://20.10.12
u2004s02   Ready    control-plane,master   3m8s   v1.23.1   192.168.100.42   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   docker://20.10.12

kubeadm join worker node

  • for u2004s03
sudo kubeadm join 192.168.100.50:6443 --token ramicz.oscmgq42bhb1fst2 \
        --discovery-token-ca-cert-hash sha256:3f4e43e464c120637d5a591495c65ab66c5108bc754ae730293f868c77b93eb8
  • for u2004s01
yury@u2004s01:~$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
u2004s01   Ready    control-plane,master   50m   v1.23.1   192.168.100.41   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   docker://20.10.12
u2004s02   Ready    control-plane,master   21m   v1.23.1   192.168.100.42   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   docker://20.10.12
u2004s03   Ready    <none>                 65s   v1.23.1   192.168.100.43   <none>        Ubuntu 20.04.3 LTS   5.4.0-91-generic   docker://20.10.12
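  • the worker node shows ROLES as <none>; if a friendlier output is wanted, a role label can be added by hand (purely cosmetic, not part of the original steps):
# optional: label u2004s03 so that "kubectl get nodes" shows it as a worker
kubectl label node u2004s03 node-role.kubernetes.io/worker=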

Certificates

yury@u2004s01:~$ cd /etc/kubernetes/pki
yury@u2004s01:/etc/kubernetes/pki$ ls -l
total 60
-rw-r--r-- 1 root root 1289 Dec 30 10:22 apiserver.crt
-rw-r--r-- 1 root root 1155 Dec 30 10:22 apiserver-etcd-client.crt
-rw------- 1 root root 1675 Dec 30 10:22 apiserver-etcd-client.key
-rw------- 1 root root 1679 Dec 30 10:22 apiserver.key
-rw-r--r-- 1 root root 1164 Dec 30 10:22 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Dec 30 10:22 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1099 Dec 30 10:22 ca.crt
-rw------- 1 root root 1679 Dec 30 10:22 ca.key
drwxr-xr-x 2 root root 4096 Dec 30 10:22 etcd
-rw-r--r-- 1 root root 1115 Dec 30 10:22 front-proxy-ca.crt
-rw------- 1 root root 1679 Dec 30 10:22 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Dec 30 10:22 front-proxy-client.crt
-rw------- 1 root root 1675 Dec 30 10:22 front-proxy-client.key
-rw------- 1 root root 1679 Dec 30 10:22 sa.key
-rw------- 1 root root  451 Dec 30 10:22 sa.pub

View ca.crt

  • for u2004s01
openssl x509 -noout -text -in ca.crt
yury@u2004s01:/etc/kubernetes/pki$ openssl x509 -noout -text -in ca.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 30 10:22:21 2021 GMT
            Not After : Dec 28 10:22:21 2031 GMT
        Subject: CN = kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:fd:c4:d1:08:c9:44:dd:97:ac:ef:d7:ae:74:46:
                    ... 
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                3C:61:48:36:9A:E7:3A:1B:02:11:66:CA:BC:D2:C3:7D:FE:64:0E:B6
            X509v3 Subject Alternative Name:
                DNS:kubernetes
    Signature Algorithm: sha256WithRSAEncryption
         a1:e4:cf:86:74:47:b7:f5:4b:7a:18:51:17:5f:a8:8f:99:49:
         ... 
  • Note: Issuer == Subject, i.e. the cluster CA certificate is self-signed (and valid for 10 years)

View front-proxy-ca.crt

  • for u2004s01
yury@u2004s01:/etc/kubernetes/pki$ openssl x509 -noout -text -in front-proxy-ca.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = front-proxy-ca
        Validity
            Not Before: Dec 30 10:22:23 2021 GMT
            Not After : Dec 28 10:22:23 2031 GMT
        Subject: CN = front-proxy-ca
        ...

View apiserver.crt

  • for u2004s01
yury@u2004s01:/etc/kubernetes/pki$ openssl x509 -noout -text -in apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2102938930753048706 (0x1d2f23d0cfe01082)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 30 10:22:21 2021 GMT
            Not After : Dec 30 10:22:22 2022 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                ...
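  • for the HA setup the important part of apiserver.crt is the Subject Alternative Name list: it must contain the virtual IP 192.168.100.50 passed via --control-plane-endpoint; with the OpenSSL 1.1.1 shipped in Ubuntu 20.04 that extension can be printed on its own:
# print only the SAN extension; expect the node IP, the service IP 10.96.0.1 and the virtual IP 192.168.100.50
openssl x509 -noout -ext subjectAltName -in apiserver.crt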

Automatic certificate renewal

  • kubeadm renews all control-plane certificates automatically whenever the cluster is upgraded with kubeadm upgrade apply, so a cluster that is upgraded at least once a year normally needs no manual renewal

Manual certificate renewal
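  • for clusters that are not upgraded regularly, the certificates (most of them valid for one year, as apiserver.crt above shows) can be renewed by hand with the kubeadm certs subcommands; after renewal the control-plane static pods (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) have to be restarted to pick up the new files:
# list the expiration date of every kubeadm-managed certificate
sudo kubeadm certs check-expiration
# renew all certificates on this control-plane node (repeat on u2004s02)
sudo kubeadm certs renew all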

Firewall

  • The firewall is left off on the cluster machines, so no ports need to be opened explicitly
sudo ufw disable
yury@u2004s01:~$ sudo ufw  status
Status: inactive