U1.47 Ubuntu Quick Start (QS): Kubernetes with kubeadm and Docker on premises. Single control plane.
- read the article Install Tools
- read the article Creating a cluster with kubeadm
- a DHCP server must already be available in the virtual environment (for example, a hardware DHCP server built into a modem or router)
- Go to the page Ubuntu 20.04.3 LTS (Focal Fossa)
- Download ubuntu-20.04.3-live-server-amd64.iso
- Deploy three virtual machines with default settings (i.e. OpenSSH is enabled)
- u2004s01 192.168.100.41
- u2004s02 192.168.100.42
- u2004s03 192.168.100.43
- Sudo-enabled user
- yury
- We plan to use the virtual machines as follows
- u2004s01: control-plane node
- u2004s02: worker node
- u2004s03: worker node
- We plan to name the cluster as follows
- hardwaycluster.local
-
Step 1:
- run the commands to set the password for root-user
sudo -i
passwd
-
Step 2:
- with sudo nano /etc/ssh/sshd_config modify the file
- set: PermitRootLogin yes
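- optionally verify the effective setting; sshd -T prints the parsed server configuration with lowercase keys, so the expected output is "permitrootlogin yes":
sudo sshd -T | grep -i permitrootlogin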
-
Step 3:
- run the command
sudo service ssh restart
- Kubernetes requires swap to be disabled, so check the swap state on every node
yury@u2004s02:~$ free
              total        used        free      shared  buff/cache   available
Mem:        2998648     2236376      393964        1028      368308      606276
Swap:       2998268           0     2998268
yury@u2004s02:~$ sudo swapon --show
NAME      TYPE SIZE USED PRIO
/swap.img file 2.9G   0B   -2
sudo swapoff -a
- run the command
sudo nano /etc/fstab
- comment out the swap entry (a non-interactive alternative follows the listing below)
- the resulting /etc/fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-XzKkjySl3CbvTCZVfDRwjBQzDpNBEUbgdviK8oCFW8xx90qmcJNecIqMWed95Zri / ext4>
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/732e23ee-6ed0-4575-8104-943235f32cc2 /boot ext4 defaults 0 1
# /swap.img none swap sw 0 0
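- as an alternative to editing /etc/fstab by hand, a sed one-liner can comment out the swap entry; this is a sketch, so check with grep that the pattern matched only the swap line before rebooting:
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
grep swap /etc/fstab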
- reboot the host (or virtual machine)
- for u2004s01 u2004s02 u2004s03
sudo sysctl --system
- the response may contain the following error
...
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
...
sudo nano /usr/lib/sysctl.d/50-default.conf
- comment out the line:
# -net.ipv4.conf.all.promote_secondaries
- or prefix the setting with "-" so sysctl ignores errors for that key:
-net.ipv4.conf.all.promote_secondaries = 1
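- the same edit as a non-interactive sketch (the "-" prefix tells sysctl to ignore errors for that key):
sudo sed -i 's/^net\.ipv4\.conf\.all\.promote_secondaries/-&/' /usr/lib/sysctl.d/50-default.conf
sudo sysctl --system
- the "Invalid argument" message should no longer appear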
- read the article Letting iptables see bridged traffic
- read the article 2.3.5.3 br_netfilter Module: https://docs.oracle.com/en/operating-systems/olcne/1.1/start/netfilter.html
- verify that the br_netfilter module is loaded
- in our case, the command below does not return anything
sudo lsmod | grep br_netfilter
- for u2004s01 u2004s02 u2004s03
sudo modprobe br_netfilter
- for u2004s01 u2004s02 u2004s03
sudo nano /etc/modules-load.d/br_netfilter.conf
- insert the line below and reboot
br_netfilter
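- alternatively, load the module and persist it without a reboot (a sketch combining the two steps above):
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
lsmod | grep br_netfilter
- the last command should now print the module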
- read the article Cgroup v2
- read the article Introducing Docker Engine 20.10
- read the article How to enable Control Group v2
- read the article cgroup v2
- read the article Identifying a kernel
- for u2004s01 u2004s02 u2004s03
- in our case cgroup v2 is not yet enabled:
yury@u2004s01:~$ cat /sys/fs/cgroup/cgroup.controllers
cat: /sys/fs/cgroup/cgroup.controllers: No such file or directory
yury@u2004s01:~$ stat -c %T -f /sys/fs/cgroup
tmpfs
- for u2004s01 u2004s02 u2004s03
- in our case Host Requirements are satisfied
yury@u2004s02:~$ cat /proc/version_signature
Ubuntu 5.4.0-91.102-generic 5.4.151
- for u2004s01 u2004s02 u2004s03
cat /etc/default/grub | grep GRUB_CMDLINE_LINUX=
GRUB_CMDLINE_LINUX=""
sudo sed -i -e 's/^GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"/' /etc/default/grub
sudo update-grub
sudo reboot
- for u2004s01 u2004s02 u2004s03
yury@u2004s02:~$ stat -c %T -f /sys/fs/cgroup
cgroup2fs
yury@u2004s02:~$ cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory pids rdma
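- a one-line check that reports which cgroup version is active, based on the stat output above (a sketch; cgroup2fs means v2, tmpfs means v1):
[ "$(stat -c %T -f /sys/fs/cgroup)" = "cgroup2fs" ] && echo "cgroup v2" || echo "cgroup v1"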
- for u2004s01 u2004s02 u2004s03
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
- read the article Docker
- for u2004s01 u2004s02 u2004s03
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
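- verify that Docker picked up the systemd cgroup driver; the expected output includes "Cgroup Driver: systemd":
sudo docker info | grep -i cgroup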
- for u2004s01 u2004s02 u2004s03
sudo docker version
- the response:
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:33 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:42 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
- read the article Letting iptables see bridged traffic
- for u2004s01
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
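- apply the settings immediately (the upstream guide runs this instead of waiting for a reboot):
sudo sysctl --system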
- for u2004s01
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
- read the article Letting iptables see bridged traffic
- for u2004s02 u2004s03
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
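- as on the control-plane node, apply the settings immediately:
sudo sysctl --system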
- for u2004s02 u2004s03
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm
sudo apt-mark hold kubelet kubeadm
- for u2004s01
sudo kubeadm config print init-defaults
- the response:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
- note the defaults (a config-file sketch overriding them follows this list):
- kubernetesVersion: 1.23.0
- serviceSubnet: 10.96.0.0/12
- dnsDomain: cluster.local
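- the defaults can also be overridden with a kubeadm configuration file instead of CLI flags; a minimal sketch (the file name kubeadm-config.yaml is our own choice, and in this walkthrough we use CLI flags later instead):
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  dnsDomain: hardwaycluster.local
  podSubnet: 10.32.0.0/16
EOF
sudo kubeadm init --config kubeadm-config.yaml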
- read the article Initializing your control-plane node
- the required Pod network add-on will be installed after kubeadm init
- for u2004s01
sudo kubeadm init
- the response:
yury@u2004s01:~$ sudo kubeadm init
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local u2004s01] and IPs [10.96.0.1 192.168.100.41]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 44.711273 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zarvz8.a2n38u9eu86uynqi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.41:6443 --token zarvz8.a2n38u9eu86uynqi \
--discovery-token-ca-cert-hash sha256:5da955b329854f9a76df727a1f194b16c193965e366c76a5b50441924113f2dd
- we ignore the warning:
- [WARNING SystemVerification]: missing optional cgroups: hugetlb
- read the article More information
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
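- to confirm the kubeconfig works, query the API server:
kubectl cluster-info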
-
the response of kubeadm init has the following lines
- [certs] Using certificateDir folder "/etc/kubernetes/pki"
- [control-plane] Using manifest folder "/etc/kubernetes/manifests"
- [control-plane] Creating static Pod manifest for "kube-apiserver"
- [control-plane] Creating static Pod manifest for "kube-controller-manager"
- [control-plane] Creating static Pod manifest for "kube-scheduler"
- [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
- [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
-
- for u2004s01
yury@u2004s01:~$ sudo ls -l /etc/kubernetes/pki/
total 60
-rw-r--r-- 1 root root 1285 Dec 26 17:39 apiserver.crt
-rw-r--r-- 1 root root 1155 Dec 26 17:39 apiserver-etcd-client.crt
-rw------- 1 root root 1675 Dec 26 17:39 apiserver-etcd-client.key
-rw------- 1 root root 1675 Dec 26 17:39 apiserver.key
-rw-r--r-- 1 root root 1164 Dec 26 17:39 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Dec 26 17:39 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1099 Dec 26 17:39 ca.crt
-rw------- 1 root root 1679 Dec 26 17:39 ca.key
drwxr-xr-x 2 root root 4096 Dec 26 17:39 etcd
-rw-r--r-- 1 root root 1115 Dec 26 17:39 front-proxy-ca.crt
-rw------- 1 root root 1679 Dec 26 17:39 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Dec 26 17:39 front-proxy-client.crt
-rw------- 1 root root 1679 Dec 26 17:39 front-proxy-client.key
-rw------- 1 root root 1679 Dec 26 17:39 sa.key
-rw------- 1 root root 451 Dec 26 17:39 sa.pub
yury@u2004s01:~$ sudo ls -l /etc/kubernetes/manifests/
total 16
-rw------- 1 root root 2225 Dec 26 17:39 etcd.yaml
-rw------- 1 root root 4014 Dec 26 17:39 kube-apiserver.yaml
-rw------- 1 root root 3401 Dec 26 17:39 kube-controller-manager.yaml
-rw------- 1 root root 1435 Dec 26 17:39 kube-scheduler.yaml
- for u2004s01
sudo nano /etc/kubernetes/kubelet.conf
- the content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZ...URS0tLS0tCg==
    server: https://192.168.100.41:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:u2004s01
  name: system:node:u2004s01@kubernetes
current-context: system:node:u2004s01@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:u2004s01
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
- read the article Installing a Pod network add-on
- they say: Cluster DNS (CoreDNS) will not start up before a network is installed.
- read the article Control Plane Components
- make sure that
- the CoreDNS pods have Pending status
- the control plane components are installed and Running
- read the article kubectl: Display one or many resources
- for u2004s01
yury@u2004s01:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-p9bhr 0/1 Pending 0 27m <none> <none> <none> <none>
kube-system coredns-64897985d-zcmr6 0/1 Pending 0 27m <none> <none> <none> <none>
kube-system etcd-u2004s01 1/1 Running 0 27m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-apiserver-u2004s01 1/1 Running 0 27m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-controller-manager-u2004s01 1/1 Running 0 27m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-proxy-mb7dt 1/1 Running 0 27m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-scheduler-u2004s01 1/1 Running 0 27m 192.168.100.41 u2004s01 <none> <none>
- for u2004s01
yury@u2004s01:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
u2004s01 NotReady control-plane,master 37m v1.23.1 192.168.100.41 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
- read the article Installing a Pod network add-on
- read the article Install Calico with Kubernetes API datastore, 50 nodes or less
- for u2004s01
curl https://docs.projectcalico.org/manifests/calico.yaml -O
- for u2004s01
nano calico.yaml
- from inside nano
- press "^W" and type CALICO_IPV4POOL_CIDR
- uncomment the two lines
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
- still inside nano, modify the value as shown below and save the file
- the new value gives the range of IP addresses 192.168.0.1-192.168.63.254
- this does not overlap our local network (192.168.100.1-192.168.100.255)
- this does not overlap serviceSubnet: 10.96.0.0/12
value: "192.168.0.0/18"
- for u2004s01
kubectl apply -f calico.yaml
- for u2004s01
yury@u2004s01:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.3Gi       901Mi       4.9Gi       2.0Mi       1.5Gi       6.2Gi
Swap:            0B          0B          0B
yury@u2004s01:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
u2004s01 Ready control-plane,master 121m v1.23.1 192.168.100.41 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
yury@u2004s01:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-647d84984b-pwvq6 1/1 Running 0 36m 192.168.26.195 u2004s01 <none> <none>
kube-system calico-node-9flch 1/1 Running 0 36m 192.168.100.41 u2004s01 <none> <none>
kube-system coredns-64897985d-p9bhr 1/1 Running 0 145m 192.168.26.193 u2004s01 <none> <none>
kube-system coredns-64897985d-zcmr6 1/1 Running 0 145m 192.168.26.194 u2004s01 <none> <none>
kube-system etcd-u2004s01 1/1 Running 0 145m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-apiserver-u2004s01 1/1 Running 0 145m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-controller-manager-u2004s01 1/1 Running 0 145m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-proxy-mb7dt 1/1 Running 0 145m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-scheduler-u2004s01 1/1 Running 0 145m 192.168.100.41 u2004s01 <none> <none>
yury@u2004s01:~$ kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 146m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 146m
yury@u2004s01:~$ kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
"--service-cluster-ip-range=10.96.0.0/12",
yury@u2004s01:~$ kubectl cluster-info dump | grep -m 1 cluster-cidr
yury@u2004s01:~$
- Note: we had not set --pod-network-cidr for kubeadm init, so no cluster-cidr is shown.
- we did not run kubeadm reset on u2004s01 after the parameterless kubeadm init
- instead we deployed a new virtual machine with default settings (i.e. OpenSSH is enabled)
- u2004s01 192.168.100.41
- Then we went through the steps:
- for u2004s01
- pod-network-cidr 10.32.0.0/16 gives the range of IP addresses 10.32.0.1-10.32.255.254
sudo kubeadm init --service-dns-domain=hardwaycluster.local --pod-network-cidr=10.32.0.0/16
- the response:
yury@u2004s01:~$ sudo kubeadm init --service-dns-domain=hardwaycluster.local --pod-network-cidr=10.32.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.hardwaycluster.local u2004s01] and IPs [10.96.0.1 192.168.100.41]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 61.586215 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4apgbp.mb0o1oaqx4d2zysg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.41:6443 --token 4apgbp.mb0o1oaqx4d2zysg \
--discovery-token-ca-cert-hash sha256:1353dca07ac324902a1356565f59d35f92456e0113b301e9ce4d0e64d9ade678
yury@u2004s01:~$ mkdir -p $HOME/.kube
yury@u2004s01:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
yury@u2004s01:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
yury@u2004s01:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
u2004s01 NotReady control-plane,master 2m48s v1.23.1 192.168.100.41 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
- read the article Install Calico with Kubernetes API datastore, 50 nodes or less
- they say: If you are using a different pod CIDR with kubeadm, no changes are required - Calico will automatically detect the CIDR based on the running configuration
- for u2004s01
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
yury@u2004s01:~$ kubectl cluster-info dump | grep -m 1 cluster-cidr
"--cluster-cidr=10.32.0.0/16",
yury@u2004s01:~$ kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
"--service-cluster-ip-range=10.96.0.0/12",
yury@u2004s01:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
u2004s01 Ready control-plane,master 10m v1.23.1 192.168.100.41 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
yury@u2004s01:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-647d84984b-9l6b5 1/1 Running 0 3m 10.32.26.194 u2004s01 <none> <none>
kube-system calico-node-7d8nr 1/1 Running 0 3m1s 192.168.100.41 u2004s01 <none> <none>
kube-system coredns-64897985d-qvwsr 1/1 Running 0 10m 10.32.26.195 u2004s01 <none> <none>
kube-system coredns-64897985d-rlq97 1/1 Running 0 10m 10.32.26.193 u2004s01 <none> <none>
kube-system etcd-u2004s01 1/1 Running 0 10m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-apiserver-u2004s01 1/1 Running 0 10m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-controller-manager-u2004s01 1/1 Running 0 10m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-proxy-tsm8n 1/1 Running 0 10m 192.168.100.41 u2004s01 <none> <none>
kube-system kube-scheduler-u2004s01 1/1 Running 0 10m 192.168.100.41 u2004s01 <none> <none>
- we deployed two new virtual machines with default settings (i.e. OpenSSH is enabled)
- u2004s02 192.168.100.42
- u2004s03 192.168.100.43
- Then we went through the worker-node steps above
- take a look at the response of kubeadm init
- Note: by default, tokens expire after 24 hours.
...
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.41:6443 --token 4apgbp.mb0o1oaqx4d2zysg \
--discovery-token-ca-cert-hash sha256:1353dca07ac324902a1356565f59d35f92456e0113b301e9ce4d0e64d9ade678
- for u2004s01
yury@u2004s01:~$ kubeadm token create
os6p1u.8vnqrfgp8brl03p9
- for u2004s01
yury@u2004s01:~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
1353dca07ac324902a1356565f59d35f92456e0113b301e9ce4d0e64d9ade678
- for u2004s02 u2004s03
sudo kubeadm join 192.168.100.41:6443 --token os6p1u.8vnqrfgp8brl03p9 --discovery-token-ca-cert-hash sha256:1353dca07ac324902a1356565f59d35f92456e0113b301e9ce4d0e64d9ade678
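- alternatively, kubeadm can generate a new token and print the complete join command in one step, replacing the manual openssl hash computation above:
kubeadm token create --print-join-command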
- the response for u2004s03:
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1227 10:27:05.102615 8229 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
- read the article Node Components
- for u2004s01
yury@u2004s01:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-647d84984b-9l6b5 1/1 Running 1 (13h ago) 13h 10.32.26.196 u2004s01 <none> <none>
kube-system calico-node-69lzz 1/1 Running 0 25m 192.168.100.43 u2004s03 <none> <none>
kube-system calico-node-7d8nr 1/1 Running 1 (13h ago) 13h 192.168.100.41 u2004s01 <none> <none>
kube-system calico-node-8z6xt 1/1 Running 0 5m25s 192.168.100.42 u2004s02 <none> <none>
kube-system coredns-64897985d-qvwsr 1/1 Running 1 (68m ago) 13h 10.32.26.198 u2004s01 <none> <none>
kube-system coredns-64897985d-rlq97 1/1 Running 1 (68m ago) 13h 10.32.26.197 u2004s01 <none> <none>
kube-system etcd-u2004s01 1/1 Running 1 (13h ago) 13h 192.168.100.41 u2004s01 <none> <none>
kube-system kube-apiserver-u2004s01 1/1 Running 1 (13h ago) 13h 192.168.100.41 u2004s01 <none> <none>
kube-system kube-controller-manager-u2004s01 1/1 Running 1 (13h ago) 13h 192.168.100.41 u2004s01 <none> <none>
kube-system kube-proxy-cws2j 1/1 Running 0 5m25s 192.168.100.42 u2004s02 <none> <none>
kube-system kube-proxy-dddds 1/1 Running 0 25m 192.168.100.43 u2004s03 <none> <none>
kube-system kube-proxy-tsm8n 1/1 Running 1 (13h ago) 13h 192.168.100.41 u2004s01 <none> <none>
kube-system kube-scheduler-u2004s01 1/1 Running 1 (13h ago) 13h 192.168.100.41 u2004s01 <none> <none>
- new kube-proxy pods appeared on the worker nodes:
- kube-proxy-cws2j 192.168.100.42 u2004s02
- kube-proxy-dddds 192.168.100.43 u2004s03
- for u2004s01
yury@u2004s01:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
u2004s01 Ready control-plane,master 13h v1.23.1
u2004s02 Ready <none> 17m v1.23.1
u2004s03 Ready <none> 37m v1.23.1
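- the workers show ROLES as <none>; the role column is cosmetic, but it can be filled in with a label if desired (optional, our own convention):
kubectl label node u2004s02 node-role.kubernetes.io/worker=
kubectl label node u2004s03 node-role.kubernetes.io/worker=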
- read the article Certificate Management with kubeadm
- for u2004s01
yury@u2004s01:~$ cd /etc/kubernetes/pki
yury@u2004s01:/etc/kubernetes/pki$ ls -l
total 60
-rw-r--r-- 1 root root 1289 Dec 30 10:22 apiserver.crt
-rw-r--r-- 1 root root 1155 Dec 30 10:22 apiserver-etcd-client.crt
-rw------- 1 root root 1675 Dec 30 10:22 apiserver-etcd-client.key
-rw------- 1 root root 1679 Dec 30 10:22 apiserver.key
-rw-r--r-- 1 root root 1164 Dec 30 10:22 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Dec 30 10:22 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1099 Dec 30 10:22 ca.crt
-rw------- 1 root root 1679 Dec 30 10:22 ca.key
drwxr-xr-x 2 root root 4096 Dec 30 10:22 etcd
-rw-r--r-- 1 root root 1115 Dec 30 10:22 front-proxy-ca.crt
-rw------- 1 root root 1679 Dec 30 10:22 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Dec 30 10:22 front-proxy-client.crt
-rw------- 1 root root 1675 Dec 30 10:22 front-proxy-client.key
-rw------- 1 root root 1679 Dec 30 10:22 sa.key
-rw------- 1 root root 451 Dec 30 10:22 sa.pub
- for u2004s01
yury@u2004s01:/etc/kubernetes/pki$ openssl x509 -noout -text -in ca.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 30 10:22:21 2021 GMT
            Not After : Dec 28 10:22:21 2031 GMT
        Subject: CN = kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:fd:c4:d1:08:c9:44:dd:97:ac:ef:d7:ae:74:46:
                    ...
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                3C:61:48:36:9A:E7:3A:1B:02:11:66:CA:BC:D2:C3:7D:FE:64:0E:B6
            X509v3 Subject Alternative Name:
                DNS:kubernetes
    Signature Algorithm: sha256WithRSAEncryption
         a1:e4:cf:86:74:47:b7:f5:4b:7a:18:51:17:5f:a8:8f:99:49:
         ...
- Note: Issuer == Subject, i.e. the cluster CA certificate is self-signed (and valid for 10 years)
- for u2004s01
yury@u2004s01:/etc/kubernetes/pki$ openssl x509 -noout -text -in front-proxy-ca.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = front-proxy-ca
        Validity
            Not Before: Dec 30 10:22:23 2021 GMT
            Not After : Dec 28 10:22:23 2031 GMT
        Subject: CN = front-proxy-ca
        ...
- for u2004s01
yury@u2004s01:/etc/kubernetes/pki$ openssl x509 -noout -text -in apiserver.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2102938930753048706 (0x1d2f23d0cfe01082)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 30 10:22:21 2021 GMT
            Not After : Dec 30 10:22:22 2022 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
        ...
- read the article Check certificate expiration (a quick check follows this list)
- read the article Automatic certificate renewal
- read the article Upgrading kubeadm clusters
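- the expiration dates of all kubeadm-managed certificates can be listed with one command:
sudo kubeadm certs check-expiration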
- The firewall was left disabled
sudo ufw disable
yury@u2004s01:~$ sudo ufw status
Status: inactive
- read the article Ports and Protocols