U1.49 Ubuntu Quick Start (QS): Kubernetes test bench on premises
- Note: in our future activities we will not pay much attention to the quorum of the control-plane nodes, but we will be interested in the quorum of the worker nodes. So we create a single-control-plane cluster.
- To get detailed information related to the creation of such a cluster (step-by-step instructions), read the article U1.47 Ubuntu Quick Start (QS): Kubernetes with kubeadm and Docker on premises. Single control plane
- To get detailed information related to the creation of an HA cluster (step-by-step instructions), read the article U1.48 Ubuntu Quick Start (QS): Kubernetes with kubeadm and Docker on premises. HA cluster.
- On the Hyper-V side, every virtual machine was configured as follows:
- Settings/Memory/RAM = 2048 MB
- Settings/Memory/Enable Dynamic Memory = Yes
- Settings/Memory/Minimum RAM = 2048 MB
- Settings/Memory/Maximum RAM = 1048576 MB
- Settings/Number of virtual processors = 2
- We plan to use the virtual machines as follows:

| Host | IP address | Role |
|----------|----------------|--------------------|
| u2004s01 | 192.168.100.61 | Control-plane node |
| u2004s02 | 192.168.100.62 | Worker node |
| u2004s03 | 192.168.100.63 | Worker node |
| u2004s04 | 192.168.100.64 | Worker node |
- On u2004s01 (192.168.100.61) the cluster is initialized with:
  - --pod-network-cidr=10.32.0.0/16
  - serviceSubnet=10.96.0.0/12 (the kubeadm default, kept as-is)
  - cluster DNS domain (--service-dns-domain) = testcluster.local
  - pod network add-on: Calico
```bash
sudo kubeadm init --service-dns-domain=testcluster.local --pod-network-cidr=10.32.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
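The same settings can also be given to kubeadm as a configuration file instead of flags. A minimal sketch, assuming the kubeadm.k8s.io/v1beta3 config API that kubeadm v1.23 uses (the file name kubeadm-config.yaml is our choice):

```bash
# Write a ClusterConfiguration equivalent to the flags above, then init from it.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  dnsDomain: testcluster.local   # --service-dns-domain
  podSubnet: 10.32.0.0/16        # --pod-network-cidr
  serviceSubnet: 10.96.0.0/12    # the kubeadm default, stated explicitly
EOF
sudo kubeadm init --config kubeadm-config.yaml
```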
Response of `kubeadm init`:
```text
yury@u2004s01:~$ sudo kubeadm init --service-dns-domain=testcluster.local --pod-network-cidr=10.32.0.0/16
[sudo] password for yury:
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.testcluster.local u2004s01] and IPs [10.96.0.1 192.168.100.61]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.61 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost u2004s01] and IPs [192.168.100.61 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.006920 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node u2004s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2xnc9g.utoa78k1xcks6vz6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
sudo kubeadm join 192.168.100.61:6443 --token 2xnc9g.utoa78k1xcks6vz6 \
  --discovery-token-ca-cert-hash sha256:7a51575ae9d7154f6fc9990f2ed9fb2a9cfddfb1a9ba77ba2e65ba816d38f9a0
```
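The bootstrap token printed above is valid for 24 hours by default. If it has expired by the time a worker needs to join, a fresh join command can be printed on the control-plane node:

```bash
sudo kubeadm token create --print-join-command
```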
Cluster information:
```text
yury@u2004s01:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
u2004s01 Ready control-plane,master 57m v1.23.1 192.168.100.61 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
u2004s02 Ready <none> 50m v1.23.1 192.168.100.62 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
u2004s03 Ready <none> 49m v1.23.1 192.168.100.63 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
u2004s04 Ready <none> 48m v1.23.1 192.168.100.64 <none> Ubuntu 20.04.3 LTS 5.4.0-91-generic docker://20.10.12
yury@u2004s01:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-647d84984b-4xv98 1/1 Running 0 55m 10.32.26.193 u2004s01 <none> <none>
kube-system calico-node-2pc66 1/1 Running 0 55m 192.168.100.61 u2004s01 <none> <none>
kube-system calico-node-bclwf 1/1 Running 0 48m 192.168.100.64 u2004s04 <none> <none>
kube-system calico-node-cl4lg 1/1 Running 0 49m 192.168.100.63 u2004s03 <none> <none>
kube-system calico-node-r752m 1/1 Running 0 51m 192.168.100.62 u2004s02 <none> <none>
kube-system coredns-64897985d-86jg5 1/1 Running 0 57m 10.32.26.195 u2004s01 <none> <none>
kube-system coredns-64897985d-lp8p8 1/1 Running 0 57m 10.32.26.194 u2004s01 <none> <none>
kube-system etcd-u2004s01 1/1 Running 0 57m 192.168.100.61 u2004s01 <none> <none>
kube-system kube-apiserver-u2004s01 1/1 Running 0 57m 192.168.100.61 u2004s01 <none> <none>
kube-system kube-controller-manager-u2004s01 1/1 Running 0 57m 192.168.100.61 u2004s01 <none> <none>
kube-system kube-proxy-cdl5n 1/1 Running 0 57m 192.168.100.61 u2004s01 <none> <none>
kube-system kube-proxy-jzcgp 1/1 Running 0 49m 192.168.100.63 u2004s03 <none> <none>
kube-system kube-proxy-mg2fj 1/1 Running 0 48m 192.168.100.64 u2004s04 <none> <none>
kube-system kube-proxy-vmgfh 1/1 Running 0 51m 192.168.100.62 u2004s02 <none> <none>
kube-system kube-scheduler-u2004s01 1/1 Running 0 57m 192.168.100.61 u2004s01 <none> <none>
yury@u2004s01:~$ kubectl get services -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 58m <none>
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 58m k8s-app=kube-dns
```
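To check that cluster DNS really answers under the custom testcluster.local domain, a throwaway pod can resolve the API service name (a quick smoke test; busybox:1.28 is commonly used here because its nslookup behaves reliably):

```bash
kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup kubernetes.default.svc.testcluster.local
```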
- read the article Installing Helm
- for u2004s01 only
```bash
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```
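To verify the installation:

```bash
helm version --short
```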
- Step-by-step instructions for NFS Dynamic Persistent Volumes Provisioner U1.34 Ubuntu Quick Start (QS): NFS persistent storage on Kubernetes on premises
- Step-by-step instructions for Ceph Dynamic Persistent Volumes Provisioner U1.40 Ubuntu Quick Start (QS): Kubernetes and Ceph clusters on premises. RDB
- Step-by-step instructions for deploying a Ceph cluster U1.39 Ubuntu Quick Start (QS): Ceph cluster
- read the article Local Path Provisioner
- read the article Multiple Local Path Provisioners in the same cluster
- read the article Installing the Chart
- To deploy the first provisioner:
- for u2004s01 only
```bash
git clone https://github.com/rancher/local-path-provisioner.git
cp local-path-provisioner/deploy/chart/values.yaml local-path-provisioner/deploy/chart/first-values.yaml
nano local-path-provisioner/deploy/chart/first-values.yaml
```
- In the first-values.yaml file we set:
```yaml
...
storageClass:
  ...
  provisionerName: rancher.io/first-local-path
  ...
  name: first-local-path
...
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/first-local-path-provisioner
```
The complete first-values.yaml:
```yaml
# Default values for local-path-provisioner.
replicaCount: 1
image:
  repository: rancher/local-path-provisioner
  tag: v0.0.21
  pullPolicy: IfNotPresent
helperImage:
  repository: busybox
  tag: latest
defaultSettings:
  registrySecret: ~
privateRegistry:
  registryUrl: ~
  registryUser: ~
  registryPasswd: ~
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
## For creating the StorageClass automatically:
storageClass:
  create: true
  ## Set a provisioner name. If unset, a name will be generated.
  provisionerName: rancher.io/first-local-path
  ## Set StorageClass as the default StorageClass
  ## Ignored if storageClass.create is false
  defaultClass: false
  ## Set a StorageClass name
  ## Ignored if storageClass.create is false
  name: first-local-path
  ## ReclaimPolicy field of the class, which can be either Delete or Retain
  reclaimPolicy: Delete
# nodePathMap is the place user can customize where to store the data on each node.
# 1. If one node is not listed on the nodePathMap, and Kubernetes wants to create volume on it, the paths specified in
#    DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
# 2. If one node is listed on the nodePathMap, the specified paths will be used for provisioning.
#    1. If one node is listed but with paths set to [], the provisioner will refuse to provision on this node.
#    2. If more than one path was specified, the path would be chosen randomly when provisioning.
#
# The configuration must obey following rules:
# 1. A path must start with /, a.k.a an absolute path.
# 2. Root directory (/) is prohibited.
# 3. No duplicate paths allowed for one node.
# 4. No duplicate node allowed.
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/first-local-path-provisioner
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
rbac:
  # Specifies whether RBAC resources should be created
  create: true
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
nodeSelector: {}
tolerations: []
affinity: {}
configmap:
  # specify the config map name
  name: local-path-config
  # specify the custom script for setup and teardown
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    rm -rf ${absolutePath}
  # specify the custom helper pod yaml
  helperPod: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox
        imagePullPolicy: IfNotPresent
# Number of provisioner worker threads to call provision/delete simultaneously.
# workerThreads: 4
# Number of retries of failed volume provisioning. 0 means retry indefinitely.
# provisioningRetryCount: 15
# Number of retries of failed volume deletion. 0 means retry indefinitely.
# deletionRetryCount: 15
```
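Before installing, the edited values can be checked against the chart (an optional sanity check; helm lint accepts the same -f flag as helm install):

```bash
helm lint local-path-provisioner/deploy/chart/ -f local-path-provisioner/deploy/chart/first-values.yaml
```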
- To create the provisioner:
- for u2004s01 only
```bash
kubectl create namespace first-local-path-storage
helm install first-local-path-storage local-path-provisioner/deploy/chart/ --namespace first-local-path-storage -f local-path-provisioner/deploy/chart/first-values.yaml
```
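A quick way to exercise the new StorageClass is a throwaway PVC plus a pod that mounts it (a sketch; the name first-local-path-test is our choice). Because the class uses volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending until the pod is scheduled:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: first-local-path-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: first-local-path
  resources:
    requests:
      storage: 128Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: first-local-path-test
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: first-local-path-test
EOF
# watch the PVC bind once the pod is scheduled, then clean up:
kubectl get pvc first-local-path-test
kubectl delete pod first-local-path-test && kubectl delete pvc first-local-path-test
```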
- Note: to delete the provisioner
- for u2004s01 only
```bash
helm delete first-local-path-storage --namespace first-local-path-storage
kubectl delete namespace first-local-path-storage
```
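To confirm that the release and its StorageClass are gone:

```bash
helm list --namespace first-local-path-storage
kubectl get storageclass
```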
- To deploy the second provisioner and the default StorageClass:
- for u2004s01 only
```bash
# the repository was already cloned for the first provisioner; skip the clone if it is still present
git clone https://github.com/rancher/local-path-provisioner.git
cp local-path-provisioner/deploy/chart/values.yaml local-path-provisioner/deploy/chart/second-values.yaml
nano local-path-provisioner/deploy/chart/second-values.yaml
```
- In the second-values.yaml file we set:
```yaml
...
storageClass:
  ...
  provisionerName: rancher.io/second-local-path
  ...
  name: second-local-path
  ...
  defaultClass: true
...
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/second-local-path-provisioner
```
The complete second-values.yaml:
```yaml
# Default values for local-path-provisioner.
replicaCount: 1
image:
  repository: rancher/local-path-provisioner
  tag: v0.0.21
  pullPolicy: IfNotPresent
helperImage:
  repository: busybox
  tag: latest
defaultSettings:
  registrySecret: ~
privateRegistry:
  registryUrl: ~
  registryUser: ~
  registryPasswd: ~
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
## For creating the StorageClass automatically:
storageClass:
  create: true
  ## Set a provisioner name. If unset, a name will be generated.
  provisionerName: rancher.io/second-local-path
  ## Set StorageClass as the default StorageClass
  ## Ignored if storageClass.create is false
  defaultClass: true
  ## Set a StorageClass name
  ## Ignored if storageClass.create is false
  name: second-local-path
  ## ReclaimPolicy field of the class, which can be either Delete or Retain
  reclaimPolicy: Delete
# nodePathMap is the place user can customize where to store the data on each node.
# 1. If one node is not listed on the nodePathMap, and Kubernetes wants to create volume on it, the paths specified in
#    DEFAULT_PATH_FOR_NON_LISTED_NODES will be used for provisioning.
# 2. If one node is listed on the nodePathMap, the specified paths will be used for provisioning.
#    1. If one node is listed but with paths set to [], the provisioner will refuse to provision on this node.
#    2. If more than one path was specified, the path would be chosen randomly when provisioning.
#
# The configuration must obey following rules:
# 1. A path must start with /, a.k.a an absolute path.
# 2. Root directory (/) is prohibited.
# 3. No duplicate paths allowed for one node.
# 4. No duplicate node allowed.
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/second-local-path-provisioner
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
rbac:
  # Specifies whether RBAC resources should be created
  create: true
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
nodeSelector: {}
tolerations: []
affinity: {}
configmap:
  # specify the config map name
  name: local-path-config
  # specify the custom script for setup and teardown
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    rm -rf ${absolutePath}
  # specify the custom helper pod yaml
  helperPod: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox
        imagePullPolicy: IfNotPresent
# Number of provisioner worker threads to call provision/delete simultaneously.
# workerThreads: 4
# Number of retries of failed volume provisioning. 0 means retry indefinitely.
# provisioningRetryCount: 15
# Number of retries of failed volume deletion. 0 means retry indefinitely.
# deletionRetryCount: 15
```
- To create the provisioner:
- for u2004s01 only
```bash
kubectl create namespace second-local-path-storage
helm install second-local-path-storage local-path-provisioner/deploy/chart/ --namespace second-local-path-storage -f local-path-provisioner/deploy/chart/second-values.yaml
```
- Note: to delete the provisioner
- for u2004s01 only
```bash
helm delete second-local-path-storage --namespace second-local-path-storage
kubectl delete namespace second-local-path-storage
```
- Here is the result:
```text
yury@u2004s01:~$ kubectl get storageClass --all-namespaces -o wide
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
first-local-path rancher.io/first-local-path Delete WaitForFirstConsumer true 10m
second-local-path (default) rancher.io/second-local-path Delete WaitForFirstConsumer true 25s
```
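Only one StorageClass should carry the default annotation. If the default ever has to be moved from one class to the other, the standard Kubernetes annotation can be flipped with kubectl patch:

```bash
# demote the current default, then promote the other class
kubectl patch storageclass second-local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass first-local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```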
- To deploy Portainer:
```bash
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
```
- Here is the result:
```text
yury@u2004s01:~$ kubectl get pods -o wide -n portainer
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
portainer-685c4f4bfc-2vvsj 1/1 Running 0 108s 10.32.121.130 u2004s04 <none> <none>
```
- From a browser on the Hyper-V host, eight URLs are available (Portainer's HTTP NodePort 30777 and HTTPS NodePort 30779 answer on every node):
http://192.168.100.61:30777/
http://192.168.100.62:30777/
http://192.168.100.63:30777/
http://192.168.100.64:30777/
https://192.168.100.61:30779/
https://192.168.100.62:30779/
https://192.168.100.63:30779/
https://192.168.100.64:30779/
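The two NodePort numbers come from the Portainer Service and can be confirmed with:

```bash
kubectl get service -n portainer
```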
Summary:
```text
yury@u2004s01:~$ kubectl get sc -A
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
first-local-path rancher.io/first-local-path Delete WaitForFirstConsumer true 3d23h
second-local-path (default) rancher.io/second-local-path Delete WaitForFirstConsumer true 3d23h
yury@u2004s01:~$ kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-ecad1077-2bb5-4835-9629-9fc8af9dc690 10Gi RWO Delete Bound portainer/portainer second-local-path 3d23h
yury@u2004s01:~$ kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
portainer portainer Bound pvc-ecad1077-2bb5-4835-9629-9fc8af9dc690 10Gi RWO second-local-path 3d23h
```
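Since the PV was provisioned by the default class, its backing directory lives under /opt/second-local-path-provisioner on the node that runs the Portainer pod (u2004s04 in the listing above). The exact directory name includes the PV name and can differ between provisioner versions, so a plain listing is the easiest check:

```bash
# run on u2004s04; the entry is expected to contain the PV name,
# e.g. pvc-ecad1077-2bb5-4835-9629-9fc8af9dc690 (naming varies by provisioner version)
ls -l /opt/second-local-path-provisioner/
```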