Deploy Kubernetes cluster using Kubespray
caprivm ([email protected])
Updated: January 2, 2023
This page shows how to install a Kubernetes cluster using the open-source tool Kubespray. The purpose is to deploy, from a `deployment-machine`, a Kubernetes cluster on `n` nodes created as virtual machines. The sizing used for the `deployment-machine` is:
| Feature | Value |
|---|---|
| OS Used | Ubuntu 22.04 LTS |
| vCPU | 2 |
| RAM (GB) | 4 |
| Disk (GB) | 50 |
| Home user | ubuntu |
| Kubespray Tag | v2.19.1 |
For practical purposes, this guide configures 3 nodes (1 master node, 2 worker nodes). The nodes used for the Kubernetes cluster have the following sizing; note that it is a bare minimum, and any additional resources will improve cluster performance:
| Feature | Value |
|---|---|
| OS Used | Ubuntu 22.04 LTS |
| vCPU | 4 |
| RAM (GB) | 8 |
| Disk (GB) | 80 |
| Home user | ubuntu |
| Number of NICs | 2 (ens160, ens192) |
Before executing the step-by-step of this guide, make sure the `deployment-machine` from which you will install the cluster has the cluster management tools (`kubectl` and `helm`, both used later in this guide) installed.
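If they are missing, one common way to install them on Ubuntu is sketched below; it uses the upstream `dl.k8s.io` release location for `kubectl` and the official Helm installer script. Pin versions as needed for your cluster:

```bash
# Install kubectl (latest stable; substitute a pinned version such as v1.23.x if preferred)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Install helm via the official installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```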
The list of environment variables used for this implementation is summarized in the following `exports`:
```bash
export NODE_USER="ubuntu"
export KUBESPRAY_TAG="v2.19.1"
export CLUSTER_NAME="myk8s-cluster"
export NODES_HOSTNAME_PREFIX="k8scompute-"
```
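These variables only live in the current shell. If you expect to return to the deployment in a new session, you can persist them; a simple sketch appending them to `~/.bashrc`:

```bash
# Persist the deployment variables for future shell sessions
cat >> ~/.bashrc <<'EOF'
export NODE_USER="ubuntu"
export KUBESPRAY_TAG="v2.19.1"
export CLUSTER_NAME="myk8s-cluster"
export NODES_HOSTNAME_PREFIX="k8scompute-"
EOF
source ~/.bashrc
```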
To make this guide more practical, three nodes are deployed in virtual machines with the following IPs:
| Node | IP |
|---|---|
| node-1 | 192.168.1.10 |
| node-2 | 192.168.1.20 |
| node-3 | 192.168.1.30 |
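Before going further, you can verify that the `deployment-machine` actually reaches those IPs; a quick sketch using the addresses from the table above:

```bash
# One ICMP probe per node; -W 2 caps the wait at two seconds
for ip in 192.168.1.10 192.168.1.20 192.168.1.30; do
  ping -c 1 -W 2 "$ip" >/dev/null && echo "$ip reachable" || echo "$ip NOT reachable"
done
```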
NOTE: Replace or configure each of the variables according to your environment.
This section shows how to deploy a Kubernetes cluster using open source tools such as Kubespray.
On each of the nodes provisioned for the Kubernetes deployment, update the system and open the sudoers file:

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y vim
sudo visudo
```
Add the next line:
```diff
 # User privilege specification
 root ALL=(ALL:ALL) ALL
+ubuntu ALL=(ALL) NOPASSWD:ALL

 # Members of the admin group may gain root privileges
 %admin ALL=(ALL) ALL
```
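To confirm the change took effect, this quick check (run on each node) should print the message without asking for a password:

```bash
# -n (non-interactive) makes sudo fail instead of prompting
sudo -n true && echo "passwordless sudo OK"
```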
Considering that you have a `deployment-machine` from which you run all the cluster management commands, make sure this machine can reach all the nodes in your cluster without needing a password:
```bash
ssh-keygen
# Pass the deployment machine public key to each node
ssh-copy-id -i ~/.ssh/id_rsa $NODE_USER@192.168.1.10
ssh-copy-id -i ~/.ssh/id_rsa $NODE_USER@192.168.1.20
ssh-copy-id -i ~/.ssh/id_rsa $NODE_USER@192.168.1.30
# Confirm target doesn't need password and repeat in all the cluster nodes
ssh $NODE_USER@192.168.1.10
ssh $NODE_USER@192.168.1.20
ssh $NODE_USER@192.168.1.30
```
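The three `ssh` logins above are interactive; if you prefer a non-interactive check, this sketch fails loudly instead of prompting:

```bash
# BatchMode=yes disables password prompts, so a missing key shows up as an error
for ip in 192.168.1.10 192.168.1.20 192.168.1.30; do
  ssh -o BatchMode=yes "$NODE_USER@$ip" hostname || echo "key-based login failed for $ip"
done
```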
Create an `alias` in your deployment machine for each node so that you can access it more easily:

```bash
vi ~/.bashrc
```
Add the next lines:
```diff
 #export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'

 # some more ls aliases
 alias ll='ls -alF'
 alias la='ls -A'
 alias l='ls -CF'
+alias k8scompute-1='ssh ubuntu@192.168.1.10'
+alias k8scompute-2='ssh ubuntu@192.168.1.20'
+alias k8scompute-3='ssh ubuntu@192.168.1.30'

 # Add an "alert" alias for long running commands. Use like so:
 #   sleep 10; alert
```
Load the new aliases:

```bash
source ~/.bashrc
```
Install the Python prerequisites on this machine:
```bash
sudo apt install -y python3-pip
sudo pip3 install --upgrade pip
```
Clone and configure the Kubespray repository:
```bash
# Clone Kubespray repository
cd && git clone https://github.com/kubernetes-sigs/kubespray.git ~/kubespray
cd ~/kubespray && git checkout tags/$KUBESPRAY_TAG -b $KUBESPRAY_TAG
# Install dependencies from requirements.txt
sudo pip3 install -r requirements.txt
# Copy inventory/sample as inventory/$CLUSTER_NAME
cp -rfp inventory/sample inventory/$CLUSTER_NAME
# Declare the IPs of the ubuntu nodes
declare -a IPS=(192.168.1.10 192.168.1.20 192.168.1.30)
# Update the ansible inventory with the variables related to the nodes
HOST_PREFIX=$NODES_HOSTNAME_PREFIX KUBE_MASTERS=1 CONFIG_FILE=inventory/$CLUSTER_NAME/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
Now, to deploy a full cluster, some additional variables may be needed. The configuration file is usually passed as an external variables file to the `ansible-playbook` command used for cluster deployment. Create a `deploy_ansible_vars.yaml` file:

```bash
vim ~/kubespray/inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml
```
Add the next lines:
```yaml
# Ansible variables
ansible_user: ubuntu   # literal value of NODE_USER; shell variables are not expanded in this file
bootstrap_os: ubuntu   # the target OS family, not the user
# App variables
helm_enabled: true
kubevirt_enabled: false
local_volume_provisioner_enabled: true
# Networking configuration
kube_network_plugin: calico
kube_network_plugin_multus: true
multus_version: stable
kube_pods_subnet: 10.233.64.0/18
kube_apiserver_port: 6443
kube_network_node_prefix: 24
kube_service_addresses: 10.233.0.0/18
kube_proxy_mode: ipvs
kube_apiserver_node_port_range: 30000-36767
# Kubernetes configuration
kubeadm_control_plane: true
etcd_kubeadm_enabled: true
kube_config_dir: /etc/kubernetes
kube_version: v1.23.7
dashboard_enabled: true
# cgroup configuration
docker_cgroup_driver: systemd
kubelet_cgroup_driver: systemd
# DNS configuration
dns_mode: coredns
resolvconf_mode: host_resolvconf
dns_domain: cluster.local
enable_nodelocaldns: true
nameservers:
  - 8.8.8.8
  - 8.8.4.4
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
# Swap configuration
disable_swap: true
```
You can check what the generated inventory looks like using `cat ~/kubespray/inventory/$CLUSTER_NAME/hosts.yml`.
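For reference, the inventory builder groups the nodes into `kube_control_plane`, `kube_node`, and `etcd`. The output should resemble the following illustrative sketch (your hostnames and IPs may differ):

```bash
cat ~/kubespray/inventory/$CLUSTER_NAME/hosts.yml
# all:
#   hosts:
#     k8scompute-1:
#       ansible_host: 192.168.1.10
#       ip: 192.168.1.10
#       access_ip: 192.168.1.10
#     ...
#   children:
#     kube_control_plane:
#       hosts:
#         k8scompute-1:
#     kube_node:
#       hosts:
#         k8scompute-1:
#         k8scompute-2:
#         k8scompute-3:
#     etcd:
#       hosts:
#         k8scompute-1:
```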
Now deploy the Kubernetes cluster using:
```bash
cd ~/kubespray
ansible-playbook -b -i inventory/$CLUSTER_NAME/hosts.yml cluster.yml --user $NODE_USER -e @inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml
```
Wait until the Kubernetes cluster is deployed; the playbook takes a while.
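If the run fails early, a minimal connectivity check with Ansible's built-in `ping` module can help separate SSH or sudo problems from Kubespray ones:

```bash
# Verifies Ansible can log in and escalate privileges on every inventory host
ansible all -b -i inventory/$CLUSTER_NAME/hosts.yml -m ping --user $NODE_USER
```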
Go to one of the `master` nodes using its alias (`k8scompute-1`) and verify the cluster operation:
```bash
# In master node
mkdir -p ~/.kube
sudo cat /root/.kube/config > ~/.kube/config
sudo chown $NODE_USER:$NODE_USER ~/.kube/config
sudo chmod go-r ~/.kube/config
kubectl get pods -A
# NAMESPACE     NAME                                          READY   STATUS    RESTARTS      AGE
# kube-system   calico-kube-controllers-bd5fc6b6-vcmwt        1/1     Running   0             21h
# kube-system   calico-node-9rn2t                             1/1     Running   0             21h
# kube-system   calico-node-cp6lp                             1/1     Running   0             21h
# kube-system   calico-node-sdp5k                             1/1     Running   0             21h
# kube-system   coredns-5c5b9c5cb-gcdfk                       1/1     Running   0             21h
# kube-system   coredns-5c5b9c5cb-r6frv                       1/1     Running   0             21h
# kube-system   dns-autoscaler-7979fb6659-kxjbp               1/1     Running   0             21h
# kube-system   etcd-k8scompute-1                             1/1     Running   0             21h
# kube-system   kube-apiserver-k8scompute-1                   1/1     Running   1             21h
# kube-system   kube-controller-manager-k8scompute-1          1/1     Running   2 (21h ago)   21h
# kube-system   kube-proxy-dfdpl                              1/1     Running   0             21h
# kube-system   kube-proxy-s6l69                              1/1     Running   0             21h
# kube-system   kube-proxy-tch4k                              1/1     Running   0             21h
# kube-system   kube-scheduler-k8scompute-1                   1/1     Running   2 (21h ago)   21h
# kube-system   kubernetes-dashboard-6b49db6997-g4l7v         1/1     Running   0             21h
# kube-system   kubernetes-metrics-scraper-5dc755864d-ljdzj   1/1     Running   0             21h
# kube-system   nginx-proxy-k8scompute-2                      1/1     Running   0             21h
# kube-system   nginx-proxy-k8scompute-3                      1/1     Running   0             21h
# kube-system   nodelocaldns-rvtqx                            1/1     Running   0             21h
# kube-system   nodelocaldns-vbl7s                            1/1     Running   0             21h
# kube-system   nodelocaldns-w4sl8                            1/1     Running   0             21h
```
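It is also worth confirming that every node registered with the cluster and reports `Ready`:

```bash
# All three nodes should show STATUS Ready
kubectl get nodes -o wide
```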
Copy the `config` file from one of the `master` nodes to the `deployment-machine` for cluster management:
```bash
# In the deployment machine
mkdir -p ~/.kube
sftp $NODE_USER@192.168.1.10:/home/$NODE_USER/.kube/config ~/.kube/
sudo chown $NODE_USER:$NODE_USER ~/.kube/config
sudo chmod go-r ~/.kube/config
```
In some cases it is necessary to change the server IP in the `~/.kube/config` file:
```diff
- server: https://127.0.0.1:6443
+ server: https://192.168.1.10:6443
```
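The same change as a one-liner, assuming the file still points at `127.0.0.1`:

```bash
# Rewrite the API server address in place
sed -i 's|https://127.0.0.1:6443|https://192.168.1.10:6443|' ~/.kube/config
```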
NOTE: Make sure the deployment machine reaches the configured IP (in this case `192.168.1.10`) and port (in this case `6443`).
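A quick way to test that reachability from the `deployment-machine`:

```bash
# -z: scan only, -v: verbose; succeeds if the API server port is open
nc -zv 192.168.1.10 6443
```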
Now, from your `deployment-machine`, you should be able to run any `kubectl` or `helm` command:
```bash
kubectl version --short
# Client Version: v1.23.2
# Server Version: v1.23.1
helm version --short
# v3.7.2+g663a896
```
Enjoy!
If any of the `kubectl` or `helm` commands fail, you may have `swap` memory enabled on the machine from which you run the commands. Check and disable it:
```bash
free -m
#                total        used        free      shared  buff/cache   available
# Mem:            3944         251         732           0        2960        3437
# Swap:              0           0           0
sudo swapoff -a
# If this machine is also a cluster node, restart kubelet as well
sudo service kubelet restart
kubectl get pods -A
```
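Note that `swapoff -a` only lasts until the next reboot. To make it permanent, comment out the swap entries in `/etc/fstab`; a common sketch (review the file before and after):

```bash
# Comment out every fstab line that mounts swap
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
```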