Deploy Kubernetes cluster using Kubespray on a single machine
caprivm ([email protected])
Updated: April 16, 2024
This page explains how to deploy a Kubernetes cluster on a single server using Kubespray. As the official Kubernetes documentation says, Kubespray is another way to deploy a cluster in a production environment. Broadly speaking, Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. You can reach out to the community on the Kubespray Slack channel. It is important to note that, in this case, the same server acts as both control plane (master) and worker, which is why this is a single-server deployment.
> ❗ Everything executed in this tutorial was done as the `root` user. This is not mandatory and may even be bad practice in certain scenarios. However, the purpose of this guide is to show you how to install the cluster rather than how to manage the Linux servers where it will be installed.
Before executing the steps in this guide, make sure the server where you will install the cluster has the cluster management tools installed:
> 💡 For the installation of Python and `pip`, it is suggested to use pyenv, which is a Python version manager. Don't forget that this installation will be done using the `root` user.
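As a reference, a minimal pyenv-based setup might look like the sketch below. This is only an illustration: it assumes internet access and uses the standard pyenv installer script, with the Python version taken from the requirements table that follows.

```bash
# Build dependencies for compiling Python on Ubuntu
sudo apt install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev libffi-dev liblzma-dev curl

# Install pyenv (official installer script)
curl -fsSL https://pyenv.run | bash

# Make pyenv available in the current shell (add these lines to ~/.bashrc to persist)
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"

# Install and activate the Python version used in this tutorial
pyenv install 3.11.6
pyenv global 3.11.6
python3 --version  # Python 3.11.6
```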
Consider the following minimum requirements before installing the Kubernetes cluster. These are the minimum requirements with which the tutorial described here was tested and carried out.
| Feature | Value |
|---|---|
| OS Used | Ubuntu 22.04 LTS |
| vCPU | 8 |
| RAM (GB) | 16 |
| Disk (GB) | 140 |
| Home user | `ubuntu` |
| Number of NICs | 1 (`ens160`) |
| Internet Access | Yes |
| Kubespray Version | `v2.24.1` |
| Python Version | `3.11.6` |
| pip Version | `23.2.1` |
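To quickly confirm the server meets these minimums, you can check CPU, memory, and disk from the shell. This is a simple sanity check, not part of the Kubespray workflow:

```bash
nproc                                         # vCPUs, expect >= 8
free -g | awk '/^Mem:/ {print $2 " GB RAM"}'  # expect >= 16 GB
df -h / | tail -1                             # root filesystem size and free space
lsb_release -ds                               # OS release, expect Ubuntu 22.04 LTS
```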
The variables used in this tutorial are summarized in the following table. This list of variables allows the step-by-step guide to be written generically, referencing the same folder, file, branch, metric, or other value across different sections of the document.
| Variable | Value |
|---|---|
| `NODE_USER` | `root` |
| `NODE_HOSTNAME_PREFIX` | `k8s-node` |
| `NODE_IP` | `10.221.242.163` |
| `KUBESPRAY_TAG` | `v2.24.1` |
| `CLUSTER_NAME` | `k8s-cluster` |
> ⚠️ Replace or configure each of the variables according to your environment.
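For convenience, you can export these variables in your shell session before running the commands below. The values here are the ones from the table; adjust them to your environment:

```bash
export NODE_USER=root
export NODE_HOSTNAME_PREFIX=k8s-node
export NODE_IP=10.221.242.163
export KUBESPRAY_TAG=v2.24.1
export CLUSTER_NAME=k8s-cluster
```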
This section shows how to deploy a Kubernetes cluster using open source tools such as Kubespray.
On each of the nodes prepared for the Kubernetes deployment, update the packages, install basic tools, and open the sudoers file:

```bash
sudo apt update -y
sudo apt upgrade -y
sudo apt install vim git -y
sudo visudo
```
Add the following line:

```diff
# User privilege specification
root ALL=(ALL:ALL) ALL
+ $NODE_USER ALL=(ALL) NOPASSWD:ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
```
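Before moving on, it is worth confirming the file parses correctly (`visudo` validates on save) and that the user received the expected privileges. A quick check, assuming `NODE_USER` is exported as above:

```bash
# Validate sudoers syntax explicitly
sudo visudo -c
# List the sudo privileges granted to the user
sudo -l -U $NODE_USER
```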
Create a key pair for passwordless access to the server:
```bash
ssh-keygen
# Generating public/private rsa key pair.
# Enter file in which to save the key (/root/.ssh/id_rsa):
# Enter passphrase (empty for no passphrase):
# Enter same passphrase again:
# Your identification has been saved in /root/.ssh/id_rsa.
# Your public key has been saved in /root/.ssh/id_rsa.pub.
# The key fingerprint is:
# SHA256:t0qr5YF1pg9GqE2MvwVkjzamiNYUIvLUsc/KX47znis root@kubernetes
# The key's randomart image is:
# +---[RSA 2048]----+
# | .               |
# |  . o            |
# |o...o o          |
# |oo. .B +         |
# | ... % S +       |
# | .oo X * = .     |
# |....= + X .      |
# |. .E@ B          |
# |   =*X..         |
# +----[SHA256]-----+

ssh-copy-id -i ~/.ssh/id_rsa $NODE_USER@$NODE_IP
# /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
# /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
# /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
# [email protected]'s password:
# Number of key(s) added: 1
#
# Now try logging into the machine, with: "ssh '[email protected]'"
# and check to make sure that only the key(s) you wanted were added.
```
> 📌 If you are on your `localhost` and the command `ssh-copy-id -i ~/.ssh/id_rsa $NODE_USER@localhost` results in a `$NODE_USER@localhost: Permission denied (publickey)` error, you can get around it by running `cat ~/.ssh/id_rsa.pub >> /home/$NODE_USER/.ssh/authorized_keys` and continuing with the validation. Don't forget to confirm that the target doesn't need a password to access the `root` session on the server.
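To confirm passwordless access works before moving on, a non-interactive SSH check is useful (`BatchMode=yes` makes SSH fail instead of prompting for a password):

```bash
ssh -o BatchMode=yes $NODE_USER@$NODE_IP hostname
# Expected: prints the server hostname with no password prompt
```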
Clone and configure the Kubespray repository.
```bash
# Clone Kubespray repository
cd && git clone https://github.com/kubernetes-sigs/kubespray.git ~/kubespray
cd ~/kubespray && git checkout tags/$KUBESPRAY_TAG -b $KUBESPRAY_TAG
# Create a Virtual Environment for Python
python3 -m venv venv
source venv/bin/activate
# Install dependencies from requirements.txt in the Virtual Environment
pip3 install -r requirements.txt
# Copy inventory/sample as inventory/$CLUSTER_NAME
cp -rfp inventory/sample inventory/$CLUSTER_NAME
# Declare IPs of the Kubernetes nodes. In this case, just a single IP.
declare -a IPS=($NODE_IP)
# Update the ansible inventory with the variables related to the nodes
HOST_PREFIX=$NODE_HOSTNAME_PREFIX CONFIG_FILE=inventory/$CLUSTER_NAME/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```
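Before launching the deployment, you can verify that Ansible can reach the node defined in the generated inventory. This sanity check uses Ansible's built-in `ping` module and assumes you are still in `~/kubespray` with the virtual environment active:

```bash
ansible -i inventory/$CLUSTER_NAME/hosts.yml all -m ping
# Expected: k8s-node1 | SUCCESS => { ... "ping": "pong" ... }
```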
Now, to deploy a full cluster, some variables may need to be modified based on the environment. The configuration file is usually passed as an external variables file to the `ansible-playbook` command used for cluster deployment. Create a `deploy_ansible_vars.yaml` file:
```bash
vim ~/kubespray/inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml
```

Add the following lines:
```yaml
# Ansible variables
ansible_user: $NODE_USER
bootstrap_os: ubuntu  # OS of the nodes (Ubuntu 22.04 in this tutorial), not the user
# App variables
helm_enabled: true
kubevirt_enabled: false
local_volume_provisioner_enabled: false
# Networking configuration
kube_network_plugin: calico
kube_network_plugin_multus: false
multus_version: stable
kube_pods_subnet: 10.233.64.0/18
kube_apiserver_port: 6443
kube_network_node_prefix: 24
kube_service_addresses: 10.233.0.0/18
kube_proxy_mode: ipvs
kube_apiserver_node_port_range: 30000-32767
# Kubernetes configuration
kubeadm_control_plane: true
etcd_deployment_type: kubeadm
kube_config_dir: /etc/kubernetes
kube_version: v1.28.6
dashboard_enabled: false
# cgroup configuration
container_manager: docker
docker_cgroup_driver: systemd
kubelet_cgroup_driver: systemd
# DNS configuration
dns_mode: coredns
resolvconf_mode: host_resolvconf
dns_domain: cluster.local
enable_nodelocaldns: true
nameservers:
  - 8.8.8.8
  - 8.8.4.4
upstream_dns_servers:
  - 8.8.8.8
  - 8.8.4.4
# Swap configuration
disable_swap: true
```
An example of all the variables you can use to deploy a cluster can be found at `~/kubespray/inventory/$CLUSTER_NAME/group_vars/all/all.yml`.
> 💡 If you have your Kubernetes nodes behind a proxy, you can add it as variables in the `deploy_ansible_vars.yaml` file. Below is an example of what the variables would look like if a proxy existed in the environment used as reference.
>
> ```yaml
> # HTTP Proxy
> http_proxy: "http://my.proxy.com:80"
> https_proxy: "http://my.proxy.com:80"
> no_proxy: "10.221.242.163,10.233.64.0/18,10.233.0.0/18,10.221.242.0/24,127.0.0.1,localhost,.cluster.local"
> ```

> ⚠️ Be careful that the subnets defined for `pods` and `services` do not overlap with the subnet of the Kubernetes nodes; overlapping subnets can cause packet routing problems.
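A quick way to verify there is no overlap is to compare the node's NIC address with the pod and service CIDRs defined above (`ens160` is the interface from the requirements table; adjust it to your environment):

```bash
# Show the node's IPv4 address and prefix on the primary NIC
ip -4 -o addr show ens160 | awk '{print $4}'
# e.g. 10.221.242.163/24 -- must not overlap 10.233.64.0/18 (pods) or 10.233.0.0/18 (services)
```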
Just to verify, you can check what the configuration file looks like using `cat ~/kubespray/inventory/$CLUSTER_NAME/hosts.yml`:
```yaml
all:
  hosts:
    k8s-node1:
      ansible_host: $NODE_IP
      ip: $NODE_IP
      access_ip: $NODE_IP
  children:
    kube_control_plane:
      hosts:
        k8s-node1:
    kube_node:
      hosts:
        k8s-node1:
    etcd:
      hosts:
        k8s-node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
```
Now deploy the Kubernetes cluster using the following command. Remember to have your Python virtual environment (`venv`), created in the previous steps, activated.
```bash
cd ~/kubespray
ansible-playbook --ask-become-pass -b -i inventory/$CLUSTER_NAME/hosts.yml --become-user=$NODE_USER -e @inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml cluster.yml
```
> 💡 Including the `--ask-become-pass` flag prompts the user for the password associated with `--become-user` on each node in the Kubernetes cluster.
Wait until the Kubernetes cluster is deployed.
After the deployment of the cluster, configure the `kubeconfig` file on your server.
```bash
# In the server:
mkdir -p ~/.kube
sudo cat /root/.kube/config > ~/.kube/config
sudo chown $NODE_USER:$NODE_USER ~/.kube/config
sudo chmod go-r ~/.kube/config
```
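At this point the cluster should be reachable through `kubectl`; a first check is to list the nodes. The output below is illustrative for this single-node setup:

```bash
kubectl get nodes
# NAME        STATUS   ROLES           AGE    VERSION
# k8s-node1   Ready    control-plane   6d4h   v1.28.6
```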
Verify the status of the cluster using `kubectl` commands:
```bash
kubectl get pods -n kube-system
# NAME                                          READY   STATUS    RESTARTS      AGE
# calico-kube-controllers-bd5fc6b6-vlcxs        1/1     Running   1 (6d4h ago)  6d4h
# calico-node-mlb6h                             1/1     Running   1 (6d4h ago)  6d4h
# coredns-5c5b9c5cb-s8jhh                       1/1     Running   1 (6d4h ago)  6d4h
# dns-autoscaler-7874cf6bcf-7thct               1/1     Running   1 (6d4h ago)  6d4h
# etcd-k8s-node1                                1/1     Running   1 (6d4h ago)  6d4h
# kube-apiserver-k8s-node1                      1/1     Running   2 (6d4h ago)  6d4h
# kube-controller-manager-k8s-node1             1/1     Running   2 (6d4h ago)  6d4h
# kube-proxy-2tkcb                              1/1     Running   1 (6d4h ago)  6d4h
# kube-scheduler-k8s-node1                      1/1     Running   2 (6d4h ago)  6d4h
# kubernetes-metrics-scraper-5dc755864d-qs8kp   1/1     Running   1 (6d4h ago)  6d4h
# nodelocaldns-kpl9d                            1/1     Running   1 (6d4h ago)  6d4h
```
Finally, you should be able to see the Kubernetes version of both the client and the server using:

```bash
# Note: the --short flag was removed in recent kubectl releases;
# plain `kubectl version` now prints the short form by default.
kubectl version
# Client Version: v1.29.2
# Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
# Server Version: v1.28.6
```
Enjoy Kubernetes! 🚀
This section covers some known issues with the tool that have been encountered before.
When you execute the following commands, it is possible that you will face this issue: `Missing sudo password`.

```bash
# Commands that you should have executed
cd ~/kubespray
ansible-playbook --ask-become-pass -b -i inventory/$CLUSTER_NAME/hosts.yml --become-user $NODE_USER -e @inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml cluster.yml
```
If so, install the `sshpass` package and retry the `ansible-playbook` command with the `-kK` flags.

```bash
# Solution
sudo apt update
sudo apt install sshpass
ansible-playbook --ask-become-pass -b -i inventory/$CLUSTER_NAME/hosts.yml -kK --become-user $NODE_USER -e @inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml cluster.yml
```
If something goes wrong during the installation, the result is not as expected, or the `pods` of the `kube-system` namespace never reach `Running`, you can reset the cluster installation using the following command. Remember to have your Python virtual environment (`venv`), created in the previous steps, activated.

```bash
cd ~/kubespray
ansible-playbook --ask-become-pass -b -i inventory/$CLUSTER_NAME/hosts.yml --become-user=$NODE_USER -e @inventory/$CLUSTER_NAME/deploy_ansible_vars.yaml reset.yml
```
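After a reset, it can also help to remove the stale `kubeconfig` copy before redeploying, so `kubectl` does not point at credentials from the old cluster. This is a small cleanup step, assuming the paths used earlier in this guide:

```bash
# Remove the kubeconfig copied from the previous deployment
rm -f ~/.kube/config
# Then re-run the cluster.yml playbook as shown above
```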