Quick Cluster Deployment and Management with CAPONE (OpenNebula/cluster-api-provider-opennebula wiki)
This guide provides a streamlined approach to deploying and managing Kubernetes clusters on OpenNebula using CAPONE (Cluster API Provider for OpenNebula). CAPONE leverages the Kubernetes Cluster API (CAPI) to automate the provisioning of virtual machines (VMs), simplifying cluster lifecycle management.
With CAPONE, you can quickly spin up a Kubernetes cluster on OpenNebula, including:
- Virtual Network Functions (VNFs): A virtual router for external connectivity.
- Control Plane: Manages the cluster and runs the Kubernetes API.
- Worker Nodes: Run containerized workloads.
Before you begin, verify that your environment meets the following requirements:
- OpenNebula: Version ≥ 6.10 with OneGate enabled.
- Docker: Installed and running on your host.
- KinD: Installed for setting up the local management cluster.
- clusterctl: Installed for interacting with the Cluster API.
- kubectl: Installed to manage Kubernetes clusters.
- Helm: Installed to deploy the CAPONE Helm chart.
Additionally, ensure that your VM resource allocations meet these recommendations (assuming a host with 16 GB RAM and 8 vCPUs); a quick verification sketch follows the list:
- VNF VM: 512 MB memory, 1 vCPU.
- Kubernetes Nodes: 3 GB memory, 2 vCPUs each.
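Before continuing, you can quickly confirm that the tools listed above are installed and check the host's resources against the recommendations. A minimal sketch (plain shell, nothing CAPONE-specific):

```bash
# Confirm the CLI tooling required by this guide is present
for tool in docker kind clusterctl kubectl helm; do
  command -v "$tool" >/dev/null && echo "OK: $tool" || echo "MISSING: $tool"
done

# Check host memory and CPU count against the 16 GB RAM / 8 vCPU recommendation
free -h
nproc
```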
2.1 Install Docker
Follow these steps to install Docker on an Ubuntu-based system:
# Update package lists and install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
# Set up Docker’s official GPG key and keyring
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the Docker repository dynamically based on your Ubuntu version
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update package lists and install the latest Docker version
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Verify installation
docker --version
2.2 Install KinD, clusterctl and kubectl
Install KinD:
KinD (Kubernetes in Docker) makes it easy to run a local Kubernetes cluster using Docker container “nodes.”
# Get the latest version dynamically from GitHub
LATEST_KIND_VERSION=$(curl -s https://api.github.com/repos/kubernetes-sigs/kind/releases/latest | grep '"tag_name":' | cut -d '"' -f 4)
# Download and install KinD
curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${LATEST_KIND_VERSION}/kind-linux-amd64"
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Verify installation
kind version
Install clusterctl:
clusterctl is a CLI tool for managing the lifecycle of Cluster API components.
# Get the latest clusterctl version dynamically from GitHub
LATEST_CLUSTERCTL_VERSION=$(curl -s https://api.github.com/repos/kubernetes-sigs/cluster-api/releases/latest | grep '"tag_name":' | cut -d '"' -f 4)
# Download and install clusterctl
curl -L "https://github.com/kubernetes-sigs/cluster-api/releases/download/${LATEST_CLUSTERCTL_VERSION}/clusterctl-linux-amd64" -o clusterctl
sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
# Verify installation
clusterctl version
Install kubectl:
On Ubuntu/Debian, you can install kubectl using snap:
sudo snap install kubectl --classic
# Verify the installation
kubectl version --client
Before deploying CAPONE, a key pre-deployment configuration is required: setting up the network.
A. Create Two Private VNETs in OpenNebula
CAPONE uses two private VNETs:
- “Public” VNET (NAT-ed):
  - Bridge: onebr-public
  - IPv4 Range: 172.20.0.100 - 172.20.0.199
  - Ethernet addresses range: 16
  - Network Address / Mask: 172.20.0.0 / 255.255.255.0
  - Gateway: 172.20.0.1
  - Purpose: Provides NAT-ed access to the internet.
- Private VNET:
  - Bridge: onebr-private
  - IPv4 Range: 10.2.11.100 - 10.2.11.199
  - Network Address / Mask: 10.2.11.0 / 255.255.255.0
  - Purpose: Facilitates internal communication between VMs.
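The VNETs can be created through Sunstone or the CLI. As an illustrative sketch only (run on the OpenNebula frontend as oneadmin; the names, bridge, and address range come from the values above, while the VN_MAD driver is an assumption you may need to adapt), the "public" VNET could be created like this:

```bash
# Hypothetical network template built from the values listed above
cat > public.net <<'EOF'
NAME   = "public"
VN_MAD = "bridge"
BRIDGE = "onebr-public"
AR = [ TYPE = "IP4", IP = "172.20.0.100", SIZE = "100" ]
NETWORK_ADDRESS = "172.20.0.0"
NETWORK_MASK    = "255.255.255.0"
GATEWAY         = "172.20.0.1"
EOF
onevnet create public.net

# The private VNET (onebr-private, 10.2.11.100-10.2.11.199) can be created the same way.
```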
B. Set Up Network Bridges on the KVM Host
Before deploying CAPONE, ensure the required bridges exist on your KVM host.
There are two ways to do this:
- Create and Configure Bridges:
# Create the bridges
sudo ip link add onebr-public type bridge
sudo ip link add onebr-private type bridge
# Assign IP addresses to the bridges
sudo ip addr add 172.20.0.1/24 dev onebr-public
sudo ip addr add 10.2.11.1/24 dev onebr-private
# Bring the bridges up
sudo ip link set onebr-public up
sudo ip link set onebr-private up
- Persist Bridge Configuration with Netplan:
Edit /etc/netplan/50-cloud-init.yaml to include:
network:
  version: 2
  ethernets:
    ens2:
      dhcp4: true
  # Add this part
  bridges:
    onebr-public:
      interfaces: []
      addresses: [172.20.0.1/24]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: no
    onebr-private:
      interfaces: []
      addresses: [10.2.11.1/24]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: no
Then apply the configuration:
sudo netplan apply
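Whichever method you use, a quick sanity check that both bridges exist and carry the gateway addresses assigned above:

```bash
# List bridge interfaces and their state
ip -br link show type bridge

# Confirm the addresses on each bridge
ip -br addr show onebr-public
ip -br addr show onebr-private
```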
C. Enable NAT and Port Forwarding
To allow VMs on the public network to access the Internet:
- Enable IP Forwarding:
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
- Configure NAT with iptables (replace ens2 with your host's uplink interface if it differs):
sudo iptables -t nat -A POSTROUTING -s 172.20.0.0/24 -o ens2 -j MASQUERADE
- Persist iptables Rules:
sudo apt install iptables-persistent -y
sudo netfilter-persistent save
sudo netfilter-persistent reload
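To confirm that forwarding is active and the NAT rule is in place, a quick check:

```bash
# Should print: net.ipv4.ip_forward = 1
sudo sysctl net.ipv4.ip_forward

# The MASQUERADE rule for 172.20.0.0/24 should appear in the POSTROUTING chain
sudo iptables -t nat -L POSTROUTING -n -v
```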
D. Keep Bridges Permanent in OpenNebula
When a VM is deleted, OpenNebula may remove its (now empty) bridges. To preserve them:
- Navigate to /var/lib/one/remotes/etc/vnm and open OpenNebulaNetwork.conf.
- Set keep_empty_bridge to true.
- As the oneadmin user, run:
onehost sync -f
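For reference, the same change can be made non-interactively; this is only a sketch and assumes the default :keep_empty_bridge: key is present in OpenNebulaNetwork.conf:

```bash
# Run as oneadmin: flip keep_empty_bridge and push the change to the hosts
sed -i 's/^:keep_empty_bridge:.*/:keep_empty_bridge: true/' \
    /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf
onehost sync -f
```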
E. Fix OneGate Assigned IP
- Edit configuration files:
  - In /etc/one/oned.conf, set the OneGate endpoint IP to the public bridge IP: 172.20.0.1.
  - In /etc/one/onegate-server.conf, also set the host IP to the public bridge IP: 172.20.0.1.
- Restart services:
sudo systemctl restart opennebula
sudo systemctl restart opennebula-gate
- Verify OneGate Binding (the OneGate port is 5030):
ss -tlnp
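For example, to narrow the output down to the OneGate port (you should see the OneGate server listening on 5030):

```bash
sudo ss -tlnp | grep 5030
```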
Next, create a .env file with the variables CAPONE needs to reach OpenNebula and to size the cluster:
# .env file
CCM_IMG=ghcr.io/opennebula/cloud-provider-opennebula:latest
CLUSTER_NAME=one
ONE_XMLRPC=http://<OpenNebula Endpoint>:2633/RPC2
ONE_AUTH=oneadmin:<password>
PUBLIC_NETWORK_NAME=public
PRIVATE_NETWORK_NAME=private
# If empty, OpenNebula assigns the IP (from the public network)
CONTROL_PLANE_HOST=
CONTROL_PLANE_MACHINE_COUNT=1
WORKER_MACHINE_COUNT=2
Load and verify the environment variables:
set -a
source .env
set +a
env | grep -E 'CCM_IMG|CLUSTER_NAME|ONE_XMLRPC|ONE_AUTH|MACHINE_TEMPLATE_NAME|ROUTER_TEMPLATE_NAME|PUBLIC_NETWORK_NAME|PRIVATE_NETWORK_NAME'
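It is also worth confirming, on the OpenNebula frontend as oneadmin, that the networks referenced by PUBLIC_NETWORK_NAME and PRIVATE_NETWORK_NAME exist:

```bash
# Both VNETs created earlier should be listed
onevnet list | grep -E 'public|private'
```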
- Create a KinD Configuration File:
Create a file called kind-config.yaml with the following content:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- Deploy the KinD Cluster:
kind create cluster --name one --config kind-config.yaml
kubectl cluster-info --context kind-one
- Create the clusterctl-config.yaml:
providers:
  - name: opennebula
    type: InfrastructureProvider
    url: "https://github.com/OpenNebula/cluster-api-provider-opennebula/releases/download/latest/infrastructure-components.yaml"
- Initialize the management cluster using clusterctl with your configuration file:
clusterctl init --config=clusterctl-config.yaml --infrastructure=opennebula:v0.1.7
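Once clusterctl init finishes, the Cluster API controllers and the OpenNebula infrastructure provider should be running in the management cluster. A quick, hedged check (exact namespace and pod names may vary between releases):

```bash
# Core CAPI controllers plus the OpenNebula infrastructure provider
kubectl get pods -A | grep -E 'capi|capone|opennebula'
```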
If you wish to follow the clusterctl approach, please refer to the section Kubernetes Cluster Management with CAPONE. To simplify the deployment, the capone-kadm Helm chart is used in this example:
$ helm repo add capone https://opennebula.github.io/cluster-api-provider-opennebula/charts/
$ helm repo update capone
$ helm pull capone/capone-kadm
$ helm upgrade --install "$CLUSTER_NAME" capone/capone-kadm \
--set ONE_XMLRPC="$ONE_XMLRPC" \
--set ONE_AUTH="$ONE_AUTH" \
--set CLUSTER_NAME="$CLUSTER_NAME" \
--set PUBLIC_NETWORK_NAME="$PUBLIC_NETWORK_NAME" \
--set PRIVATE_NETWORK_NAME="$PRIVATE_NETWORK_NAME" \
--set CONTROL_PLANE_MACHINE_COUNT="$CONTROL_PLANE_MACHINE_COUNT" \
  --set WORKER_MACHINE_COUNT="$WORKER_MACHINE_COUNT"
Obtain the Workload Cluster kubeconfig:
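Note that the kubeconfig only becomes available once the control plane machine has been provisioned. You can follow provisioning from the management cluster first, for example:

```bash
# Condensed status tree of the cluster and its machines
clusterctl describe cluster "$CLUSTER_NAME"

# Or inspect the CAPI machine objects directly
kubectl get machines -A
```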
clusterctl get kubeconfig ${CLUSTER_NAME} > kubeconfig-${CLUSTER_NAME}.yaml
Flannel is required to enable pod-to-pod communication across nodes, since Kubernetes does not ship a pod network (CNI plugin) by default:
kubectl --kubeconfig=kubeconfig-${CLUSTER_NAME}.yaml apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Check the status of your nodes:
kubectl --kubeconfig=kubeconfig-${CLUSTER_NAME}.yaml get nodes
You should see a list similar to:
NAME STATUS ROLES AGE VERSION
one-g89nf Ready control-plane 5m v1.31.4
one-md-0-cmq95-djz7d Ready <none> 4m27s v1.31.4
one-md-0-cmq95-kjrrm Ready <none> 4m12s v1.31.4
As the oneadmin user, you can execute commands directly on the master (control plane) node by jumping through the virtual router:
ssh -A -J root@<Public-IP-VR> root@<Private-IP-CP>
Then, verify the nodes and pods:
kubectl get nodes
A CAPONE-deployed cluster typically consists of:
- Virtual Router (VNF): Provides external connectivity.
- Control Plane (CP): Runs Kubernetes management services.
- Worker Nodes: Execute your containerized applications.
An example mapping might look like:
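(Illustrative only; the names and addressing follow the same pattern as the load-balancer example later in this guide.)

| Name | Role | IP Address |
|---|---|---|
| vr-one-cp-0 | Virtual Router (VNF) | 172.20.0.200 |
| one-* | Control Plane | 10.2.11.x |
| one-md-0-* | Worker Nodes | 10.2.11.x |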
Kubernetes Cluster Management with CAPONE
CAPONE provides an easy way to manage Kubernetes clusters by clearly distinguishing between:
- Management Clusters: Local KinD clusters responsible for deploying and controlling workload clusters.
- Workload Clusters: Kubernetes clusters deployed on OpenNebula infrastructure.
The Management Cluster (KinD) controls the lifecycle of your Kubernetes workload clusters.
To see all KinD management clusters:
kind get clusters
To get more details about a specific cluster (e.g., one):
kubectl cluster-info --context kind-one
This shows the API server and other endpoints of the management cluster.
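To list every context available locally (KinD registers one context per cluster, named kind-<name>):

```bash
kubectl config get-contexts
```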
To add another management cluster (e.g., named mgmt-2):
- Create a KinD configuration file (management-cluster.yaml):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- Deploy the new KinD cluster:
kind create cluster --name mgmt-2 --config management-cluster.yaml
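A freshly created management cluster has no Cluster API components yet. Before it can manage workload clusters, initialize it the same way as the first one, reusing the clusterctl-config.yaml from above:

```bash
# Point kubectl at the new management cluster, then install the providers
kubectl config use-context kind-mgmt-2
clusterctl init --config=clusterctl-config.yaml --infrastructure=opennebula:v0.1.7
```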
To manage a different KinD management cluster (e.g., mgmt-2):
kubectl config use-context kind-mgmt-2
To remove an existing KinD management cluster (e.g., mgmt-2):
kind delete cluster --name mgmt-2
Workload Clusters are Kubernetes clusters deployed on OpenNebula. Here’s how to manage them:
To list all workload clusters managed by your current KinD Management Cluster:
kubectl get clusters --all-namespaces
To create a new workload cluster (e.g., new-workload):
- Update the .env file with the new cluster name:
CLUSTER_NAME=new-workload
# Update other variables if necessary
- Load environment variables:
set -a
source .env
set +a
- Deploy the workload cluster:
$ helm upgrade --install "$CLUSTER_NAME" capone/capone-kadm \
  --set ONE_XMLRPC="$ONE_XMLRPC" \
  --set ONE_AUTH="$ONE_AUTH" \
  --set CLUSTER_NAME="$CLUSTER_NAME" \
  --set PUBLIC_NETWORK_NAME="$PUBLIC_NETWORK_NAME" \
  --set PRIVATE_NETWORK_NAME="$PRIVATE_NETWORK_NAME" \
  --set CONTROL_PLANE_MACHINE_COUNT="$CONTROL_PLANE_MACHINE_COUNT" \
  --set WORKER_MACHINE_COUNT="$WORKER_MACHINE_COUNT"
- Retrieve the kubeconfig file:
clusterctl get kubeconfig ${CLUSTER_NAME} > kubeconfig-${CLUSTER_NAME}.yaml
Flannel is required for pod-to-pod communication. Deploy it with:
kubectl --kubeconfig=kubeconfig-${CLUSTER_NAME}.yaml apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Ensure all nodes are up and running:
kubectl --kubeconfig=kubeconfig-${CLUSTER_NAME}.yaml get nodes
To scale up worker nodes (e.g., set to 3 replicas):
kubectl scale machinedeployment ${CLUSTER_NAME}-md-0 --replicas=3
Check the updated scaling status:
kubectl get machines -n ${CLUSTER_NAME}
To increase control plane nodes (e.g., set to 3 replicas):
kubectl scale kubeadmcontrolplane ${CLUSTER_NAME} --replicas=3
Check the updated scaling status:
kubectl get machines -n ${CLUSTER_NAME}
To completely remove a workload cluster (e.g., new-workload):
kubectl delete cluster ${CLUSTER_NAME}
Verify that the cluster is deleted:
kubectl get clusters --all-namespaces
CAPONE can deploy a virtual router as a Load Balancer (LB) to expose Kubernetes services externally.
In an LB-enabled setup, you typically have:
- Control Plane Virtual Router
- Load Balancer Virtual Router: With a dedicated floating IP.
- Kubernetes Control Plane and Worker Nodes: Running on the private network.
For example, your VMs may be mapped as:
| ID | Name | Role | IP Address |
|---|---|---|---|
| 5 | vr-one-lb-0 | Virtual Router (LB) | 172.20.0.201 |
| 4 | one-md-0-* | Worker Nodes | 10.2.11.x |
| 3 | one- | Control Plane | 10.2.11.x |
| 1 | vr-one-cp-0 | Virtual Router (CP) | 172.20.0.200 |
Follow these steps to expose an Nginx deployment via a LoadBalancer service.
Create a file named nginx.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
  type: LoadBalancer
Apply the application:
kubectl apply -f nginx.yaml
Check that the deployment is running:
kubectl get deployments
Then retrieve the external IP of the service:
kubectl get services
The nginx-service should now display an EXTERNAL-IP (e.g., 172.20.0.201). Test access to ensure your service is reachable.
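For example, from a machine that can reach the public network (substitute the EXTERNAL-IP reported above):

```bash
# Should return the default Nginx welcome page
curl http://172.20.0.201
```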
By following these steps, you can deploy, scale, and manage your Kubernetes clusters on OpenNebula using CAPONE. This modular approach allows for flexibility, easy upgrades, and integration with load balancing for production workloads.