This section walks through the process of creating a simple workload cluster. The deployment consists of two main steps:

- **Deploy the Management Cluster:** We will use ctlptl to quickly set up a Kubernetes cluster. In this cluster, we will install the OpenNebula Cluster API provider (CAPONE).
- **Deploy the Workload Cluster and Install Flannel:** Once the management cluster is running, we will create a workload cluster. To enable pod networking, we will install Flannel as the Container Network Interface (CNI). Flannel is a simple VXLAN overlay network that allows communication between pods across different nodes.

These steps are automated with several Makefile targets for your convenience.
> [!IMPORTANT]
> To use the ctlptl project, you need to have Docker configured on your machine.
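As an optional sanity check before continuing (a minimal sketch, assuming the `docker` and `ctlptl` CLIs are already installed and on your `PATH`):

```shell
# The Docker daemon must be reachable for ctlptl to work
docker info >/dev/null && echo "Docker OK"

# The ctlptl CLI must be available
ctlptl version
```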
Before proceeding, gather the following information:

- **OpenNebula API endpoint:** If you are using the front-end to create the cluster, you can set this to `127.0.0.1`.
- **OpenNebula credentials:** A user account and password with access to the necessary resources, including CAPONE disk images, templates, and virtual networks. In the examples below, we use `oneadmin`.
- **Resource names:** The names of the OpenNebula resources, which should match those used during installation.
- **Cluster node count:** The number of control plane and worker nodes in your cluster.
Create a file named `.env` in your working directory with the following information:

> [!IMPORTANT]
> Replace the placeholder values (`<>`) with your specific installation details.
```shell
CCM_IMG=ghcr.io/opennebula/cloud-provider-opennebula:latest
CLUSTER_NAME=one
ONE_XMLRPC=http://<OpenNebula Endpoint>:2633/RPC2
ONE_AUTH=oneadmin:<password>
MACHINE_TEMPLATE_NAME=capone131
ROUTER_TEMPLATE_NAME=capone131-vr
PUBLIC_NETWORK_NAME=public
PRIVATE_NETWORK_NAME=private
# If empty, OpenNebula assigns the IP (from the public network)
CONTROL_PLANE_HOST=
CONTROL_PLANE_MACHINE_COUNT=1
WORKER_MACHINE_COUNT=2
```
Below is a complete description of all the variables:
| Variable Name | Description |
|---|---|
| `CCM_IMG` | The container image for the OpenNebula Kubernetes Cloud Provider. |
| `CLUSTER_NAME` | Name of the cluster to be deployed. |
| `ONE_XMLRPC` | OpenNebula XML-RPC endpoint URL. |
| `ONE_AUTH` | Authentication credentials for OpenNebula (`user:password`). |
| `MACHINE_TEMPLATE_NAME` | Name of the OpenNebula template used for K8s nodes. |
| `ROUTER_TEMPLATE_NAME` | Name of the OpenNebula template used for the VNF. |
| `PUBLIC_NETWORK_NAME` | Name of the public network where external connectivity is available. |
| `PRIVATE_NETWORK_NAME` | Name of the private network for internal communication between nodes. |
| `CONTROL_PLANE_HOST` | IP address assigned to the control plane node in the public network. |
| `CONTROL_PLANE_MACHINE_COUNT` | Number of control plane nodes. |
| `WORKER_MACHINE_COUNT` | Number of worker nodes in the cluster. |
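Before creating anything, it can help to confirm that the templates and virtual networks referenced in your `.env` actually exist in OpenNebula. A quick check with the standard OpenNebula CLI (the names below are the defaults from the example `.env`; adjust them to your installation):

```shell
# The machine and virtual router templates should appear here
onetemplate list | grep capone131

# The public and private virtual networks should appear here
onevnet list | grep -E 'public|private'
```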
In the root directory of this repository, execute the following command:

```shell
make ctlptl-apply
```
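At this point the local management cluster should be running. As an optional check (assuming ctlptl has pointed your current kubectl context at the new cluster):

```shell
# The management cluster's API server should respond
kubectl cluster-info

# Its node(s) should report Ready
kubectl get nodes
```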
Next, run the following to initialize the management cluster:

```shell
make clusterctl-init-full
```
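Optionally, verify that the Cluster API core components and the CAPONE controllers came up in the management cluster (the exact namespaces and pod names depend on the provider versions installed by this target):

```shell
# All controller pods should eventually reach the Running state
kubectl get pods -A
```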
To generate the workload cluster, simply run:

```shell
make one-apply
```
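Provisioning takes a few minutes. You can follow the progress from the management cluster with the standard Cluster API tooling (a sketch, assuming the cluster name matches `CLUSTER_NAME=one` from the example `.env`):

```shell
# High-level view of the workload cluster and its machines
clusterctl describe cluster one

# Machines should eventually reach the Running phase
kubectl get clusters,machines -A
```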
Finally, configure Flannel in the workload cluster:

```shell
make one-flannel
```
First, let's verify the VMs created for the cluster. You should see two worker nodes (`one-md-0-rlbnk-*`), one control plane node (`one-8l7c6`) and one virtual router (`vr-one-cp-0`):
```text
$ onevm list
  ID USER     GROUP    NAME                 STAT  CPU     MEM HOST      IP          TIME
   4 oneadmin oneadmin one-md-0-rlbnk-74ngl runn    1      3G localhost 10.2.11.104 0d 00h02
   3 oneadmin oneadmin one-md-0-rlbnk-rtx2k runn    1      3G localhost 10.2.11.103 0d 00h02
   2 oneadmin oneadmin one-8l7c6            runn    1      3G localhost 10.2.11.102 0d 00h04
   1 oneadmin oneadmin vr-one-cp-0          runn    1    512M localhost 10.2.11.101 0d 00h04
```
> [!TIP]
> You can use the Sunstone graphical user interface to list and query these VMs.
You can now gather the public IP of the virtual router (`CONTROL_PLANE_HOST`) by inspecting the details of the `vr-one-cp-0` VM, and the private IP of the control plane by looking at the details of the `one-8l7c6` VM. In our example, these IPs are:

- Virtual Router (public IP): `172.20.0.200`
- Control Plane (private IP): `10.2.11.102`
To retrieve the public IP of the virtual router, run the following command (replace `1` with the appropriate VM ID):

```shell
$ onevm show 1
```
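The addresses appear in the NIC section of the command's output. As a hypothetical shortcut, you can filter the output for IP-related attributes (exact attribute names vary with the template and contextualization):

```shell
# Show only the lines of the VM details that mention an IP
onevm show 1 | grep -i ip
```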
As a final step, let's check the status of the workload cluster from a Kubernetes perspective. First, you need to grab the `kubeconfig` file. The command below uses the IPs obtained earlier; replace them with the ones from your setup:

```shell
$ install -d ~/.kube/
$ scp -J root@172.20.0.200 root@10.2.11.102:/etc/kubernetes/admin.conf ~/.kube/config
```
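Alternatively (a sketch: run it against the management cluster's kubectl context, and note that the resulting kubeconfig points at the control plane endpoint, which must be reachable from your machine), `clusterctl` can retrieve the workload cluster's kubeconfig for you:

```shell
# Fetch the kubeconfig of the workload cluster named "one"
clusterctl get kubeconfig one > ~/.kube/config
```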
Once the `kubeconfig` file is in place, you can check the status of the cluster by inspecting the pods and nodes:

```text
$ kubectl get nodes,pods -A
NAME                       STATUS   ROLES           AGE     VERSION
node/one-md-0-2mhsn-tbt7w  Ready    <none>          7m7s    v1.31.4
node/one-md-0-2mhsn-zz8mw  Ready    <none>          7m22s   v1.31.4
node/one-s8qck             Ready    control-plane   7m54s   v1.31.4

NAMESPACE      NAME                                    READY   STATUS    RESTARTS       AGE
kube-flannel   pod/kube-flannel-ds-fzx7q               1/1     Running   0              119s
kube-flannel   pod/kube-flannel-ds-h7vbq               1/1     Running   0              119s
kube-flannel   pod/kube-flannel-ds-jxb2t               1/1     Running   0              119s
kube-system    pod/cloud-controller-manager-lbkbm      1/1     Running   1 (7m9s ago)   7m47s
kube-system    pod/coredns-7c65d6cfc9-85k6x            1/1     Running   0              7m46s
kube-system    pod/coredns-7c65d6cfc9-nndtz            1/1     Running   0              7m46s
kube-system    pod/etcd-one-s8qck                      1/1     Running   0              7m53s
kube-system    pod/kube-apiserver-one-s8qck            1/1     Running   0              7m53s
kube-system    pod/kube-controller-manager-one-s8qck   1/1     Running   0              7m53s
kube-system    pod/kube-proxy-9xg5q                    1/1     Running   0              7m22s
kube-system    pod/kube-proxy-kzwxf                    1/1     Running   0              7m47s
kube-system    pod/kube-proxy-p4bwx                    1/1     Running   0              7m7s
kube-system    pod/kube-scheduler-one-s8qck            1/1     Running   0              7m53s
```
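As an optional smoke test (hypothetical; the deployment name and image are arbitrary), you can schedule a small workload and confirm that pods land on the workers and receive pod-network addresses from Flannel:

```shell
# Create a throwaway deployment and check where its pods are scheduled
kubectl create deployment smoke-test --image=nginx --replicas=2
kubectl get pods -o wide

# Clean up afterwards
kubectl delete deployment smoke-test
```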