Cluster Management Operations
This guide provides detailed instructions for managing Kubernetes clusters using CAPONE. The goal is to offer a clear, step-by-step approach for:
- Initializing a management cluster.
- Creating and managing workload clusters.
- Deleting workload clusters.
It is assumed that the management cluster is already deployed, with the necessary appliances downloaded and networking configured. If you need to set up your local environment from scratch, please refer to the Installation and Requirements section for detailed instructions.
The management cluster can be deployed using different methods, including OneKE or alternatives such as Kind. All operations require access to the cluster, which in practice means having the kubeconfig file for connecting to the management cluster's Kubernetes API.
If you choose to deploy the management cluster using OneKE, please refer to the Operating OneKE guide for detailed instructions on configuring access to the cluster.
Alternatively, you can deploy the management cluster using Kind. To do so, run the following command from the root directory of this repository:
make ctlptl-apply
This command will create the management cluster and provide you with access to it.
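Whichever method you choose, it is worth confirming that kubectl is pointed at the management cluster before proceeding, for example:
$ kubectl cluster-info
$ kubectl get nodes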
To initialize the management cluster, run the following command:
$ clusterctl init --config=clusterctl-config.yaml --infrastructure=opennebula:v0.1.0
The system will proceed with the installation process:
Fetching providers
Installing cert-manager version="v1.16.1"
Waiting for cert-manager to be available...
Installing provider="cluster-api" version="v1.9.4" targetNamespace="capi-system"
Installing provider="bootstrap-kubeadm" version="v1.9.4" targetNamespace="capi-kubeadm-bootstrap-system"
Installing provider="control-plane-kubeadm" version="v1.9.4" targetNamespace="capi-kubeadm-control-plane-system"
Installing provider="infrastructure-opennebula" version="v0.1.0" targetNamespace="capone-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
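The --config flag points clusterctl at a configuration file that registers the CAPONE provider. A minimal sketch of what clusterctl-config.yaml might contain is shown below; the release URL is an assumption, so verify it against the repository's releases page:
# clusterctl-config.yaml (sketch; confirm the URL against the actual release assets)
providers:
  - name: "opennebula"
    type: "InfrastructureProvider"
    url: "https://github.com/OpenNebula/cluster-api-provider-opennebula/releases/v0.1.0/infrastructure-components.yaml"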
Once the management cluster has been successfully initialized, you can proceed with creating your first workload cluster. To begin, you need to set the environment variables for the OpenNebula cluster configuration.
For detailed information on each variable, please refer to the Cluster Definition File Guide.
export CCM_IMG=ghcr.io/opennebula/cloud-provider-opennebula:latest
export CLUSTER_NAME=one
export ONE_XMLRPC=http://<OpenNebula cluster endpoint>:2633/RPC2
export ONE_AUTH=oneadmin:<password>
export CONTROL_PLANE_HOST=
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=2
export MACHINE_TEMPLATE_NAME=capone131
export ROUTER_TEMPLATE_NAME=capone131-vr
export PUBLIC_NETWORK_NAME=public
export PRIVATE_NETWORK_NAME=private
Next, generate the cluster configuration and apply it using kubectl:
clusterctl generate cluster one --infrastructure=opennebula --config=clusterctl-config.yaml | kubectl apply -f -
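If you prefer to inspect the generated manifests before applying them, write them to a file first and apply it in a second step:
clusterctl generate cluster one --infrastructure=opennebula --config=clusterctl-config.yaml > one.yaml
kubectl apply -f one.yaml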
Retrieve the kubeconfig for the newly created cluster:
clusterctl get kubeconfig one > kubeconfig.yaml
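Provisioning the control plane and worker machines takes a few minutes. From the management cluster, you can follow the progress with:
$ kubectl get machines
$ clusterctl describe cluster one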
Optionally, you may choose to add a Container Network Interface (CNI) to your workload cluster. For example, to deploy Flannel:
kubectl --kubeconfig <(clusterctl get kubeconfig one) apply -f test/e2e/data/cni/kube-flannel.yml
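Until a CNI is installed, the workload cluster's nodes report a NotReady status. Assuming a recent kube-flannel manifest, which deploys into the kube-flannel namespace, you can confirm the Flannel pods are running with:
kubectl --kubeconfig=kubeconfig.yaml get pods -n kube-flannel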
Finally, verify the nodes in the cluster:
$ kubectl --kubeconfig=kubeconfig.yaml get node
NAME                   STATUS   ROLES           AGE     VERSION
one-g89nf              Ready    control-plane   5m      v1.31.4
one-md-0-cmq95-djz7d   Ready    <none>          4m27s   v1.31.4
one-md-0-cmq95-kjrrm   Ready    <none>          4m12s   v1.31.4
First, check the status of the MachineDeployment. At this point, the one-md-0 machine deployment has 2 replicas, both of which are running and up to date.
$ kubectl get md
NAME       CLUSTER   REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE    VERSION
one-md-0   one       2          2       2         0             Running   8m4s   v1.31.4
To add a new node to the cluster, scale the deployment by updating the replica count. Execute the following command to scale the deployment from 2 to 3 replicas:
$ kubectl patch md one-md-0 -n default --type merge -p '{"spec":{"replicas": 3}}'
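Because MachineDeployments implement the scale subresource, the same change can also be made with kubectl scale:
kubectl scale machinedeployment one-md-0 -n default --replicas=3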
After applying the patch, verify the updated machine deployment:
$ kubectl get md
NAME       CLUSTER   REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE   VERSION
one-md-0   one       3          3       3         0             Running   13m   v1.31.4
Lastly, list the virtual machines to confirm that the new node has been successfully provisioned and integrated into the cluster:
$ onevm list
  ID USER     GROUP    NAME                 STAT  CPU     MEM HOST       TIME
   5 oneadmin oneadmin one-md-0-lcbxd-ghfcq runn    1      3G localhost  0d 00h00
   4 oneadmin oneadmin one-md-0-lcbxd-c6kht runn    1      3G localhost  0d 00h11
   3 oneadmin oneadmin one-md-0-lcbxd-p5m7q runn    1      3G localhost  0d 00h11
   2 oneadmin oneadmin one-qlbtz            runn    1      3G localhost  0d 00h13
   1 oneadmin oneadmin vr-one-cp-0          runn    1    512M localhost  0d 00h13
Note: The most recent node, one-md-0-lcbxd-ghfcq, has just been created, while the other nodes, such as one-md-0-lcbxd-c6kht and one-md-0-lcbxd-p5m7q, were provisioned earlier. This confirms the successful addition of the new node.
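Scaling down works the same way: patch the replica count back to 2, and Cluster API will drain and remove one of the worker machines.
$ kubectl patch md one-md-0 -n default --type merge -p '{"spec":{"replicas": 2}}'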
Follow the steps below to safely remove a workload cluster:
- Verify Cluster Status and Resources
Begin by verifying the status and resources of the cluster. This will provide a detailed description of the cluster's components and their current state.
$ clusterctl describe cluster one
NAME                                                 READY  SEVERITY  REASON  SINCE
Cluster/one                                          True                     36m
├─ClusterInfrastructure - ONECluster/one
├─ControlPlane - KubeadmControlPlane/one             True                     36m
│ └─Machine/one-qlbtz                                True                     36m
│   └─MachineInfrastructure - ONEMachine/one-qlbtz
└─Workers
  └─MachineDeployment/one-md-0                       True                     24m
    └─3 Machines...                                  True                     36m
- Delete the Cluster
To delete the cluster, execute the following command. Deletion is asynchronous; see the note after these steps for how to watch the teardown:
kubectl delete cluster one
- Confirm Cluster Deletion
After the deletion process is complete, confirm that the cluster has been successfully removed by attempting to describe it once more. The expected output should indicate that the cluster no longer exists, and the VMs will have been deleted as well.
$ clusterctl describe cluster one
Error: clusters.cluster.x-k8s.io "one" not found
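As noted in the deletion step, teardown happens in the background: Cluster API removes the Machines and their underlying VMs before the Cluster object itself disappears. While it is in progress, you can watch both sides with:
$ kubectl get machines
$ onevm list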