This quick start guide provides instructions for using the CAPI appliance, a Rancher integration that enables users to manage OpenNebula CAPI RKE2 clusters directly from the Rancher web interface.
- Download the Appliance: Retrieve the CAPI appliance from the OpenNebula Marketplace using the following command:

  ```shell
  $ onemarketapp export 'Service Capi' Capi --datastore default
  ```

- (Optional) Configure the Capi VM Template: Depending on your specific requirements, you may need to modify the VM template to adjust resources such as `vCPU` or `MEMORY`.

- Instantiate the Template: For this guide, the template will be launched using the default user inputs, so no additional parameters need to be specified. Ensure that the network provides Internet access to the VM.

  ```shell
  $ onetemplate instantiate Capi --nic VNET_ID
  ```
After instantiating the CAPI appliance with the default configuration (without specifying a custom hostname or user inputs), the Rancher interface can be accessed by navigating to `https://<VM-IP>.sslip.io` in a web browser. Since no password was set during launch, the default credentials are:
- Username: admin
- Password: capi1234
Once logged in, the interface will reflect the successful deployment by displaying the K3s management cluster as active and OpenNebula listed as a recognized infrastructure provider. This last point can be verified in the Cluster Management section under CAPI and Providers, where a list of installed providers is shown.
If you prefer to monitor the process manually, you can access the VM using `onevm ssh <vmid>` and run `kubectl get pods -A` to observe the bootstrapping progress. Occasionally, a bug related to the installation of Rancher Turtles (specifically with the `helm-install-rancher-turtles` pod) may cause the installation to hang. In such cases, it is recommended to restart the process.
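If you want to follow the bootstrap from a terminal rather than the UI, a minimal monitoring sketch looks like this (the `grep` filter is only an approximation of the pod name mentioned above):

```shell
# Open a shell inside the appliance VM from the OpenNebula frontend
$ onevm ssh <vmid>

# Watch the bootstrap progress of every pod in the management cluster
$ kubectl get pods -A --watch

# Check the Rancher Turtles installation pod specifically
$ kubectl get pods -A | grep -i turtles
```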
Note: The complete configuration process can take between 6 and 8 minutes with the default resources.
To deploy an OpenNebula RKE2 cluster, several approaches are available. One option is to manually declare the workload cluster using the Import YAML functionality within the Cluster Management interface.
Alternatively, and as chosen for this guide, the recommended method is to use the Helm charts provided by OpenNebula for both kubeadm and RKE2. In this case, the RKE2 option will be used. To locate the appropriate chart, navigate to the Charts section of the management cluster and search for `capone`. Two charts will be displayed; select `capone-rke2` for this deployment.
Once selected, click Install. In the next step, you can specify the namespace where the resources will be created, as well as an optional name for the cluster. If no name is provided, one will be autogenerated based on the Helm release name. In this example, the default namespace is used.
Finally, click Next and scroll to the bottom of the YAML configuration that is displayed. In this section, you only need to edit the parameters necessary to adapt the deployment to your environment. Note that there is no need to manually import the appliances related to CAPONE; the only requirement is that the public and private networks specified in the cluster definition already exist. For additional information about the available parameters, please refer to the CAPONE repository's wiki.
A copy of the values used in this example is shown below:
```yaml
CONTROL_PLANE_HOST: null
CONTROL_PLANE_MACHINE_COUNT: 1
KUBERNETES_VERSION: v1.31.4
MASTER_TEMPLATE_NAME: '{{ .Release.Name }}-master'
ONE_AUTH: oneadmin:opennebula
ONE_XMLRPC: http://172.20.0.1:2633/RPC2
PRIVATE_NETWORK_NAME: private
PUBLIC_NETWORK_NAME: service
ROUTER_TEMPLATE_NAME: '{{ .Release.Name }}-router'
WORKER_MACHINE_COUNT: 1
WORKER_TEMPLATE_NAME: '{{ .Release.Name }}-worker'
```
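For reference, the same values can also be applied with the Helm CLI instead of the Rancher UI. The sketch below assumes the values above are saved to a local `values.yaml` and that `<capone-chart-ref>` is replaced with wherever the `capone-rke2` chart is obtained from (a chart repository or a local checkout), neither of which is covered by this guide:

```shell
# Sketch only: install the capone-rke2 chart from the command line
$ helm install capone-rke2-demo <capone-chart-ref> \
    --namespace default \
    -f values.yaml
```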
After a few minutes, the application will appear as Deployed, indicating that the cluster has been successfully provisioned.
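Provisioning can also be followed from the management cluster's Kubectl Shell. A quick check such as the one below (a sketch that relies only on standard Cluster API resources) shows the new cluster and its machines moving to the Provisioned and Running phases:

```shell
# Run from the management cluster's Kubectl Shell
$ kubectl get clusters.cluster.x-k8s.io -A
$ kubectl get machines.cluster.x-k8s.io -A
```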
The deployment can also be verified from the OpenNebula CLI by observing that three new virtual machines have been instantiated: the virtual router, worker, and master nodes of the newly deployed RKE2 cluster. They are listed alongside the CAPI appliance itself.
```
$ onevm list
  ID USER     GROUP    NAME                                       STAT  CPU   MEM HOST      IP            TIME
   3 oneadmin oneadmin capone-rke2-0-1747308076-md-0-hbnj8-jv9rp  runn    1    3G localhost 192.168.150.9 0d 00h03
   2 oneadmin oneadmin capone-rke2-0-1747308076-sd7nc             runn    1    3G localhost 192.168.150.8 0d 00h06
   1 oneadmin oneadmin vr-capone-rke2-0-1747308076-cp-0           runn    1  512M localhost 192.168.150.7 0d 00h06
   0 oneadmin oneadmin Capi-2966                                  runn    2    8G localhost 172.20.0.16   0d 00h49
```
To manage the workload cluster from the Rancher UI, it must first be imported into Rancher. Navigate to Cluster Management and select the newly created cluster from the Clusters section. Once selected, different options for importing the cluster into Rancher will be displayed. In this case, since a self-signed certificate is being used, the second option, which involves using curl and kubectl, should be selected.
This command can either be executed directly within the Rancher UI or by connecting via SSH to the control plane node of the workload cluster. This time, the process will be completed entirely through the UI. To begin, access the ‘Kubectl Shell’ of the management cluster, which provides a terminal interface for executing commands within the cluster.
Before running the import command, the kubeconfig file for the workload cluster must be retrieved. This can be done with the following command:
```shell
$ kubectl get secrets <cluster-name>-kubeconfig -o jsonpath="{.data.value}" | base64 -d > one-kubeconfig
```
Once the kubeconfig has been obtained, the Rancher import command can be executed using:
```shell
$ curl --insecure -sfL https://<import-yaml> | kubectl apply --kubeconfig one-kubeconfig -f -
```
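To confirm that the import is progressing, the retrieved kubeconfig can be used for a quick check from the same Kubectl Shell (a sketch; `cattle-system` is the namespace where the Rancher agent is expected to appear):

```shell
# Nodes of the workload cluster should report Ready
$ kubectl --kubeconfig one-kubeconfig get nodes

# The Rancher import agent is expected to start in the cattle-system namespace
$ kubectl --kubeconfig one-kubeconfig get pods -n cattle-system
```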
Once the cluster has been imported, it becomes fully accessible from the Rancher UI, where it is displayed alongside the K3s cluster. From the interface, it is possible to perform tasks such as installing Helm charts, executing kubectl commands, and even upgrading the Kubernetes version of the cluster.
The following sections provide guidance on managing the workload cluster through the Rancher UI. Topics covered include installing Longhorn, creating a Persistent Volume Claim (PVC) with Longhorn, deploying an NGINX instance that utilizes the previously created volume, and adding a worker node to the cluster. These procedures demonstrate essential operations for day-to-day Kubernetes cluster management within Rancher.
Longhorn can be installed from the Apps section of the workload cluster, which provides access to a marketplace of available Helm charts. Once the installation is complete, a new Longhorn tab will appear in the cluster’s side menu. This tab allows direct access to the Longhorn UI, enabling full management of the storage system from within Rancher.
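The installation can also be double-checked from the command line of the workload cluster (a sketch, assuming the chart's default `longhorn-system` namespace):

```shell
# Longhorn components should be running in the longhorn-system namespace
$ kubectl get pods -n longhorn-system

# The longhorn storage class should now be available for PVCs
$ kubectl get storageclass
```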
- Create a PVC: The Persistent Volume Claim (PVC) is created using the Rancher UI. Navigate to the Storage section of the target cluster, then go to Persistent Volume Claims and click Create.
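  A minimal PVC manifest equivalent to what the UI form produces could look like the sketch below (the `longhorn` storage class comes from the previous step; the name `nginx` and the 1Gi size are example choices that match the deployment in the next step):

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nginx
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: longhorn   # storage class provided by Longhorn
    resources:
      requests:
        storage: 1Gi             # example size; adjust as needed
  ```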
- Create an NGINX Deployment: To demonstrate that resources can also be defined directly using YAML, this step uses the Import YAML option (accessible via the arrow icon in the upper-right corner of the Rancher UI). The following YAML definition is imported directly to create a simple NGINX deployment that mounts the previously created PVC.
  ```yaml
  ---
  kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: nginx
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: http
            image: nginx:alpine
            imagePullPolicy: IfNotPresent
            ports:
              - name: http
                containerPort: 80
            volumeMounts:
              - mountPath: "/persistent/"
                name: nginx
        volumes:
          - name: nginx
            persistentVolumeClaim:
              claimName: nginx
  ```
- Create a LoadBalancer Service: A LoadBalancer-type Service can also be defined in the same way to expose the recently deployed NGINX application externally. Once created, the external IP assigned to the service can be viewed directly from the Rancher UI.
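  A minimal Service manifest for this could be the following (a sketch; the selector matches the `app: nginx` label used in the deployment above):

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
  spec:
    type: LoadBalancer
    selector:
      app: nginx
    ports:
      - name: http
        port: 80
        targetPort: 80
  ```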
Worker nodes can be added to the cluster by updating the CAPI MachineDeployments in the Cluster Management section under the CAPI tab and setting the desired replica count. Once specified, the cluster will be updated accordingly. It is worth noting that, in addition to defining and updating the cluster, it can also be deleted from the CAPI Clusters tab.
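The same scaling operation can also be performed from the management cluster's Kubectl Shell (a sketch; the MachineDeployment name and namespace must be taken from the output of the first command):

```shell
# List the MachineDeployments managed by CAPI
$ kubectl get machinedeployments -A

# Scale the workers of the workload cluster to two replicas
$ kubectl scale machinedeployment <md-name> -n default --replicas=2
```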
Finally, the cluster is upgraded to the latest available version, v1.32.4, by navigating to the Cluster Management section. From the three-dot menu of the workload cluster, select Edit Config. In the Basics section, choose the desired version for the update.
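While the upgrade runs, the rolling replacement of nodes can be followed from the management cluster's Kubectl Shell (a sketch relying only on standard Cluster API resources):

```shell
# Machines are replaced one by one with the new Kubernetes version
$ kubectl get machines.cluster.x-k8s.io -A -w
```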