
OpenNebula Cloud Provider

By default, our OneKE distribution uses the RKE2 cloud controller manager (the same one used in k3s) for tasks like setting the nodes' internal/external IPs.

If we want to enable OpenNebula Kubernetes node metadata management and provide custom LoadBalancer support in our OneKE cluster, we can use the OpenNebula cloud provider instead.

Deploying the OpenNebula Cloud Provider in the OneKE cluster

[!NOTE] The following feature is only available for OneKE > v1.31

Disabling the RKE2 cloud controller in OneKE

In order to use the OpenNebula Cloud Provider, we should first deploy the OneKE cluster with the default RKE2 cloud controller manager disabled, to avoid potential conflicts. We can do that by setting the ONEAPP_RKE2_CLOUD_CONTROLLER_ENABLED parameter to NO in the OneKE service template custom attributes:

   {
       [...]
       "custom_attrs_values": {
           "ONEAPP_RKE2_CLOUD_CONTROLLER_ENABLED": "NO",
           [...]
       }
       [...]
   }
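
For instance, a minimal sketch of instantiating the service from the CLI, assuming a hypothetical Service Template ID of 100 (look up the real one with oneflow-template list) and that extra.json holds the custom attribute overrides shown above:

oneadmin@one-frontend:~$ oneflow-template instantiate 100 extra.json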

Once you have instantiated the service and deployed the OneKE cluster, you can verify with kubectl that no cloud-controller-manager pod is running in the kube-system namespace.
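
For example, filtering by pod name (an empty result is expected here, since the RKE2 cloud controller is disabled):

oneadmin@one-frontend:~$ kubectl get pods -n kube-system | grep cloud-controller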

LoadBalancer controller prerequisites

In order to manage LoadBalancer services with the OpenNebula cloud provider, we must ensure that the public VNet has an Address Range (AR) of type ETHER at AR index 1, with at least one available address:

oneadmin@one-frontend:~$ onevnet addar "public" <<EOF
AR = [
    TYPE="ETHER",
    SIZE=2
]
EOF
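
We can then confirm that the new Address Range is in place with onevnet show, checking that the ADDRESS RANGE POOL section lists an ETHER range at AR index 1 (the exact output layout depends on the OpenNebula version):

oneadmin@one-frontend:~$ onevnet show "public"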

More details are available in the ONE Cloud Provider documentation.

We also need the CAPONE VNF Router image and template defined; the ONE Cloud Provider will use them to instantiate the Virtual Router instance backing our LoadBalancer service (using an IP from the previously created Address Range as the client-facing endpoint).

We can download and export the CAPONE Virtual Router appliance template from the OpenNebula Community Marketplace:

oneadmin@one-frontend:~# onemarketapp export 'CAPONE VNF' 'CAPONE VNF' -d 1
IMAGE
    ID: 0
VMTEMPLATE
    ID: 0

oneadmin@one-frontend:~# onetemplate list
  ID USER     GROUP    NAME                 REGTIME
   0 oneadmin oneadmin CAPONE VNF           01/03 09:11:22

oneadmin@one-frontend:~# oneimage list
  ID USER     GROUP    NAME                 DATASTORE     SIZE TYPE PER STAT RVMS
   0 oneadmin oneadmin CAPONE VNF           default         2G OS    No rdy     0

Generating and applying the OpenNebula Cloud Controller manifests

Now we are ready to deploy the OpenNebula cloud provider into our cluster. The install instructions are available in the cloud-provider-opennebula wiki; in our case, we are interested in the OneKE kustomize templates.

In order to parameterize the kustomize templates, we need to create a ./.env file with the necessary content, using the one referenced in the cloud provider wiki as a starting point. Note that the ROUTER_TEMPLATE_NAME parameter should reference the Virtual Router template created in the previous step (in our example, CAPONE VNF), so remember to update the sample .env content from the wiki accordingly, e.g.:

CLUSTER_NAME=test
CCM_IMG=ghcr.io/opennebula/cloud-provider-opennebula:latest
ONE_XMLRPC=http://192.168.150.1:2633/RPC2
ONE_AUTH=oneadmin:**REDACTED**
ROUTER_TEMPLATE_NAME="CAPONE VNF"
PUBLIC_NETWORK_NAME=public
PRIVATE_NETWORK_NAME=private

To generate and apply the kustomize templates for OneKE, you should follow the Using Kustomize section instructions. Basically, we should download kustomize and envsubst and call them from the cloud-provider-opennebula repo root directory (ensure that you have cloned the repo through git and placed the .env file from the previous step in its root directory).
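
As a quick sketch, assuming a Debian-like frontend: kustomize can be installed with its official install script, and envsubst ships with the gettext-base package:

curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo apt-get install -y gettext-base

With both tools available, we can generate and apply the manifests from the repo root: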

kustomize build kustomize/oneke/ | (export `cat .env` && envsubst) | kubectl apply -f-

Once you have applied the manifests, you can check that the OpenNebula Cloud Controller is running correctly by inspecting the cloud-controller-manager DaemonSet pods and their logs:

oneadmin@one-frontend:~$ kubectl get pods -n kube-system -l k8s-app=cloud-controller-manager
NAME                             READY   STATUS    RESTARTS   AGE
cloud-controller-manager-z2pcp   1/1     Running   0          108s

oneadmin@one-frontend:~$ kubectl -n kube-system logs cloud-controller-manager-z2pcp
I1209 12:09:41.446482       1 serving.go:386] Generated self-signed cert in-memory
I1209 12:09:41.766274       1 serving.go:386] Generated self-signed cert in-memory
W1209 12:09:41.766397       1 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1209 12:09:42.161616       1 controllermanager.go:160] Version: v0.0.0-master+$Format:%H$
I1209 12:09:42.165479       1 secure_serving.go:213] Serving securely on [::]:10258
I1209 12:09:42.165714       1 leaderelection.go:254] attempting to acquire leader lease kube-system/cloud-controller-manager...
[...]
I1209 12:09:42.174295       1 leaderelection.go:268] successfully acquired lease kube-system/cloud-controller-manager
I1209 12:09:42.175889       1 event.go:389] "Event occurred" object="kube-system/cloud-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="oneke-ip-192-168-200-101_a19fe630-3485-41f5-b4d1-69a166a67504 became leader"
W1209 12:09:42.179628       1 controllermanager.go:306] "node-route-controller" is disabled
I1209 12:09:42.180845       1 controllermanager.go:329] Started "cloud-node-controller"
I1209 12:09:42.181473       1 node_controller.go:176] Sending events to api server.
I1209 12:09:42.181564       1 node_controller.go:185] Waiting for informer caches to sync
I1209 12:09:42.182132       1 controllermanager.go:329] Started "cloud-node-lifecycle-controller"
I1209 12:09:42.182612       1 node_lifecycle_controller.go:112] Sending events to api server
I1209 12:09:42.183522       1 controllermanager.go:329] Started "service-lb-controller"
[...]

Testing Load Balancer services

Once we have installed the OpenNebula Cloud Provider and created all the necessary resources, we can deploy a LoadBalancer service in OneKE and let the ONE Cloud Provider create the associated Virtual Router instance and balance the traffic to our service.

We can test this with a simple LoadBalancer Service that uses an nginx pod as backend:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
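
Assuming the manifest above is saved as nginx-lb.yaml (a file name chosen just for this example), we can deploy both resources at once:

oneadmin@one-frontend:~$ kubectl apply -f nginx-lb.yaml
deployment.apps/nginx-deployment created
service/nginx-service created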

Once we have deployed both resources, we can check the cloud-controller-manager logs to verify the load balancer reconciliation:

I1209 12:22:10.444361       1 event.go:389] "Event occurred" object="default/nginx-service" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1209 12:22:10.472440       1 loadbalancer.go:406] EnsureLoadBalancer(): asd
I1209 12:22:10.550974       1 event.go:389] "Event occurred" object="default/nginx-service" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
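
The same reconciliation events can also be inspected directly on the Service object, as a quick alternative to reading the controller logs:

oneadmin@one-frontend:~$ kubectl describe service nginx-service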

At this point, we should be able to see the associated Virtual Router instance (in our case vr-test-lb-0) from the frontend:

oneadmin@one-frontend:~$ onevm list
  ID USER     GROUP    NAME                              STAT  CPU     MEM HOST                         TIME
   4 oneadmin oneadmin vr-test-lb-0                      runn    1    512M 86.109.5.87              0d 00h00
   3 oneadmin oneadmin worker_1_(service_1)              runn    2      3G 86.109.5.87              0d 00h58
   2 oneadmin oneadmin worker_0_(service_1)              runn    2      3G 86.109.5.87              0d 00h58
   1 oneadmin oneadmin master_0_(service_1)              runn    2      3G 86.109.5.87              0d 00h58
   0 oneadmin oneadmin vnf_0_(service_1)                 runn    1    512M 86.109.5.87              0d 00h59

And finally, we can test the load balancer by making a request to the router's public IP:

oneadmin@one-frontend:~# kubectl get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
kubernetes      ClusterIP      10.43.0.1       <none>            443/TCP          41m
nginx-service   LoadBalancer   10.43.125.110   192.168.150.101   8080:30312/TCP   7s

oneadmin@one-frontend:~# curl http://192.168.150.101:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

...

<p><em>Thank you for using nginx.</em></p>
</body>
</html>