Prerequisites - turbonomic/kubeturbo GitHub Wiki
NOTE: All Kubeturbo deployment related documentation is now in the official IBM Docs here. This GitHub wiki is no longer being updated, please refer to the official IBM Docs going forward.
The Turbonomic platform gathers information from your Kubernetes / OpenShift environment through the Kubeturbo container image that is deployed into the Kubernetes(k8s)/OpenShift(OCP) cluster you want to manage. This guide assumes that the Turbonomic Server is already up and running with a valid license applied.
As of Turbonomic version 8.7.5 and higher, Kubeturbo container images are available from the public IBM Container Registry (ICR). Kubeturbo runs as a single-pod deployment with the following resources:
- Namespace or Project (default is turbo)
- Service Account
- Role binding defined
- ConfigMap with updated information to connect to the Turbonomic Server
- Deployment of kubeturbo (YAML, Operator, OperatorHub (OpenShift), Helm)
- Kubeturbo can be deployed into many versions of Kubernetes. Supported Kubernetes and OpenShift versions include:
- OpenShift release 3.11 and Red Hat supported GA versions of OpenShift 4.x
- Kubernetes version 1.21 up to the latest supported GA version
- any upstream-compliant k8s distribution, including managed Kubernetes environments (for example Rancher, AKS, EKS, GKE, IKS, ROKS, ROSA, etc.)
- Supported Architectures: x86, Power (ppc64le), LinuxONE (os390)
- Kubeturbo requires a Linux node to run on.
- By default, Kubeturbo will deploy on any schedulable non-control-plane node, such as worker, app, agent, or compute node roles.
- Starting with Turbo v8.8.6, container images will not be compatible with dockershim container runtimes
- One Kubeturbo instance, like any other Turbonomic probe, can communicate with only one Turbonomic Server, and one Kubeturbo pod is deployed per cluster (control plane). Where an environment has more than one Turbo Server, configure one Kubeturbo instance per server, as defined in the ConfigMap. Multiple Kubeturbo instances can share a namespace, or you can create separate namespaces.
- Turbonomic Server is in place
- Any Turbonomic Server, whether SaaS, OVA based, deployed on any Kubernetes/OCP cluster, or Cisco CWOM.
- The Turbo Server version should be equal to, or at most one minor version higher than (N+1), the Kubeturbo probe version
- Running Cisco Intersight Workload Optimizer (aka IWO)? Refer only to the IWO Target Configuration Guide - Cloud Native Targets
- Turbonomic Server criteria: the server must be running with a Trial or Premium license. Gather the following information:
- Turbonomic Server URL used to access the Turbo UI:
https://<TurboIPaddressOrFQDN>
- Turbonomic username and password
- administrator or site administrator role
- AD user supported when Turbo Server is integrated with AD
- Running on SaaS or using multifactor authentication (MFA)? Turbo user needs to be local
- NOTE these credentials can be provided in a Kubernetes secret. For details refer to your deployment method.
- Turbonomic Server Version. To get this from the UI, go to Settings -> Updates -> About and use the numeric version such as “8.3” (No minor version needed)
- Kubeturbo image tag should match the Turbo Server version. For more details refer to Turbonomic - CWOM - Kubeturbo version mappings
- Access and Permissions to create all the resources required:
- The user deploying Kubeturbo needs cluster-admin role access in order to create the required resources: the namespace, and the cluster role binding for the service account.
- Kubeturbo pod will run with a service account with a cluster-admin role. Least privileged custom role options are shown here.
Refer to the Figure above “Turbonomic and Kubernetes Network Detail”.
- Instructions assume the node you are deploying to has internet access to pull the Kubeturbo image from the ICR repository, or your environment is configured with a private repo. Refer to more details on working with a private repo here.
- Kubeturbo probe container image:
- ICR or IBM Container Registry using
icr.io/cpopen/turbonomic/kubeturbo:<version>
for version 8.7.5 and higher. Images for version 8.7.4 and older are available from either Docker Hub or the Red Hat Container Catalog.
- Kubeturbo operator container image (if applicable):
- ICR or IBM Container Registry using
icr.io/cpopen/kubeturbo-operator:<version>
for version 8.7.5 and higher. Images for version 8.7.4 and older are available from either Docker Hub or the Red Hat Container Catalog.
- CPU Frequency container image (also known as busybox):
- ICR or IBM Container Registry using
icr.io/cpopen/turbonomic/cpufreqgetter
- For more information on parameters associated with this job, go to the article here
For details on how to configure your deployment for a private repo, read this article.
- The Kubeturbo pod requires access to the kubelet on every node and to the API server
- Kubelet network: HTTPS on port 10250 (default).
- The Kubeturbo pod requires HTTPS/TCP and WSS (secure WebSocket) access to the Turbonomic Server
- Proxies between Kubeturbo and Turbonomic Server need to allow websocket communication.
Kubeturbo deploys by default without limits or requests set, and we recommend running it as is.
If you must set limits/requests, the amount of resources Kubeturbo requires is related to the number of workloads and the number of pods being managed.
- Workload is defined as a unique workload controller, such as deployment foo, statefulset bar, etc.
- After you deploy Kubeturbo, you can identify the number of workloads by going to the Supply Chain for a single k8s cluster and checking the count shown on the Workload Controller entity
Use the following table as a guide for setting Memory Limits:
Number of Pods | Number of Workload Controllers | Recommended Memory Limit
---|---|---
5K | 2.5K | 4 Gi
5K | 5K | 4 Gi
10K | 5K | 6 Gi
10K | 10K | 6.5 Gi
20K | 10K | 9.2 Gi
20K | 20K | 12 Gi
30K | 15K | 13 Gi
30K | 30K | 16 Gi
Note
- To avoid throttling, do not set a CPU limit
- Memory requests, if needed, can be set to 1 GB
- CPU requests, if needed, can be set to 1 core
- To configure Kubeturbo container spec limits and requests, refer to your preferred deployment method for details
Pick your preferred deployment option. Click here to view all 4 options, or click on one of the methods below to deploy:
There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.