Operator Details

Kubeturbo Deploy via an Operator

Review the prerequisites defined here.

An Operator is an extension of Kubernetes that manages an application's lifecycle using a custom resource.

To use this method, you will deploy the operator and configure a custom resource (via a CR yaml file) to deploy Kubeturbo.

Are you running OpenShift v4.x+? The best practice is to deploy via OperatorHub using the certified operator. Documentation here.

The operator resource definitions provided here deploy the operator and Kubeturbo via a Helm chart. The steps to deploy are:

  1. Create namespace
  2. Create the CRD, operator service account, cluster role, and cluster role binding
  3. Deploy Operator
  4. Update the Custom Resource manifest (cr.yaml)
  5. Deploy Kubeturbo

Operator Install

Starting with the kubeturbo-operator project here, either clone the repo or copy the yamls out of the turbonomic/kubeturbo/tree/master/deploy/kubeturbo-operator directory. With either method, create the following resources on your k8s/OCP cluster.

Note:

  • the examples use the turbo namespace and assume you will set your context to this namespace
  • verify the content of each yaml being used, as some require customization
  1. Create a namespace turbo and set context to that namespace/project

    kubectl create namespace turbo
    
  2. If you want to set the new turbo namespace as your default, use the command below:

    kubectl config set-context --current --namespace=turbo
    
  3. Setup the CRD

    kubectl create -f https://raw.githubusercontent.com/turbonomic/kubeturbo/master/deploy/kubeturbo-operator/config/crd/bases/charts.helm.k8s.io_kubeturboes.yaml
    
  4. Setup Service Account

    kubectl create -n turbo -f https://raw.githubusercontent.com/turbonomic/kubeturbo/master/deploy/kubeturbo-operator/deploy/service_account.yaml
    
  5. Create a Cluster Role for the operator. For role details review the yaml here

    kubectl create -f https://raw.githubusercontent.com/turbonomic/kubeturbo/master/deploy/kubeturbo-operator/deploy/kubeturbo-operator-cluster-role.yaml
    
  6. Create the operator's Cluster Role Binding (default role reference is kubeturbo-operator). This yaml assumes you are using namespace turbo for the deployment

    kubectl create -f https://raw.githubusercontent.com/turbonomic/kubeturbo/master/deploy/kubeturbo-operator/deploy/role_binding.yaml
    
  7. Deploy the kubeturbo-operator deployment. Update the image tag and, if needed, the repo path you want to use

    kubectl create -n turbo -f https://raw.githubusercontent.com/turbonomic/kubeturbo/master/deploy/kubeturbo-operator/deploy/operator.yaml
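
Before moving on, you can verify that the operator is up. A quick check, assuming the turbo namespace used above; the deployment name kubeturbo-operator comes from operator.yaml and may differ if you customized it:

    kubectl get deployment,pods -n turbo
    # optionally inspect the operator log (deployment name is an assumption)
    kubectl logs deployment/kubeturbo-operator -n turbo --tail=20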
    

Create Kubeturbo Instance

Edit the custom resource file found in the path deploy/crds/charts_v1alpha1_kubeturbo_cr.yaml with the following values:

  1. version: Turbo_server_version (validate default values)

  2. turboServer: https://Turbo_server_URL

  3. set your Turbo Server credentials based on one option chosen below.

  4. optional: set your desired Kubeturbo RBAC for roleName. See below

  5. targetName: Name_Each_Cluster

  6. supply the desired kubeturbo image if you do not want the default (depending on which branch the operator is from). For more info see below:

    spec:
      image:
        #supply your private repo and specific product version here
        repository: icr.io/cpopen/turbonomic/kubeturbo
        tag: 8.11.2
    
  7. Update any other values you need (see the values table below). Once your Custom Resource parameters are configured for your environment, create a Kubeturbo instance by applying the "kubeturbo" custom resource. The operator will create the Kubeturbo deployment and related resources.

    kubectl apply -f {path}/charts_v1alpha1_kubeturbo_cr.yaml
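
For reference, a minimal sketch of a Kubeturbo custom resource is shown below. The apiVersion and kind are inferred from the CRD and CR file names above, the field names come from the values table later on this page, and the server URL, version, secret name, and target name are placeholders you must replace for your environment:

    apiVersion: charts.helm.k8s.io/v1alpha1   # inferred from the CRD/CR file names above
    kind: Kubeturbo
    metadata:
      name: kubeturbo-release
      namespace: turbo
    spec:
      serverMeta:
        version: "8.11"                       # Turbo Server version, x.y
        turboServer: https://Turbo_server_URL
      restAPIConfig:
        # assumed secret name; see the Turbo Server Credentials section below
        turbonomicCredentialsSecretName: turbonomic-credentials
      targetConfig:
        targetName: Name_Each_Cluster
      args:
        logginglevel: 2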
    

Image Tag

Determine which tag you will use by reviewing the CWOM -> Turbonomic Server -> kubeturbo version mapping here and the Releases page. You may be instructed by Turbonomic Support to use a new image, or you may want to refresh the image to pick up a patch or new feature.
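
If you want to confirm which image and tag a cluster is currently running, a quick check (assuming the turbo namespace used on this page):

    # the -o wide output includes the container images in use
    kubectl get deployments -n turbo -o wide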

Turbo Server Credentials

Choose one method to provide the Turbonomic Server username and password for Kubeturbo to use (one or the other, not both):

  • Secret option: create a k8s secret that contains the Turbonomic Server credentials and reference its name in the CR via restAPIConfig.turbonomicCredentialsSecretName. Note: The operator does not create this secret. Create the secret first before deploying a Kubeturbo instance via the CR.

  • Plain-text option: supply the credentials directly in the CR via restAPIConfig.opsManagerUserName and restAPIConfig.opsManagerPassword. Note: This is NOT preferred as it is NOT secure.

  • TSC option: This is only an option if you are using the Turbonomic SaaS/Secure Client option here. You do not need to create or use any Turbonomic server credentials because a secure token is used instead. This is the most secure option and one of the advantages of using Kubeturbo with TSC.
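
If you use the secret option, a minimal sketch of creating the secret is shown below. The secret name turbonomic-credentials and the username/password key names are assumptions here, so confirm them against the credentials documentation linked above, and reference the secret name in the CR via restAPIConfig.turbonomicCredentialsSecretName:

    # create the secret in the same namespace as kubeturbo before applying the CR
    kubectl create secret generic turbonomic-credentials -n turbo \
      --from-literal=username=Turbo_username \
      --from-literal=password=Turbo_password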

Kubeturbo Custom Role

The Kubeturbo mediation probe requires a cluster-level role to discover the cluster, collect metrics, and, if allowed, execute actions. You control which role is used via the roleName value in the CR (see the example after the options below).

  • Option 1: Using the default (cluster-admin) - The default Cluster Role that kubeturbo uses is the built-in cluster-admin role; if this is acceptable, no change to roleName is needed.

  • Option 2: Using a custom role (execute actions) - You can choose to run with a custom Cluster Role that provides minimum privileges plus the ability to execute actions, detailed here.

  • Option 3: Using a custom role (read-only) - You can choose to run with a custom Cluster Role that provides read-only privileges, which allows discovery and metric collection but does not allow action execution, detailed here.
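
For example, to select one of the custom roles, set roleName in the custom resource spec (a sketch; the turbo-cluster-reader and turbo-cluster-admin names also appear in the upgrade notes later on this page):

    spec:
      # one of: cluster-admin (default), turbo-cluster-admin, turbo-cluster-reader
      roleName: turbo-cluster-reader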

Values

Kubeturbo Operator is a Helm-based operator. The following table shows the values that can be set in the Kubeturbo Custom Resource to configure the deployment. Refer to the kubeturbo Custom Resource Definition for the schema; you can also see these parameters in the file values.yaml.

| Parameter | Default Value | Required / Opt to Change | Parameter Type |
|---|---|---|---|
| image.repository | icr.io/cpopen/turbonomic/kubeturbo (IBM Cloud Container Registry) | optional | path to repo. Must be used with image.tag |
| image.tag | depends on product version | optional | kubeturbo tag. Must be used with image.repository |
| image.pullPolicy | IfNotPresent | optional | |
| image.busyboxRepository | busybox | optional | Busybox repository. This is overridden by cpufreqgetterRepository |
| image.cpufreqgetterRepository | icr.io/cpopen/turbonomic/cpufreqgetter | optional | Repository used to get node cpufrequency |
| image.imagePullSecret | | optional | Define the secret used to authenticate to the container image registry |
| roleBinding | turbo-all-binding-{My_Kubeturbo_name}-{My_Namespace} | optional | Specify the name of the clusterrolebinding |
| serviceAccountName | turbo-user | optional | Specify the name of the serviceaccount |
| replicaCount | | optional | Kubeturbo replicaCount |
| serverMeta.version | | required | number x.y |
| serverMeta.turboServer | | required | https URL to log into Server |
| serverMeta.proxy | | optional | format of http://username:password@proxyserver:proxyport or http://proxyserver:proxyport |
| restAPIConfig.opsManagerUserName | | required or use k8s secret | local or AD user with admin role. Value in plain text |
| restAPIConfig.opsManagerPassword | | required or use k8s secret | admin's password. Value in plain text |
| restAPIConfig.turbonomicCredentialsSecretName | | required or use opsManagerUserName/password | name of the k8s secret that contains the turbo credentials |
| targetConfig.targetName | "Your_k8s_cluster" | optional but required for multiple clusters | string, how you want to identify your cluster |
| args.logginglevel | 2 | optional | number. A higher number increases logging |
| args.kubelethttps | true | optional, change to false if k8s 1.10 or older | boolean |
| args.kubeletport | 10250 | optional, change to 10255 if k8s 1.10 or older | number |
| args.sccsupport | | optional | required for OCP cluster, see here for more details |
| args.failVolumePodMoves | | optional | Allow kubeturbo to reschedule pods with volumes attached |
| args.busyboxExcludeNodeLabels | | optional | Do not run busybox on these nodes to discover the cpu frequency with k8s 1.18 and later; default is either kubernetes.io/os=windows or beta.kubernetes.io/os=windows present as a node label |
| args.stitchuuid | true | optional, change to false if IaaS is VMM, Hyper-V | boolean |
| args.cleanupSccImpersonationResources | true | optional | cleanup the resources for scc impersonation by default |
| args.gitEmail | | optional | The email to be used to push changes to git with the ArgoCD integration |
| args.gitUsername | | optional | The username to be used to push changes to git with the ArgoCD integration |
| args.gitSecretName | | optional | The name of the secret which holds the git credentials to be used with the ArgoCD integration |
| args.gitSecretNamespace | | optional | The namespace of the secret which holds the git credentials to be used with the ArgoCD integration |
| args.gitCommitMode | direct | optional | The commit mode that should be used for git action executions with the ArgoCD integration. One of request or direct |
| HANodeConfig.nodeRoles | any value for the label key node-role.kubernetes.io/, "master" by default | Optional. Used to automate policies to keep nodes of the same role limited to 1 instance per ESX host or AZ (starting with 6.4.3+) | values in values.yaml or cr.yaml use escapes, quotes & comma separation. "\"master\"" for master nodes is the default. Other roles are populated via nodeRoles: "\"foo\",\"bar\"" |
| masterNodeDetectors.nodeNamePatterns | node name includes .*master.* | Deprecated in kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by node name. If no match, this is ignored. | string, regex used, example: .*master.* |
| masterNodeDetectors.nodeLabels | any value for the label key node-role.kubernetes.io/master | Deprecated in kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by a node label key-value pair. If no match, this is ignored. | regex used; specify the key as masterNodeDetectors.nodeLabelsKey such as node-role.kubernetes.io/master and the value as masterNodeDetectors.nodeLabelsValue such as .* |
| daemonPodDetectors.daemonPodNamespaces1 and daemonPodNamespaces2 | daemonSet kinds are by default allowed for node suspension; adding this parameter changes the default | Optional, but required to identify pods in the namespace to be ignored for cluster consolidation | regex used, values in quotes & comma separated: "kube-system","kube-service-catalog","openshift-.*" |
| daemonPodDetectors.daemonPodNamePatterns | daemonSet kinds are by default allowed for node suspension; adding this parameter changes the default | Optional, but required to identify pods matching this pattern to be ignored for cluster consolidation | regex used: .*ignorepod.* |
| logging.level | 2 | Optional | Changing the logging level here doesn't require a restart of the pod but takes about 1 minute to take effect |

For more on HANodeConfig (or masterNodeDetectors) and daemonPodDetectors, go to the YAMLs deploy option wiki page.
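
As a reference for how the dotted parameter names in the table nest under spec in the custom resource, a short sketch with illustrative values taken from the table defaults:

    spec:
      args:
        logginglevel: 2
        kubeletport: 10250
      HANodeConfig:
        nodeRoles: "\"master\""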

Working with a Private Repo

If you would like to pull required container images into your own repo, refer to this article here.

Kubeturbo Logging

Kubeturbo sends its log stream to the container's STDERR file descriptor, which is generally available via kubectl logs or through the logging pipeline if one is configured in the k8s cluster. By default, Kubeturbo additionally writes these logs to a file in the container's local storage at /var/log, provided that location is mounted as a writable file system within the container. If /var/log is not writable within the container, these log files are written to /tmp instead. You can also adjust Kubeturbo log verbosity, both for diagnostics and to gather more environmental details.
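
To tail the live log stream, a sketch assuming the turbo namespace and a Kubeturbo deployment created from a CR named kubeturbo-release (adjust the name to match your deployment):

    kubectl logs -n turbo deployment/kubeturbo-release --tail=50 -f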

For details on how to collect and configure Kubeturbo Logging go here.

Updating Turbo Server

When you update the Turbonomic or CWOM Server, you will need to update the Kubeturbo configuration (via the custom resource) to reflect the new version. NOTE: Starting with kubeturbo 6.3+, you do not need to make this modification when updating to a minor version, such as 6.3.0 -> 6.3.1; that is now handled automatically. You only need to make this change for a major version change, i.e. going from 6.3.x -> 6.4.x, or 6.3.x -> 7.x -> 8.x.

  1. After the update, obtain the new Turbo Server version. To get this from the UI, go to Settings -> Updates -> About and use the numeric version such as “8.8” or “8.9” (Build details not required)

  2. You will update the Turbo Server version value. Edit the custom resource file deploy/crds/charts_v1alpha1_kubeturbo_cr.yaml with the following values:

    • version: Turbo_server_version (validate default values)
  3. Apply the change to the operator

    kubectl apply -f deploy/crds/charts_v1alpha1_kubeturbo_cr.yaml
    

    Note: you can also edit the running custom resource directly with kubectl edit kubeturbo kubeturbo-release.

  4. Repeat for every kubernetes / OpenShift cluster with a kubeturbo pod
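
As an alternative to editing and re-applying the CR file, you can patch the running custom resource in place; a sketch assuming the CR is named kubeturbo-release in the turbo namespace and a new server version of 8.9:

    kubectl patch kubeturbo kubeturbo-release -n turbo --type merge \
      -p '{"spec":{"serverMeta":{"version":"8.9"}}}'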

Updating Kubeturbo Image

You may be instructed by Turbonomic Support to update the kubeturbo pod image, or you may want to refresh the image to pick up a patch or new feature. Determine which tag you will use by reviewing the CWOM -> Turbonomic Server -> kubeturbo version mapping here and the Releases page.

For Different VERSIONS or a PRIVATE REPO

NOTE: If you need a different kubeturbo version to match your Turbo Server, or want to pull from a private repo, add these parameters to the custom resource instance you have configured (kubeturbo-release is the default name):

spec:
  image:
    #supply your private repo and specific product version here
    repository: registry.connect.redhat.com/turbonomic/kubeturbo
    tag: 8.9.5
  #rest of CR will be retained
  ...
  1. You will update the kubeturbo image version value. Edit the custom resource file deploy/crds/charts_v1alpha1_kubeturbo_cr.yaml with the following values:

    • tag: Turbo_server_version (validate default values)
    • If you need to refresh an existing tag, use image.pullPolicy of “Always”. Default value is “IfNotPresent”
  2. Apply the change to the operator

    kubectl apply -f deploy/crds/charts_v1alpha1_kubeturbo_cr.yaml
    

    Note: you can also edit the running custom resource directly with kubectl edit kubeturbo kubeturbo-release.

  3. Repeat for every kubernetes / OpenShift cluster with a kubeturbo pod

    Note: Check for changes in configuration parameters to determine whether a fresh deployment is better. Example: when updating from any kubeturbo version prior to 6.4.3 to 6.4.3+, the change from masterNodeDetectors to HANodeConfig may be easier with a new kubeturbo deployment.
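
As with the server version, the image settings can also be patched on the running custom resource; a sketch assuming the CR name kubeturbo-release, the turbo namespace, and an illustrative tag:

    kubectl patch kubeturbo kubeturbo-release -n turbo --type merge \
      -p '{"spec":{"image":{"tag":"8.9.5","pullPolicy":"Always"}}}'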

Updating when using non-default Cluster Role

NOTE: If you are using a non-default Cluster Role such as turbo-cluster-reader or turbo-cluster-admin, then as of kubeturbo 8.9.5, when you upgrade you will need to complete the following manual steps to ensure your deployment uses the updated Cluster Role names and to avoid errors in the kubeturbo-operator log post upgrade:

  • Delete all Cluster Roles that start with the names: turbo-cluster-reader and turbo-cluster-admin (there could be a total of 4 of them)
  • Delete Cluster Role Binding that starts with the name: turbo-all-binding-kubeturbo
  • After a few minutes the kubeturbo-operator will automatically re-create the required non-default Cluster Role and Cluster Role Binding using the new names.
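
A sketch of those cleanup steps with kubectl; list first so that you delete only the intended resources, then delete each match by name:

    # find the Cluster Roles and Cluster Role Binding created with the old names
    kubectl get clusterroles | grep -E 'turbo-cluster-(reader|admin)'
    kubectl get clusterrolebindings | grep turbo-all-binding-kubeturbo
    # delete each match by name, for example:
    kubectl delete clusterrole <matching_cluster_role_name>
    kubectl delete clusterrolebinding <matching_cluster_role_binding_name>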

There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.