OpenShift Operator Hub Details - turbonomic/kubeturbo GitHub Wiki

Kubeturbo Deploy via OpenShift Operator Hub

Topics in this Article include:

  1. Prerequisites and Steps to Deploy Kubeturbo via the OpenShift Operator Hub
  2. How to optionally specify a Kubeturbo Image or Repo
  3. Values to be used in the Custom Resource
  4. Updating Kubeturbo

Review the prerequisites defined here.

An Operator is an extension of Kubernetes that manages an application's lifecycle using a custom resource.

If you are running OpenShift v4.x on Linux on x86, Power, or LinuxONE platforms, you can deploy Kubeturbo via the Operator Hub. To use this method, you will deploy the certified Kubeturbo Operator and configure a custom resource to deploy a single instance of Kubeturbo per cluster. This doc highlights the steps to take in the OCP console.

NOTE: When deploying with the Operator you do NOT need to manually create the Service Account, Role, or Cluster Role Binding (CRB), as the Operator takes care of all of that for you (which is the advantage of using the Operator to deploy Kubeturbo).

If you are running OpenShift in a disconnected network, and want to deploy any certified operator, first follow the steps for configuring OLM on restricted networks based on the version of OCP you are running.

The steps to deploy are:

  • Review and complete all prerequisites first (list here)
  • Obtain Turbonomic Server credentials, as detailed here
  • Create a project in the OCP cluster where Kubeturbo will be deployed
  • Deploy the Kubeturbo Operator from OperatorHub
  • Create an instance, using either the Form or YAML method
  • Validate the target in the Turbonomic Server UI

Create a Project

As a cluster administrator, create a project for Kubeturbo.

Select that project, and then deploy the operator as described in the next section.
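The project can be created in the console, or with an equivalent namespace manifest like this sketch (the project name `turbo` is an assumption; use your own):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: turbo   # hypothetical project name for the Kubeturbo deployment
```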

Deploy the Operator from OperatorHub

Search for and use only the Certified Kubeturbo operator (do not use the Community or Marketplace editions).

Select it, and you will perform a Basic Install (the only option). Confirm the version you want to use. The Kubeturbo probe image version can be changed after deployment.

Click Install, set the update approval to Manual, and install into a SINGLE namespace. Automatic updates are not recommended unless you are also automatically updating the Turbo Server.

Note: The Stable channel is the Generally Available, tested product. The Beta channel is a pre-release candidate and should not be used unless instructed.
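If you drive OLM with manifests instead of the console, the console choices above map to a Subscription roughly like this sketch (the package name `kubeturbo-certified` and catalog source `certified-operators` are assumptions; verify them against your cluster's OperatorHub catalogs):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubeturbo-certified        # assumed package name; verify in your catalog
  namespace: turbo                 # hypothetical project created for Kubeturbo
spec:
  channel: stable                  # GA channel; beta is pre-release
  installPlanApproval: Manual      # manual updates, as recommended above
  name: kubeturbo-certified        # assumed package name
  source: certified-operators      # assumed catalog source for certified operators
  sourceNamespace: openshift-marketplace
```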

The operator will install. When it is ready, you can view the operator and create an instance, which will be your Kubeturbo probe that monitors and manages this cluster.

Configure a Kubeturbo Instance

When viewing the deployed operator, create an instance:

To configure this instance you can use either the FORM or YAML input option.

The minimum parameters to be defined for Kubeturbo when using the FORM option are listed here, but you have a few options to decide on first. You have the following options for the Cluster Role and Cluster Role Binding:

  • Option 1: Using the default (cluster-admin) - The default Cluster Role that Kubeturbo will use is the built-in cluster-admin role.

  • Option 2: Using a custom role (execute actions) - You can choose to run with a custom Cluster Role that provides minimum privileges with the ability to execute actions, as detailed here.

  • Option 3: Using a custom role (read-only) - You can choose to run with a custom Cluster Role that provides read-only privileges, which allows discovery and metrics collection but cannot execute actions, as detailed here.
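Whichever role option you choose is expressed in the custom resource through the roleName value; a minimal sketch, with the custom role names taken from the values table later in this article:

```yaml
spec:
  # Option 1 (default): omit roleName to use the built-in cluster-admin role.
  # Option 2: custom role that can execute actions.
  roleName: turbo-cluster-admin
  # Option 3: read-only custom role (discovery and metrics only).
  #roleName: turbo-cluster-reader
```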

Choose where and how to store the Turbonomic Server username and password for Kubeturbo to use (one or the other, not both):

  • Option 1: Kubernetes secret (preferred) - Store the credentials in a secret and reference the secret name in the custom resource.

Note: The operator does not create this secret. Create the secret first, before deploying a Kubeturbo instance via the CR.

  • Option 2: Plain text in the custom resource - Supply the Ops Manager username and password directly in the CR.

Note: This is NOT preferred as it is NOT secure.

  • TSC Option: This is only an option if you are using the Turbonomic SaaS/Secure Client; in this case you do not need to create or use any Turbonomic server credentials, as a secure token is used instead.

Note: This is the most secure option and one of the advantages of using Kubeturbo with TSC: no Turbonomic server-side credentials are needed, because the Kubeturbo deployment uses a secure token instead of credentials.
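If you store the credentials in a secret, a sketch of the manifest follows; the key names `username` and `password` are assumptions, so check them against the secret documentation linked in this article:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: turbonomic-credentials   # must match restAPIConfig.turbonomicCredentialsSecretName
  namespace: turbo               # hypothetical: use the namespace where Kubeturbo will run
type: Opaque
stringData:
  username: administrator        # assumed key name; a Turbo Server user with admin role
  password: yourPassword         # assumed key name
```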

Update the spec: with your required options below:

spec:

  • restAPIConfig:
    • Ops Manager Password and Ops Manager User Name in plain text, if you chose Option 2 above
    • or Turbonomic Credentials Secret Name, if you chose Option 1 above (no need to specify this if using the default secret name turbonomic-credentials)
    • To configure the credentials with a k8s secret, go here
  • serverMeta:
    • turboServer:
    • version:
  • targetConfig:
    • targetName:
  • args:
    • sccsupport: ("*" is defined by default)

The minimum parameters required to be defined for Kubeturbo via YAML are summarized here:

spec:
  restAPIConfig:
    turbonomicCredentialsSecretName: "your_custom_secret_name"
    #opsManagerPassword: yourPWD
    #opsManagerUserName: yourName
  serverMeta:
    turboServer: https://your.turbonomic.io
    version: "8.9"
  targetConfig:
    targetName: YourClusterName
  args:
    sccsupport: "*"

Note: other parameters are presented, but the default values will work for most environments. If you need to adjust or specify other parameters, see the values table below.
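For reference, the spec shown above lives inside a complete custom resource object. A minimal sketch, assuming the operator's charts.helm.k8s.io/v1 Kubeturbo CRD (verify the apiVersion against the CRD installed in your cluster):

```yaml
apiVersion: charts.helm.k8s.io/v1   # verify against the installed Kubeturbo CRD
kind: Kubeturbo
metadata:
  name: kubeturbo-release           # default instance name used elsewhere in this article
  namespace: turbo                  # hypothetical project name
spec:
  serverMeta:
    turboServer: https://your.turbonomic.io
    version: "8.9"
  restAPIConfig:
    turbonomicCredentialsSecretName: turbonomic-credentials
  targetConfig:
    targetName: YourClusterName
  args:
    sccsupport: "*"
```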

Specifying a Kubeturbo Version

For Different VERSIONS or a PRIVATE REPO. Note: The Kubeturbo operator will by default deploy a Kubeturbo product image that matches the deployed operator version, pulled directly from the IBM Container Registry (icr.io) (or, prior to 8.7.5, from the Red Hat Container Catalog repo). The Kubeturbo product version should match your Turbo Server version, so if your Turbo Server is on a version different from the default one deployed with the operator, you will want the flexibility to specify the exact Kubeturbo product version you need.

Specific Kubeturbo version: If you need to specify a Kubeturbo image that is different from the Kubeturbo operator's so that it matches your Turbo Server, you can define this in the operator instance, or custom resource, you have configured (the default name is kubeturbo-release). Add these parameters into your Custom Resource yaml under spec as shown:

spec:
  image:
    # Supply your private repo and specific product version here
    # With v8.7.5 and newer, kubeturbo is available via IBM Container Registry
    repository: icr.io/cpopen/turbonomic/kubeturbo
    tag: 8.9.0
    # Specify your cpu frequency job container image.
    #cpufreqgetterRepository: icr.io/cpopen/turbonomic/cpufreqgetter
    # Note cpufreqgetter will use the same pull secret as set for the kubeturbo container image
    #imagePullSecret: secretName

  #Uncomment `args` and the parameter below to control which nodes the cpu frequency job runs on (string input)
  #args:
  #  busyboxExcludeNodeLabels: 'kubernetes.io/key=value'

  #Rest of CR will be retained
  ...

For more details on working with a Private Repo, refer to the article here.

When you have applied your configuration, you will see that you have created an instance, or custom resource, called kubeturbo-release, with Kubeturbo deployed at either the same version as the operator or a user-specified version:

Validate the Deployment

You will now see two deployments and two running pods in the namespace: one is the operator, and the other is the Kubeturbo probe (release).

Look at the log of the kubeturbo-release pod to validate that the probe has successfully connected and registered with the Turbo Server, and that a full discovery has occurred.

Values

The Kubeturbo Operator is a Helm operator. The following table shows the values that can be put into the Kubeturbo Custom Resource to configure the deployment. Refer to the Kubeturbo Custom Resource Definition for the schema; you can also see these parameters in the file values.yaml.

| Parameter | Default Value | Required / Opt to Change | Parameter Type |
|---|---|---|---|
| image.repository | icr.io/cpopen/turbonomic/kubeturbo | optional | path to repo. Must be used with image.tag |
| image.tag | depends on product version | optional | kubeturbo tag. Must be used with image.repository |
| image.pullPolicy | IfNotPresent | optional | |
| image.busyboxRepository | icr.io/cpopen/turbonomic/cpufreqgetter:1.0 | optional | full path to repo, image and tag |
| image.cpufreqgetterRepository | icr.io/cpopen/turbonomic/cpufreqgetter | optional | repository used to get node cpufrequency |
| image.imagePullSecret | | optional | secret used to authenticate to the container image registry |
| roleName | cluster-admin | optional | specify the custom turbo-cluster-reader or turbo-cluster-admin role instead of the default cluster-admin role |
| roleBinding | turbo-all-binding | optional | name of the clusterrolebinding |
| serviceAccountName | turbo-user | optional | name of the serviceaccount |
| serverMeta.version | | required | number x.y |
| serverMeta.turboServer | | required | https URL to log into the Server |
| serverMeta.proxy | | optional | format http://username:password@proxyserver:proxyport or http://proxyserver:proxyport |
| restAPIConfig.opsManagerUserName | | required, or use a k8s secret | local or AD user with admin role; value in plain text |
| restAPIConfig.opsManagerPassword | | required, or use a k8s secret | admin's password; value in plain text |
| restAPIConfig.turbonomicCredentialsSecretName | turbonomic-credentials | required only if using a secret and not the default secret name | secret that contains the Turbo server admin user name and password |
| targetConfig.targetName | "Your_k8s_cluster" | optional, but required for multiple clusters | string; how you want to identify your cluster |
| args.logginglevel | 2 | optional | number; a higher number increases logging |
| args.kubelethttps | true | optional; change to false if k8s 1.10 or older | boolean |
| args.kubeletport | 10250 | optional; change to 10255 if k8s 1.10 or older | number |
| args.stitchuuid | true | optional; change to false if IaaS is VMM, Hyper-V | boolean |
| args.pre16k8sVersion | false | optional | if the Kubernetes version is older than 1.6, add another arg for move/resize actions |
| args.cleanupSccImpersonationResources | true | optional | cleanup the resources for scc impersonation by default |
| args.sccsupport | | required for an OCP cluster | see here for more details |
| HANodeConfig.nodeRoles | master nodes, via any value for label key node-role.kubernetes.io/ | optional; used to automate policies to keep nodes of the same role limited to 1 instance per ESX host or AZ (starting with 6.4.3+) | values in values.yaml or cr.yaml use escapes, quotes and commas; master nodes are included by default, other roles populated via nodeRoles: "\"foo\",\"bar\"" |
| masterNodeDetectors.nodeNamePatterns | node name includes .*master.* | Deprecated in Kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by node name. If no match, this is ignored. | string, regex used, example: .*master.* |
| masterNodeDetectors.nodeLabels | any value for label key node-role.kubernetes.io/master | Deprecated in Kubeturbo v6.4.3+. Used in 6.3-6.4.2 to avoid suspending masters identified by a node label key-value pair. If no match, this is ignored. | regex used; specify the key as masterNodeDetectors.nodeLabelsKey (such as node-role.kubernetes.io/master) and the value as masterNodeDetectors.nodeLabelsValue (such as .*) |
| daemonPodDetectors.daemonPodNamespaces1 and daemonPodNamespaces2 | daemonSet kinds by default allow for node suspension; adding this parameter changes the default | optional, but required to identify pods in a namespace to be ignored for cluster consolidation | regex used; values in quotes and comma separated: "kube-system","kube-service-catalog","openshift-.*" |
| daemonPodDetectors.daemonPodNamePatterns | daemonSet kinds by default allow for node suspension; adding this parameter changes the default | optional, but required to identify pods matching this pattern to be ignored for cluster consolidation | regex used: .*ignorepod.* |

For more on HANodeConfig (or masterNodeDetectors) and daemonPodDetectors, go to the YAMLs deploy option wiki page.

Kubeturbo Logging

For details on how to collect and configure Kubeturbo Logging go here.

Updating Kubeturbo Image

When Turbonomic releases a new product version, or if you are instructed to update the Kubeturbo pod image, follow these instructions. Note the version of Kubeturbo that you will use must match the version of your Turbonomic Server.

Update Steps

Step 1: Go to the installed Operator, and update the Kubeturbo Operator based on your Subscription (automatic or approval required). The Kubeturbo Operator is given the same version as the Kubeturbo product version.

Step 2: If your Kubeturbo release was installed with default values, then there is nothing else you need to do. Verify that the kubeturbo-release pod has restarted, and confirm the new version in the Turbonomic UI (Settings -> Target Configuration, then click on the target that corresponds to this OCP cluster). Repeat for every Kubernetes / OpenShift cluster with a Kubeturbo probe.

Step 3: If you are using a non-default Cluster Role and your original version of the Kubeturbo operator was deployed before 8.9.5, you will need to follow the manual steps here.

Custom VERSIONS or a PRIVATE REPO

Note: If you have explicitly specified a Kubeturbo version in the kubeturbo-release custom resource (kubeturbo-release is the default name), then you will need to update the image parameters to upgrade your installation. See this custom resource yaml as an example:

spec:
  image:
    #supply your private repo and specific product version here
    repository: registry.connect.redhat.com/turbonomic/kubeturbo
    tag: 8.7.0
  #rest of CR will be retained
  ...
  1. Update the Kubeturbo image version value by editing the kubeturbo-release custom resource with the following values:

    • tag: Turbo_server_version (validate default values)
    • If you need to refresh an existing tag, use an image.pullPolicy of "Always". The default value is "IfNotPresent"
  2. Apply the change

  3. Repeat for every Kubernetes / OpenShift cluster with a Kubeturbo probe.


There's no place like home... go back to the Turbonomic Wiki Home or the Kubeturbo Deployment Options.