Node Role Scaling Policies and HA

Node roles carry implicit and explicit meaning: they influence not only how workloads are scheduled, but also whether the nodes holding those roles should be deleted or provisioned, and, for virtualized nodes, how those nodes should be spread across ESX hosts to maintain High Availability.

Turbonomic allows a user to automate or manually set policies, based on node role, that control node scaling and node placement on ESX hosts.

By default, any node with the master role is prevented from suspending (being deleted), and a VM-to-ESX-Host placement policy is automatically created to keep master nodes on separate hosts for HA.

Node Groups by Role Out of the Box

Turbonomic policies work by assigning a policy to a group, and Turbonomic conveniently creates node groups by role out of the box. To see these groups, go to Search -> Groups; the groups follow the naming convention NodeRole-{role}-Kubernetes-{targetName/clusterName}. See the screenshot below:
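The {role} portion of the group name comes from each node's node-role.kubernetes.io/<role> label. As a quick, illustrative check of which role labels exist in your cluster (the grep filter is only for readability):

    # List nodes and show their labels; the node role is taken from
    # labels of the form node-role.kubernetes.io/<role>.
    kubectl get nodes --show-labels | grep node-role.kubernetes.io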

Scenario 1: Node Horizontal Scaling Based on Node Role (Manual)

This method lets a user create node scaling policies in Turbonomic that control whether a node, based on its role, can have a delete/suspend or provision action generated. These action policies apply to both real-time and plan scenarios. Nodes are modeled as Virtual Machines in Turbonomic. To create a policy, go to Settings -> Policies -> select Automation type -> Virtual Machine.

In a Virtual Machine policy, do the following:

  1. Name your policy.
  2. Select the Scope based on the VM group that is created for you, which starts with NodeRole.
  3. Expand Automation and Orchestration and add an Action.
  4. Under Action Type, select Suspend.
  5. For Action Generation, select Do not Generate Actions.

Save the policy; it will take effect in 10-20 minutes. Your final policy should look like this:

Scenario 2: Node High Availability Policies with Scaling (Automated)

A new feature introduced in Turbonomic 6.3 is the ability to consolidate clusters for efficiency while respecting that certain node roles should never suspend. These nodes, such as control plane, master, or infra nodes, may also need to run in a High Availability configuration. You can designate how your master nodes are identified, since masters running for HA must be prevented from suspending. Turbonomic will also automate creation of a policy that keeps nodes of a given role evenly distributed across VMware ESX hosts or cloud Availability Zones to maintain HA resilience.

To see the automated policies created for you, go to Settings -> Policies and select Imported Placement Policies. You will see policies prefixed with [HA], which means that this NodeRole group for this Kubernetes cluster will have the following behavior:

  • all nodes that are a member of this group will not suspend
  • if these nodes are virtualized, their placement will enforce an HA configuration that evenly distributes them across ESX hosts

To automate policies that evenly distribute nodes by role across ESX hosts or Availability Zones and maintain HA resilience, use the HANodeConfig parameter in the configMap. Its nodeRoles parameter specifies which node roles an HA policy should apply to. Provide the value that follows the node label key node-role.kubernetes.io/ (for example master, app, or infra).

NOTE: The default setting is master only.

        },
        "HANodeConfig": {
            "nodeRoles": ["master", "infra"]
        }
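For context, this fragment lives inside the turbo.config data key of the kubeturbo ConfigMap. The sketch below shows one plausible layout; the ConfigMap name, namespace, and server address are placeholders and will differ in your deployment:

    # Illustrative ConfigMap sketch; metadata and serverMeta values are placeholders.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: turbo-config
      namespace: turbo
    data:
      turbo.config: |-
        {
            "communicationConfig": {
                "serverMeta": {
                    "turboServer": "https://<turbonomic-server>"
                }
            },
            "HANodeConfig": {
                "nodeRoles": ["master", "infra"]
            }
        }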

Helm Method

In the Helm install or upgrade command, set the following parameter: HANodeConfig.nodeRoles="master","infra"

Values support regex and are expressed as quoted, comma-separated strings: "master" (default), "worker", "app". Note that values in values.yaml require escapes; add more roles using the format "\"foo\"\,\"bar\"".
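Put together, an illustrative install/upgrade command might look like the following (release name, chart reference, and namespace are placeholders, and shell quoting may need adjusting for your environment):

    # Placeholder release, chart, and namespace; the value uses escaped
    # quotes and an escaped comma so Helm treats it as a single string.
    helm upgrade --install kubeturbo kubeturbo/kubeturbo \
      --namespace turbo \
      --set HANodeConfig.nodeRoles="\"master\"\,\"infra\""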

Operator Method

In the Custom Resource for the Kubeturbo instance, provide the following parameter:

spec:
  HANodeConfig:
    nodeRoles: "\"foo\"\,\"bar\""

Add more roles using the format "\"foo\"\,\"bar\"". Values in cr.yaml also use escapes and are quoted and comma separated.
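For context, a sketch of a complete Kubeturbo custom resource with this parameter set; the apiVersion, names, and server address below are illustrative and may differ by operator version:

    # Illustrative CR sketch; apiVersion, metadata, and serverMeta are placeholders.
    apiVersion: charts.helm.k8s.io/v1
    kind: Kubeturbo
    metadata:
      name: kubeturbo-release
      namespace: turbo
    spec:
      serverMeta:
        turboServer: "https://<turbonomic-server>"
      HANodeConfig:
        nodeRoles: "\"master\"\,\"infra\""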