Actions and Handling Special Cases - turbonomic/kubeturbo GitHub Wiki
This article covers action use cases that are treated as special cases and behave differently from the defaults. Action types include rightsizing, moves, and scaling.
Rightsizing containerized workloads special cases
Turbonomic provides out-of-the-box treatment for resizing workloads in the following scenarios:
- Side-cars and resizing automation
- Operator-controlled workloads and resizing automation
- System namespace workloads and generating actions
This article also covers Node Provision and Suspend special cases (see NodePool and MachineSet min/max settings below).
Side Cars Recommend Only
Side-cars are typically injected into a pod, and their limits/requests are controlled by a workload controller other than the parent of the pod the side-car was injected into. An action to update the parent controller would therefore fail, because the side-car spec is not defined there. To handle this, Turbonomic automatically discovers side-car container specs, creates a group, and sets a default policy that makes their resize actions recommend only. This allows resize actions for the other, non-side-car container specs to execute.
- Group name: "Injected Sidecars/All ContainerSpecs" (one per cluster)
- Default policy name: "Injected Sidecars/All ContainerSpecs Resize Recommend Only [k8s cluster name]"
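As a hypothetical illustration (the Deployment and container names below are invented): a side-car such as an Istio proxy is injected at pod admission time, so it exists in the running pod but not in the parent controller's spec, which is why a resize action against that controller cannot update it.

```yaml
# Hypothetical parent Deployment: its pod template defines only the
# application container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  template:
    spec:
      containers:
      - name: app
        image: payments-api:1.0
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
# The running pod, after injection, also carries a side-car container
# (e.g. istio-proxy) whose limits/requests are controlled by the
# injection machinery, not by the Deployment above. KubeTurbo therefore
# places such container specs in the recommend-only side-car group.
```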
Operator Controlled Workloads Recommend Only
Starting in 8.10.6, KubeTurbo by default auto-creates Container Spec groups for workloads controlled by an Operator, along with a new default policy that sets those resize actions to recommend only so they cannot be automated. Additionally, if someone tries to create a policy that automates those resize actions, that policy is prevented/blocked from being created.
- Group name: "Operator Controlled ContainerSpecs" (one per cluster)
- Default policy name: "Operator Controlled ContainerSpecs Resize Recommend Only [k8s cluster name]"
If you want to allow specific Operator-controlled workloads to execute resize actions when an ORM is deployed, you can add those workloads to an exclusion list defined in a configmap. The following example shows how to modify the kubeturbo Custom Resource (called kubeturbo-release in this example), which is part of the Operator Hub / Operator deployment method. You can use one or both of two filter parameters to exclude workloads:
- Workload names = operatorControlledWorkloadsPatterns
- Namespace names = operatorControlledNamespacePatterns
kubectl edit kubeturbo kubeturbo-release
spec:
  exclusionDetectors:
    operatorControlledWorkloadsPatterns:
    - turbon.*
    - testing-.*
    operatorControlledNamespacePatterns:
    - turbonomic
    - gke-.*
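The log output later in this section shows that these patterns are treated as regular expressions. The following sketch illustrates the intended matching behavior (the helper function and example names are hypothetical; KubeTurbo's exact matching semantics may differ):

```python
import re

# Exclusion patterns from the Custom Resource example above.
workload_patterns = ["turbon.*", "testing-.*"]
namespace_patterns = ["turbonomic", "gke-.*"]

def is_excluded(name, patterns):
    """Return True if the name fully matches any exclusion pattern."""
    return any(re.fullmatch(p, name) for p in patterns)

# Workloads or namespaces that match are excluded from the
# "Operator Controlled ContainerSpecs" recommend-only group,
# so their resize actions can be executed.
print(is_excluded("turbonomic-t8c-operator", workload_patterns))  # True
print(is_excluded("payments-api", workload_patterns))             # False
print(is_excluded("gke-managed-system", namespace_patterns))      # True
```

Note that a bare pattern like turbonomic contains no wildcard, so under full-match semantics it would exclude only that exact namespace name.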
Verify the configmap has been updated
After applying the updates, there are two ways to confirm that they were applied correctly.
First, check the configmap to verify the changes were applied; see the example below:
kubectl get configmap turbo-config-kubeturbo-release -o yaml
Check the turbo-autoreload.config section for the updates applied above:
apiVersion: v1
data:
  turbo-autoreload.config: |-
    {
      ...
      "exclusionDetectors": {
        "operatorControlledWorkloadsPatterns": ["turbon.*","testing-.*"],
        "operatorControlledNamespacePatterns": ["turbonomic","gke-.*"]
      }
    }
Second, check the kubeturbo logs to ensure the changes were picked up correctly. Look for the following entries:
kubectl logs kubeturbo-release-7cb4b886c5-glv9b | grep exclusion
The output will look similar to the following:
I1122 14:43:07.603989 dynamic_config.go:119] Operator controlled workload exclusion set to: []
I1122 14:43:07.604013 dynamic_config.go:125] Operator controlled namespace exclusion set to: [turbonomic]
I1122 19:14:52.314970 dynamic_config.go:119] Operator controlled workload exclusion set to: [turbon.* testing-.*]
I1122 19:14:52.315096 dynamic_config.go:125] Operator controlled namespace exclusion set to: [turbonomic gke-.*]
System Namespace Workloads Disabled
Starting in 8.10.6 there is a default behavior for workloads discovered in a set of well-known system namespaces will automatically have resize actions disabled. There will be a new setting in the KubeTurbo configmap
to create a new group and policy to disable resize actions for Container Spec workloads in the "System namespaces" such as those starting with kube-.*
, openshift-.*
, cattle.*
:
- Group name: "System Namespaced ContainerSpecs" (one per cluster)
- Default policy name: "System Namespaced ContainerSpecs Resize Disabled [k8s cluster name]"
To apply this configuration to the configmap, the following example shows how to modify the kubeturbo Custom Resource (called kubeturbo-release in this example), which is part of the Operator Hub / Operator deployment method. You identify namespace name patterns; when a namespace matches, all workloads discovered in it are placed into a group for which resize action generation is disabled.
kubectl edit kubeturbo kubeturbo-release
spec:
  systemWorkloadDetectors:
    namespacePatterns:
    - kube-.*
    - openshift-.*
    - cattle.*
Verify the configmap has been updated
After applying the updates, there are two ways to confirm that they were applied correctly.
First, check the configmap to verify the changes were applied; see the example below:
kubectl get configmap turbo-config-kubeturbo-release -o yaml
Check the turbo-autoreload.config section for the updates applied above:
apiVersion: v1
data:
  turbo-autoreload.config: |-
    {
      ...
      "systemWorkloadDetectors": {
        "namespacePatterns": ["kube-.*","openshift-.*","cattle.*"]
      }
    }
Second, check the kubeturbo logs to ensure the changes were picked up correctly. Look for the following entries:
kubectl logs kubeturbo-release-7cb4b886c5-glv9b | grep -i "namespace det"
The output will look similar to the following:
I1122 14:43:07.603966 dynamic_config.go:113] System Namespace detectors set to: [kube-.*]
I1122 19:14:52.314883 dynamic_config.go:113] System Namespace detectors set to: [kube-.* openshift-.* cattle.*]
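As a companion sketch (the helper function is hypothetical; KubeTurbo's exact matching semantics may differ), note that the pattern cattle.* has no hyphen, so under regular-expression matching it covers any namespace whose name starts with cattle, including cattle-system:

```python
import re

# System namespace patterns from the configuration above.
patterns = ["kube-.*", "openshift-.*", "cattle.*"]

def is_system_namespace(namespace):
    """Return True if the namespace fully matches any system pattern."""
    return any(re.fullmatch(p, namespace) for p in patterns)

for ns in ["kube-system", "openshift-monitoring", "cattle-system", "default"]:
    print(ns, "->", is_system_namespace(ns))
# kube-system -> True
# openshift-monitoring -> True
# cattle-system -> True
# default -> False
```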
NodePool and MachineSet min/max settings
Turbonomic manages cluster resources based on changing demand (see the Node Scaling use case) and makes recommendations to provision and suspend nodes in a node pool. These scaling decisions have a default minimum of 1 node and a default maximum of 1000 nodes (the maximum will be changed to 10). As of 8.10.1, you can adjust the default min/max node settings for all discovered OpenShift MachineSets in that OpenShift cluster, including setting a minimum of 0. Note: Turbonomic will not be able to increase the number of nodes up from 0; the user is expected to have a way to create the first node in a MachineSet.
This configuration change applies to all MachineSets. You adjust a parameter called nodePoolSize, which is defined in the configMap as shown below:
data:
  turbo-autoreload.config: |-
    {
      "nodePoolSize": {
        "min": 1,
        "max": 1000
      }
    }
Changing the default values
You can change the defaults to any values you like in each KubeTurbo deployment by updating the kubeturbo Custom Resource:
oc edit kubeturbo kubeturbo-release
spec:
  nodePoolSize:
    max: 100
    min: 2
Verify the configmap has been updated
After applying the updates, verify they were applied correctly by checking the configmap; see the example below:
oc get configmap turbo-config-kubeturbo-release -o yaml
data:
  turbo-autoreload.config: |-
    {
      "nodePoolSize": {
        "min": 2,
        "max": 100
      }
    }
Node Scaling Future Enhancements
We will be enhancing the configuration option to allow a user to customize min/max settings per discovered OCP MachineSet, and expand the use case to NodePools from other supported Kubernetes platforms. The parameter will also be promoted to a policy setting in the UI for ease of use.