SCHEDULING IN K8s - pracip96/K8s-Learning GitHub Wiki

MANUAL SCHEDULING

This can be done by specifying nodeName as a field in the spec section of the pod; the pod will then get scheduled to that node. But this works only for a new pod. To manually schedule a POD that is already running, we need to create a Binding object, set the target node name in it, and send a POST request to the pod's binding API, with the data set in JSON format.
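As a minimal sketch (the node name node01 and pod name nginx are assumed for illustration), the two approaches look like this:

```yaml
# Approach 1: set nodeName directly in a NEW pod's spec (bypasses the scheduler)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01          # pod is placed on this node without the scheduler
  containers:
  - name: nginx
    image: nginx
---
# Approach 2: for an ALREADY RUNNING pod, create a Binding object and POST its
# JSON form to the pod's binding API, e.g.
# /api/v1/namespaces/default/pods/nginx/binding
apiVersion: v1
kind: Binding
metadata:
  name: nginx               # name of the pod being bound
target:
  apiVersion: v1
  kind: Node
  name: node01              # node the pod should be bound to
```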

kubectl replace --force -f nginx.yaml [This deletes and recreates the pod, enforcing the change]

TAINTS and TOLERATIONS

TOLERATIONS are applied at the POD level; TAINTS are applied at the NODE level.

kubectl taint nodes <node_name> key=value:<effect> [Valid values of effect are NoSchedule, PreferNoSchedule, NoExecute]

kubectl taint node node01 app=blue:NoSchedule

TOLERATIONS are added under spec.tolerations in the pod definition YAML.
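A sketch of a pod that tolerates the app=blue:NoSchedule taint applied above (the pod name nginx is assumed for illustration):

```yaml
# Pod that tolerates the taint "app=blue:NoSchedule" set on node01
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:              # values must be quoted strings
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
```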

Also, PODS do not get scheduled on the MASTER node, because the MASTER node has a TAINT applied by default.
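This default taint looks roughly like the fragment below as seen in the node's spec (the exact key depends on the Kubernetes version: older releases use node-role.kubernetes.io/master, newer ones use node-role.kubernetes.io/control-plane):

```yaml
# Taint present on a control-plane node by default; pods without a matching
# toleration are never scheduled here
taints:
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
```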

GENERATION of POD YAML

kubectl run <pod_name> --image=<image_name> --dry-run=client -o yaml > pod_definition.yaml

NODE-SELECTOR

With a node selector, we place our workload only on node01, which is labelled size=large.
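A sketch of such a pod (the pod and image names data-processor are assumed for illustration):

```yaml
# Pod restricted to nodes carrying the label size=large via nodeSelector
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: data-processor
    image: data-processor
  nodeSelector:
    size: large             # only nodes labelled size=large are eligible
```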

To label the nodes, run the below cmd: kubectl label nodes <node_name> <label_key>=<label_value>, e.g. kubectl label nodes node01 size=large

We used single label and selector above.

To have multiple combinations of the above, such as placing pods on nodes of size Large or Medium, or on any node that is not Small, we use NODE-AFFINITY & ANTI-AFFINITY.

NODE-AFFINITY

Helps us schedule a POD on PARTICULAR NODES using label-based expressions.

The In operator matches nodes whose label value is in a given list; the Exists operator only checks that the LABEL key is present on the NODE (no values are needed).
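A sketch of node affinity using the In operator (pod name and the size label are assumptions carried over from the node-selector example):

```yaml
# Pod scheduled only on nodes whose "size" label is large or medium
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: data-processor
    image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In        # use "operator: Exists" (with no values list)
            values:             # to match any node that simply has the key
            - large
            - medium
```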

Node Affinity Types:

requiredDuringSchedulingIgnoredDuringExecution
preferredDuringSchedulingIgnoredDuringExecution

requiredDuringSchedulingIgnoredDuringExecution: if no node with the specified label is found, the pod will not be scheduled.

preferredDuringSchedulingIgnoredDuringExecution: if pod placement is not crucial, but getting the pod running is, then the preferred affinity is useful; the scheduler ignores the rule when no matching node exists.
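A sketch of the preferred (soft) variant, which adds a weight field (the size=large preference is an assumption for illustration):

```yaml
# Soft affinity: the scheduler prefers size=large nodes but will fall back to
# any other node rather than leave the pod pending
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: data-processor
    image: data-processor
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1             # 1-100; higher weights are preferred more strongly
        preference:
          matchExpressions:
          - key: size
            operator: In
            values:
            - large
```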

What if the Kubernetes admin removes the LABELS from the nodes? Then the second part, IgnoredDuringExecution, comes into the picture: pods already running will continue to run on the node.

PLANNED (future) types:
requiredDuringSchedulingRequiredDuringExecution
preferredDuringSchedulingRequiredDuringExecution
