# Namespaces, Resource Quota, LimitRange - q-uest/notes-doc-k8s-docker-jenkins-all-else GitHub Wiki
## Namespaces
Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.
Namespaces are a way to divide cluster resources between multiple users (via resource quota).
By default, depending on the installation method, Kubernetes creates four namespaces:

- **default** - The default namespace for objects with no other namespace.
- **kube-system** - The namespace for objects created by the Kubernetes system.
- **kube-public** - Created automatically and readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
- **kube-node-lease** - Holds the Lease objects associated with each node, which improves the performance of node heartbeats as the cluster scales.
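These built-in namespaces can be listed on any cluster:

```shell
# List all namespaces in the cluster
kubectl get namespaces

# Show details of one of the built-in namespaces
kubectl describe namespace kube-system
```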
We can create a namespace using the command line or a YAML file.

Using the command line:

```
kubectl create ns <namespace_name>
```

Using a YAML file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
```
**Setting the namespace preference:** You can permanently save the namespace for all subsequent kubectl commands in that context.

```
kubectl config set-context --current --namespace=<insert-namespace-name-here>
# Validate it
kubectl config view --minify | grep namespace:
```

To delete a namespace:

```
kubectl delete ns <namespace_name>
```
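Once a namespace exists, resources can be created in or read from it with the `-n`/`--namespace` flag. A quick sketch (the namespace name `dev` is just an example):

```shell
# Create a deployment inside a specific namespace (example name: dev)
kubectl create deployment nginx --image=nginx -n dev

# List pods in that namespace rather than the current default
kubectl get pods -n dev

# List pods across all namespaces
kubectl get pods --all-namespaces
```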
## Resource Quota
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern.
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
### Enabling Resource Quota
Resource quota support is enabled by default in many Kubernetes distributions. It is enabled when the API server's `--enable-admission-plugins=` flag includes `ResourceQuota` as one of its arguments.
A resource quota is enforced in a particular namespace when there is a ResourceQuota in that namespace.
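Besides applying a YAML manifest (as in the examples below), a quota can also be created imperatively. A minimal sketch - the quota name and limits here are illustrative:

```shell
# Create a ResourceQuota named example-quota in the current namespace
kubectl create quota example-quota --hard=pods=4,services=10

# Confirm the quota is now enforced in the namespace
kubectl get quota example-quota
```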
### Example 1: Basic Resource Quota
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts-3
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    pods: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
```
Checking the quota after applying the above ResourceQuota object in the required namespace:

```
kubectl get quota
NAME              AGE   REQUEST                                                                                                                                      LIMIT
object-counts-3   5s    configmaps: 1/10, persistentvolumeclaims: 0/4, pods: 1/4, replicationcontrollers: 0/20, secrets: 1/10, services: 0/10, services.loadbalancers: 0/2
```
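`kubectl describe quota` shows the same usage information in a more readable, multi-line form (add `-n <namespace>` if the quota is not in the current namespace):

```shell
kubectl describe quota object-counts-3
```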
Let's test the resource quota by creating 5 replicas of the deployment below:

dep.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
      - name: voting-app
        image: kodekloud/examplevotingapp_vote:v1
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app
  replicas: 5
```

```
kubectl apply -f dep.yaml
```
Check whether all 5 pods are created:

```
NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/voting-app-deploy-547678ccc7   5         3         3       19s
```

It created only 3 pods due to the limit set by the resource quota (one pod already existed in the namespace, so the quota of 4 was reached after 3 new pods).

Upon describing the ReplicaSet, we see a warning explaining why it could not create all 5 pods as requested:

```
Error creating: pods "voting-app-deploy-547678ccc7-978zs" is forbidden: exceeded quota: object-counts-3, requested: pods=1, used: pods=4, limited: pods=4
```

After bumping up the pod count in the quota, the deployment came up with all 5 pods.
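One way to bump the pod count without re-editing the manifest is to patch the existing quota object in place (the new value of 10 below is illustrative):

```shell
# Raise the pods limit on the existing ResourceQuota
kubectl patch resourcequota object-counts-3 --patch '{"spec":{"hard":{"pods":"10"}}}'

# Verify the new limit
kubectl get quota object-counts-3
```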
### Example: Multiple ResourceQuota objects limiting the same resource
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts-10
spec:
  hard:
    pods: "10"
```
The above sets the pod quota to 10, but another ResourceQuota object in the namespace already sets the same resource to 4. Which one does it take into consideration?

**Creating the same deployment as above with `replicas: 10` failed: when multiple quota objects limit the same resource, the most restrictive (lowest) value wins.**
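This can be verified by scaling the deployment and watching the ReplicaSet - pod creation stops at the lower of the two quotas:

```shell
# Ask for 10 replicas; only as many pods as the strictest quota allows are created
kubectl scale deployment voting-app-deploy --replicas=10

# The ReplicaSet shows fewer CURRENT pods than DESIRED
kubectl get rs

# Both quota objects and their usage can be compared side by side
kubectl get quota
```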
### Example: Resource Quota with compute resources
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
```
Now, check whether the above deployment can be rolled out:

```
kubectl apply -f dep.yaml
```

It did not create the deployment's pods, and failed with the below error in the ReplicaSet of the deployment:

```
Error creating: pods "voting-app-deploy-547678ccc7-9nqzs" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
```

It is clear from the message above that pod creation fails if the pod spec does not set resource requests/limits for every compute resource the quota tracks.
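Instead of editing the YAML by hand, requests/limits can also be added to an existing deployment with `kubectl set resources` (the values below are illustrative):

```shell
# Add CPU/memory requests and limits to every container in the deployment
kubectl set resources deployment voting-app-deploy \
  --requests=cpu=250m,memory=250Mi \
  --limits=cpu=250m,memory=250Mi
```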
Checking the resource quotas and their current usage:

```
kubectl get resourcequota
NAME                AGE   REQUEST                                                                       LIMIT
compute-resources   19h   requests.cpu: 0/1, requests.memory: 0/1Gi, requests.nvidia.com/gpu: 0/4      limits.cpu: 0/2, limits.memory: 0/2Gi
object-counts-10    19h   pods: 0/10
object-counts-3     19h   configmaps: 1/10, persistentvolumeclaims: 0/4, pods: 0/4, replicationcontrollers: 0/20, secrets: 1/10, services: 0/10, services.loadbalancers: 0/2
```
Update dep.yaml with resource requests/limits as below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
      - name: voting-app
        image: kodekloud/examplevotingapp_vote:v1
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "250Mi"
            cpu: "250m"
          limits:
            memory: "250Mi"
            cpu: "250m"
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app
  replicas: 2
```

```
kubectl apply -f dep.yaml
```
The above deployment is created without any issues now.

The current status of the resource quotas:

```
kubectl get resourcequota
NAME                AGE   REQUEST                                                                                LIMIT
compute-resources   19h   requests.cpu: 500m/1, requests.memory: 500Mi/1Gi, requests.nvidia.com/gpu: 0/4        limits.cpu: 500m/2, limits.memory: 500Mi/2Gi
object-counts-10    19h   pods: 2/10
object-counts-3     19h   configmaps: 1/10, persistentvolumeclaims: 0/4, pods: 2/4, replicationcontrollers: 0/20, secrets: 1/10, services: 0/10, services.loadbalancers: 0/2
```
======
### Example: Bump up the number of replicas of the deployment (in dep.yaml) to 5
It created only 4 pods out of the 5 requested:

```
kubectl get deployment
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
voting-app-deploy   4/5     4            4           2m32s
```

Error with the deployment's ReplicaSet:

```
exceeded quota: compute-resources, requested: requests.cpu=250m,requests.memory=250Mi, used: requests.cpu=1,requests.memory=1000Mi, limited: requests.cpu=1,requests.memory=1Gi
```

Four pods already consume 4 x 250m = 1 CPU, the entire requests.cpu quota, so the fifth pod is rejected.
Bump up and apply the resource quota:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1.5"
    requests.memory: 1.5Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
```
The requests.cpu and requests.memory values have been updated as above and applied. Here is the current status (note requests.cpu is now 1/1500m and requests.memory 1000Mi/1536Mi):

```
kubectl get resourcequota
NAME                AGE   REQUEST                                                                                  LIMIT
compute-resources   19h   requests.cpu: 1/1500m, requests.memory: 1000Mi/1536Mi, requests.nvidia.com/gpu: 0/4     limits.cpu: 1/2, limits.memory: 1000Mi/2Gi
object-counts-10    19h   pods: 4/10
object-counts-3     19h   configmaps: 1/10, persistentvolumeclaims: 0/4, pods: 4/5, replicationcontrollers: 0/20, secrets: 1/10, services: 0/10, services.loadbalancers: 0/2
```
Delete and re-apply dep.yaml, then check whether it creates all the replicas:

```
kubectl delete -f dep.yaml
kubectl apply -f dep.yaml
```

It created all 5 replicas as requested now.
========
## LimitRange
With resource quotas, cluster administrators can restrict resource consumption and creation on a namespace basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace's resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
A LimitRange provides constraints that can:
- Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
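As a sketch of the storage constraint above, a LimitRange of type PersistentVolumeClaim might look like this (the name and sizes are illustrative):

```shell
# Apply an illustrative LimitRange constraining PVC storage requests
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limit-range
spec:
  limits:
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi
    max:
      storage: 10Gi
EOF
```

With this in place, any PVC in the namespace requesting less than 1Gi or more than 10Gi of storage is rejected at admission.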
limitrange.yaml:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "500m"
      memory: "500Mi"
    min:
      cpu: "200m"
      memory: "250Mi"
    type: Container
```
In the above, the LimitRange is set to the "Container" type. It is also possible to set it to the "Pod" type.
```
kubectl describe limitrange/cpu-min-max-demo-lr
Name:       cpu-min-max-demo-lr
Namespace:  qa
Type        Resource  Min    Max    Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---    ---------------  -------------  -----------------------
Container   cpu       200m   500m   500m             500m           -
Container   memory    250Mi  500Mi  500Mi            500Mi          -
```
Create a pod to check whether it enforces the min values given for CPU/memory (200m and 250Mi respectively).
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
```
This pod requests CPU/memory values lower than those set in the enforced LimitRange above. In this case, the purpose of the LimitRange is to make sure that every pod requests at least 200m CPU and 250Mi of memory. Creating the pod fails with the below error:

```
Error from server (Forbidden): error when creating "new1.yaml": pods "nginx" is forbidden: [minimum memory usage per Container is 250Mi, but request is 100Mi, minimum cpu usage per Container is 200m, but request is 100m]
```
Update the pod spec with revised memory/CPU requests:

```yaml
    resources:
      requests:
        cpu: "300m"
        memory: "400Mi"
```

The above succeeds as it complies with the min rules set by the LimitRange.
Note:
If the above Pod does not include resource requests/limits, it will still get created: those values are derived from the LimitRange object, which uses its max values as the defaults for both default requests and default limits. However, if there is also a ResourceQuota set for the namespace and the derived values do not fit within it, the Pod won't be created, as the ResourceQuota takes priority over the LimitRange defaults and per its rules does not allow the Pod to be created.
=====
### Example: Setting default resource limits/requests for pods that do not specify any resource values
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
```
Describing the LimitRange:

```
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    256Mi            512Mi          -
```
Create a pod without any resource specification:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
```
Describing the pod shows the resource requests/limits set per the LimitRange above:

```
Limits:
  memory: 512Mi
Requests:
  memory: 256Mi
```