# Kubernetes
**Answer:** A container is a lightweight, portable unit that packages:
- The application's code
- Its dependencies (libraries, binaries)
- System-level settings

All of this ships in one package that runs the same way in any environment, whether dev or production.

**Example:** Docker container, containerd

**Advantages:**
- Fast startup
- Lightweight (smaller than VMs)
- Easy to scale and deploy
**Answer:** A VM is a full-fledged OS that runs on top of a hypervisor.
A VM contains:
- A full operating system (Linux/Windows)
- Its own virtual hardware (CPU, RAM, disk)
- The application

**Disadvantages:**
- Heavyweight (slow boot, higher RAM/CPU usage)
- Each VM carries its own OS, so the overhead is high
**Answer:** A Kubernetes cluster is divided into two parts: the control plane and the worker nodes.

Control plane components:
- kube-apiserver: the most important component; all communication goes through it
- etcd: the cluster's brain; it stores all cluster data
- kube-scheduler: decides which node a pod will run on
- kube-controller-manager: manages cluster state (e.g., pod replicas, node status)
- cloud-controller-manager: handles cloud-specific tasks (optional)

Worker node components:
- kubelet: manages the pods running on its node
- kube-proxy: manages networking rules (ClusterIP, LoadBalancer, etc.)
- Container runtime: e.g., Docker or containerd; it actually runs the containers
**Answer:** etcd is a key-value database that stores the data for the entire Kubernetes cluster.

It stores:
- Data for all pods, services, secrets, and config maps
- The cluster's current state
- The single source of truth for the control plane

Why is it important?
- If etcd goes down, the whole cluster can fail
- That is why backing up etcd is critical (see the snapshot sketch below)
- It uses the Raft algorithm so that data stays consistent across all nodes
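Since backups are critical, here is a minimal sketch of taking an etcd snapshot with `etcdctl`; the endpoint and certificate paths assume a kubeadm-style cluster and may differ in your setup:

```bash
# Take a snapshot of etcd (paths assume a kubeadm-installed cluster)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db
```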
**Answer:** The container runtime is the software that actually creates, runs, and stops containers.

In Kubernetes:
- Kubernetes does not run containers directly
- It uses a container runtime (such as Docker, containerd, or CRI-O) via the kubelet

Common container runtimes:
- Docker (legacy)
- containerd (lightweight and CNCF recommended)
- CRI-O (designed specifically for Kubernetes)

**Summary:** without a container runtime, Kubernetes can only plan, not execute.
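To verify which runtime a cluster's nodes are actually using:

```bash
# The CONTAINER-RUNTIME column shows each node's runtime and version
kubectl get nodes -o wide
```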
**Answer:** These prerequisites apply to the master node (similar ones apply to workers):
- 2 or more Linux nodes (Ubuntu/CentOS)
- Swap disabled
- Hostname, firewall, and a container runtime (e.g., containerd) set up
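A rough sketch of the bootstrap flow with kubeadm, assuming containerd is already installed and that the pod CIDR matches your chosen CNI (10.244.0.0/16 is Flannel's default):

```bash
# Disable swap (the kubelet requires this)
sudo swapoff -a

# Initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node, run the `kubeadm join ...` command printed by `kubeadm init`
```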
**Answer:** A Pod is the smallest deployable unit in Kubernetes. Inside a pod:
- There can be one or more containers
- All containers share the same network namespace (IP, ports) and volumes

**Example:** if you have an app container and a logging sidecar container, both live in the same pod (see the sketch below).

Relation to containers:
- Kubernetes does not deploy containers directly
- Every container runs inside a pod

To remember it:
Pod = Container(s) + Shared Environment
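A minimal sketch of the app-plus-sidecar pattern from the example above; the image names and the shared log path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: nginx                      # main application container
    volumeMounts:
    - mountPath: /var/log/nginx
      name: logs
  - name: log-sidecar
    image: busybox                    # sidecar tailing the shared log volume
    command: ["sh", "-c", "touch /logs/access.log; tail -f /logs/access.log"]
    volumeMounts:
    - mountPath: /logs
      name: logs
  volumes:
  - name: logs
    emptyDir: {}                      # shared between both containers
```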
**Answer:** kube-scheduler is the part of the Kubernetes control plane that decides which pod runs on which node.

What the scheduler checks:
- Resource requests (CPU, memory)
- Taints & tolerations
- Node affinity / anti-affinity
- Pod affinity / anti-affinity
- Node health status (Ready / NotReady)
- Constraints and custom policies

Important point:
The scheduler only assigns the pod; actually running it is the kubelet's job. An illustration of the resource requests it evaluates is shown below.
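For example, resource requests like these (values illustrative) are what the scheduler matches against a node's free capacity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:            # the scheduler uses these to pick a node
        cpu: "250m"
        memory: "128Mi"
      limits:              # the kubelet enforces these at runtime
        cpu: "500m"
        memory: "256Mi"
```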
**Answer:** A StatefulSet is a controller that manages stateful apps (such as databases or Kafka), where pod identity matters.

Pod management policy has 2 modes:

- OrderedReady (default)
  - Pods are created one after another (0 → 1 → 2 …)
  - The next pod is created only once the previous pod is Ready
  - Useful for cases like DB cluster setup
- Parallel
  - All pods are created at the same time
  - Order does not matter
  - Useful for fast rollouts

**Example YAML:**
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  podManagementPolicy: Parallel
```
**Answer:**
To update a Deployment, you can use the `kubectl apply` or `kubectl set image` commands.
- Update the image version in the YAML file, then run: `kubectl apply -f deployment.yaml`
- Update the image directly: `kubectl set image deployment/<name> <container>=<image>:<tag>`

Kubernetes uses a rolling update strategy, so there is no downtime.
**Answer:** Autoscaling is a bit tricky for stateful applications because:
- StatefulSet pods have unique identities.
- Scale-out and scale-in order must be maintained.
- The Horizontal Pod Autoscaler (HPA) can be used with a StatefulSet, but custom metrics and readiness probes must be set up correctly (see the sketch below).

In some cases vertical scaling or custom operators are a better fit.
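A minimal sketch of an HPA targeting a StatefulSet; the name `my-db` and the thresholds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-db-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet      # HPA can target StatefulSets, not just Deployments
    name: my-db
  minReplicas: 3
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```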
**Answer:** Kubernetes has 3 storage access modes:

- ReadWriteOnce (RWO): the volume can be mounted read/write by a single node.
- ReadOnlyMany (ROX): the volume can be mounted read-only by multiple nodes.
- ReadWriteMany (RWX): the volume can be mounted read/write by multiple nodes.

These modes are defined on the PersistentVolume (PV) and PersistentVolumeClaim (PVC).
**Answer:**
- Node Affinity: defines rules to schedule pods onto specific nodes.
  Example: a pod should run only on nodes labeled `zone=us-east-1a`.
- Node Anti-Affinity: used to keep pods away from certain nodes or groups.
  Example: pods of the same app should not all land on one node, so a node failure can be survived.

A sketch of a node affinity rule follows.
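A minimal sketch of the `zone=us-east-1a` rule from the example above; the label key is illustrative (clouds commonly use `topology.kubernetes.io/zone`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone            # hypothetical label; match it to your nodes' labels
            operator: In
            values:
            - us-east-1a
  containers:
  - name: app
    image: nginx
```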
**Answer:** The scheduler places pods on nodes whose resources match the pod's requirements:
- CPU and memory requests/limits
- Node health (Ready)
- Node taints/tolerations
- Affinity/anti-affinity rules
- Custom scheduling policies

The scheduler first filters the available nodes, then selects the best-fit node.
**Answer:**
To configure a Horizontal Pod Autoscaler (HPA):
- Install the metrics server (it provides resource metrics).
- Create the HPA resource:
```
kubectl autoscale deployment <deployment-name> --cpu-percent=50 --min=1 --max=5
```
This scales the deployment based on CPU usage.
**Answer:** To install a network plugin (CNI) such as Calico, Flannel, or Weave Net in Kubernetes:
- After cluster init: `kubectl apply -f <plugin-manifest.yaml>`
- Example (Calico):
```
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
The plugin handles pod-to-pod networking, IP management, and network policies in the cluster.
**Answer:** In a High Availability (HA) setup there are multiple control-plane nodes:
- Put the control-plane nodes behind a load balancer.
- Configure the etcd cluster with 3+ nodes.
- Worker nodes connect to the control plane through the load balancer.
- Ensure kube-apiserver, scheduler, and controller-manager are redundant; a kubeadm sketch follows.
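A rough kubeadm sketch of the HA bootstrap; the load-balancer DNS name is a placeholder for your environment:

```bash
# On the first control-plane node: point the cluster at the load balancer
sudo kubeadm init \
  --control-plane-endpoint "lb.example.com:6443" \
  --upload-certs

# On each additional control-plane node, run the join command printed above,
# which includes the --control-plane and --certificate-key flags
```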
**Answer:** kube-controller-manager is a collection of controllers that maintain cluster state:
- Node controller (node health checks)
- Replication controller (maintains desired pod replicas)
- Endpoints controller (manages service endpoints)
- Service account & token controllers

It continuously reconciles the cluster's current state with the desired state.
**Answer:** The kubelet is an agent running on each node that:
- Receives commands from the control plane
- Starts/stops/manages pods and containers
- Reports node health
- Monitors resource usage

The kubelet plays a central role in node reliability.
**Answer:** kube-proxy is responsible for networking in the cluster:
- Runs on each node and creates the networking rules for services.
- Routes traffic between pods and services.
- Provides load balancing across multiple pods.
- Uses iptables or IPVS to manage network traffic.
**Answer:**
Helm is a package manager for Kubernetes. It helps you deploy, configure, and manage Kubernetes applications easily. Helm charts are predefined templates that deploy applications into a Kubernetes cluster.
With Helm, complex apps can be installed or updated with a single command, which makes the deployment process fast and less error-prone.
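A minimal sketch of the typical Helm flow, using the public Bitnami chart repository as an example:

```bash
# Add a chart repository and refresh the index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release, overriding a value
helm install my-nginx bitnami/nginx --set service.type=ClusterIP

# Upgrade or roll back the release later
helm upgrade my-nginx bitnami/nginx
helm rollback my-nginx 1
```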
**Answer:**
A Deployment is a Kubernetes resource that manages pods and ReplicaSets declaratively. Its job is to define the desired state (how many pods you want, which image to use).
The Deployment continuously maintains the desired state in the cluster:
- If a pod fails, it creates new pods.
- On updates, it performs a rolling update.
**Answer:**
- Rolling Update: the Deployment's pods are updated gradually with no downtime. Old pods are terminated step by step while new pods start.
- Rollback: if an update causes a problem, the Deployment can be returned to the previous stable version with `kubectl rollout undo deployment/<name>`.
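The related rollout commands in one place; the deployment name is a placeholder:

```bash
# Watch an in-progress rolling update
kubectl rollout status deployment/<name>

# Inspect previous revisions
kubectl rollout history deployment/<name>

# Roll back to the previous revision (or a specific one)
kubectl rollout undo deployment/<name>
kubectl rollout undo deployment/<name> --to-revision=2
```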
**Answer:**
(Same as 23, duplicate question)
Rolling updates change pods gradually without downtime. Rollback restores the previous stable version if the new update has a problem.
**Answer:**
- Taints: mark nodes so the scheduler avoids placing pods on them unless the pods have matching tolerations.
- Tolerations: defined on pods to tolerate taints. If a pod has a toleration for a node's taint, it can be scheduled on that node.

Together they give control over where pods may be scheduled; a sketch follows.
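A minimal sketch: taint a node, then let a pod tolerate that taint (the key and value are illustrative):

```bash
# Repel pods from node1 unless they tolerate this taint
kubectl taint nodes node1 dedicated=gpu:NoSchedule
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
```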
**Answer:**
The scheduler decides which nodes pods are scheduled on, based on:
- Resource requests (CPU, memory)
- Node health
- Affinity/anti-affinity rules
- Taints and tolerations
- Available resources

The scheduler is responsible for pod placement in the cluster.
**Answer:**
For data persistence, Kubernetes uses Persistent Volumes (PV) and Persistent Volume Claims (PVC).
- A PV represents a storage resource in the cluster (NFS, cloud disks, etc.).
- A PVC requests storage for pods.

When a pod is rescheduled, the PVC re-attaches the same PV, so no data is lost.
**Answer:**

| Scaling Type | Description | Example |
|---|---|---|
| Horizontal Scaling | Increase the number of pods to distribute the workload | Scale up from 3 pods to 6 pods |
| Vertical Scaling | Increase a single pod's resources (CPU, memory) | Raise a pod's CPU from 1 core to 2 cores |

Horizontal scaling is the more common and more fault-tolerant approach.
**Answer:**
The Reclaim Policy says what happens to a Persistent Volume once it is released:
- Retain: keeps the volume's data; manual cleanup is required.
- Recycle: scrubs the volume and makes it reusable (deprecated).
- Delete: the volume is deleted (usually for cloud storage).
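The policy is set on the PV (or inherited from the StorageClass); a minimal snippet with an illustrative hostPath backend:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep data after the claim is released
  hostPath:
    path: /tmp/pv-demo                    # illustrative backend for a single-node test
```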
**Answer:**
The Metrics Server collects cluster-wide resource usage (CPU, memory). It provides real-time metrics to the HPA (Horizontal Pod Autoscaler), which makes pod scaling possible.
Without the Metrics Server installed, the HPA will not work.
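Installation and a quick check; the manifest URL is the project's published release bundle:

```bash
# Install the Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify metrics are flowing
kubectl top nodes
kubectl top pods --all-namespaces
```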
Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail to launch due to lack of resources or when nodes are underutilized.
- It checks pending pods that can't be scheduled.
- Scales up nodes to accommodate those pods.
- Scales down underutilized nodes (based on thresholds).
Autoscaling in Kubernetes ensures optimal resource usage by dynamically adjusting compute capacity.
- HPA (Horizontal Pod Autoscaler) scales pods based on CPU/memory.
- VPA (Vertical Pod Autoscaler) adjusts resource requests/limits.
- Cluster Autoscaler scales the number of nodes.
Importance:
- Cost-efficiency
- High availability
- Performance optimization
You need to define a PVC and use it in the pod spec under volumes and volumeMounts.

PVC YAML:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Pod YAML:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypvcvol
  volumes:
  - name: mypvcvol
    persistentVolumeClaim:
      claimName: mypvc
```
Use a shared volume between containers defined inside the same pod:
- Define a volume under `volumes:`
- Mount it in each container via `volumeMounts:`

A sketch follows.
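A minimal sketch of two containers sharing an `emptyDir` volume; the images and file path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /shared/out.log; sleep 5; done"]
    volumeMounts:
    - mountPath: /shared
      name: shared-data
  - name: reader
    image: busybox
    command: ["sh", "-c", "touch /shared/out.log; tail -f /shared/out.log"]
    volumeMounts:
    - mountPath: /shared
      name: shared-data
  volumes:
  - name: shared-data
    emptyDir: {}     # lives as long as the pod; visible to both containers
```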
- Use CloudWatch Container Insights for metrics and logs.
- Install Prometheus + Grafana for detailed monitoring.
- Use FluentBit/Fluentd to ship logs to CloudWatch or Elasticsearch.
StorageClass defines the type of storage (like SSD, HDD) and the provisioner.
- Allows dynamic provisioning of volumes.
- Each class uses a provisioner (e.g. AWS EBS).

Example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```
When a PVC is created referencing a StorageClass, Kubernetes auto-provisions a PV using the provisioner defined in that StorageClass.
- Uses VPC CNI Plugin for networking.
- Supports NetworkPolicies via Cilium/Calico.
- Integration with ELB, PrivateLink, App Mesh.
- PV (Persistent Volume): Pre-provisioned or dynamically provisioned volume.
- PVC (Persistent Volume Claim): A request for storage by a user.
- Export manifests from existing cluster.
- Set up EKS via eksctl or console.
- Recreate PVs or migrate via Velero.
- Apply manifests in EKS.
- Use Multi-AZ setup in node groups.
- Enable control plane HA (default in EKS).
- Distribute workloads across zones.
- IAM roles for service accounts (IRSA).
- VPC-level isolation.
- Pod Security Policies or OPA/Gatekeeper.
- Encryption with KMS.
- HPA scales pods.
- Cluster Autoscaler scales nodes.
- Works with EC2 Auto Scaling Groups.
- emptyDir
- hostPath
- configMap
- secret
- persistentVolumeClaim
- awsElasticBlockStore, gcePersistentDisk etc.
HPA scales the number of pods based on CPU/memory or custom metrics.
```
kubectl autoscale deployment myapp --cpu-percent=50 --min=2 --max=5
```
Volumes in Kubernetes are used to store data that persists beyond container restarts.
Importance:
- Share data between containers.
- Persist data between pod restarts.
- Resource usage (CPU/memory)
- Application traffic pattern
- Pod startup time
- Min/Max replicas
- Cost constraints
- Apply deployment manifests
- Rollout updates
- Monitor deployment status
- Port-forward services for testing
- A Headless Service has `clusterIP: None`
- Used with StatefulSets to provide DNS-based stable identities to pods.

Example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  clusterIP: None
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 9376
```
- Ensures each pod has a sticky identity (name + hostname).
- Pod startup and termination happens in order.
- The pod may not reschedule immediately.
- Once the node is marked unreachable, the controller schedules the pod onto another node while preserving its identity.
- Updates are done pod-by-pod in order.
- Risk: Stateful apps may need careful coordination, downtime if misconfigured.
Two types:
- OrderedReady: default; orderly scaling and updates
- Parallel: all at once
- A context defines the cluster, user, and namespace.
```
kubectl config get-contexts
kubectl config use-context mycluster
```
`~/.kube/config` stores all configurations.
To check a Kubernetes cluster's health, you can use `kubectl`. Some basic commands:

- Check cluster info:
  `kubectl cluster-info`
- Check node status:
  `kubectl get nodes`
- Pod health status:
  `kubectl get pods --all-namespaces`
- Describe a specific component:
  `kubectl describe pod <pod-name> -n <namespace>`
A ReplicaSet is a controller that ensures the specified number of pod replicas are in a running state. If a pod crashes, the ReplicaSet creates a new pod.
| Feature | ReplicaSet | Replication Controller |
|---|---|---|
| Label Selector | Supports set-based selectors | Only supports equality-based selectors |
| Usage | Newer and recommended | Old and deprecated |
| Integration | Used with Deployments | Not used with Deployments |
You define a ReplicaSet through a YAML file. Example:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: nginx
```

To apply it:
```
kubectl apply -f replicaset.yaml
```
- View pod logs:
  `kubectl logs <pod-name>`
- Open a shell inside a pod:
  `kubectl exec -it <pod-name> -- /bin/bash`
- Describe a pod or service:
  `kubectl describe pod <pod-name>`
  `kubectl describe svc <service-name>`
- View events:
  `kubectl get events --sort-by='.metadata.creationTimestamp'`
If a pod under a ReplicaSet fails, the ReplicaSet automatically creates a new pod so the desired replica count is maintained.
- Create a resource from a YAML file:
  `kubectl apply -f resource.yaml`
- Edit a resource:
  `kubectl edit <resource-type> <resource-name>`
- Delete a resource:
  `kubectl delete -f resource.yaml`
- Specify multiple replicas in the YAML: `replicas: 3`
- Use anti-affinity rules or taints/tolerations to distribute pods across multiple nodes.
- Use health checks via livenessProbe and readinessProbe.
Updating a ReplicaSet directly can cause disruption. The best practice is to manage ReplicaSets through a Deployment.
If you do need a direct update:
- Update the image or config in the YAML
- Apply it again:
  `kubectl apply -f replicaset.yaml`

Or:
```
kubectl set image rs/my-replicaset myapp-container=nginx:1.21
```
But the preferred approach is to use a Deployment for rolling updates.
A ReplicaSet is used when we need to ensure that a specific number of pod replicas are always running in the cluster. Some common use cases:
- Ensuring high availability
- Load balancing across multiple pod replicas
- Automatic pod replacement on failure
A Deployment is a higher-level abstraction that manages ReplicaSets. Through a Deployment we:
- Create/update ReplicaSets
- Perform rolling updates and rollbacks

The Deployment specification includes the ReplicaSet's pod template.
Yes, a ReplicaSet can be scaled manually or programmatically.
Command:
```
kubectl scale rs <replicaset-name> --replicas=5
```
In a StatefulSet, Persistent Volumes (PV) play a crucial role because they ensure unique data persistence for each pod. Even if a pod is deleted, its volume is preserved; see the sketch below.
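StatefulSets usually declare this with `volumeClaimTemplates`, which creates one PVC per pod; a minimal sketch (names and sizes illustrative, and it assumes a matching headless service named `web`):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # headless service providing stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumeClaimTemplates:     # one PVC per pod: data-web-0, data-web-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```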
Kubernetes gives every StatefulSet pod a stable identity, including:
- A stable DNS name (such as `web-0`, `web-1`)
- Stable storage (PVC)

This means pods keep the same identity even after being rescheduled.
| Feature | Deployment | StatefulSet |
|---|---|---|
| Pod identity | Dynamic | Stable |
| Storage | Shared/Ephemeral | Persistent per pod |
| Use case | Stateless apps | Stateful apps |
If a Deployment's pods are on a failed node, the kube-scheduler automatically reschedules them onto healthy nodes, provided resources are available for them.
The desired state is the configuration the user defines, such as the number of replicas, the image version, etc. Kubernetes controllers constantly try to match the current state to the desired state.
Replicas ensure the application is highly available. If a pod fails, the Deployment automatically creates new pods to match the desired replica count.
A privileged pod is one that gets access to the host machine's low-level resources, such as:
- Host network
- Kernel modules
- System devices

YAML example:
```yaml
securityContext:
  privileged: true
```
If a pod cannot be scheduled (due to lack of resources, node taints, etc.):
- It stays in the `Pending` state
- The scheduler keeps retrying until a suitable node is found
- `kubectl describe pod <pod-name>` shows the reason for the failure
A DaemonSet runs one pod on every node. It is typically used for logging, monitoring, and network agents.
- As soon as a new node joins, the DaemonSet's pod is deployed onto it (see the sketch below)
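A minimal DaemonSet sketch for a node-level agent; the image is a placeholder (real agents like Fluentd or node-exporter follow the same shape):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox                 # placeholder for a logging/monitoring agent
        command: ["sh", "-c", "while true; do echo heartbeat; sleep 60; done"]
```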
After a scheduling failure, Kubernetes keeps retrying until:
- Resources become available
- Taints/tolerations or affinity rules are satisfied

The pod stays in the `Pending` state, and `kubectl describe` shows the cause of the failure.
Same as Q74. The pod stays in the `Pending` state until the scheduler can find a node for it. Common causes:
- Resource shortage
- Taints with no matching tolerations
- Affinity/anti-affinity mismatch
Tools like:
- Prometheus + Grafana
- EFK stack (Elasticsearch, Fluentd, Kibana)
- Metrics Server

Commands:
```
kubectl top pod
kubectl logs <pod-name>
```
In a microservices architecture, the app is split into multiple independent services. Containers provide benefits here:
- Isolation
- Scalability
- Easy CI/CD and rollbacks
- Language/runtime independence
A Dockerfile is a script containing the instructions to build a container image.
Example:
```dockerfile
FROM nginx:latest
COPY ./index.html /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
```
- Same pod: localhost (inter-container communication)
- Same node: bridge network
- Across nodes: CNI plugin (Calico, Flannel, etc.)
- Services provide a stable DNS name/IP for reaching pods

Example:
```
kubectl exec -it <pod-name> -- curl <service-name>:<port>
```
Namespaces isolate a container's resources. Each container gets a separate environment with its own network, PIDs, mount points, and so on.
A container registry is a storage location where Docker images are pushed and pulled. Popular registries: Docker Hub, ECR, GCR.
Container orchestration means managing multiple containers: their deployment, scaling, networking, and health management. Kubernetes is a leading tool for this.
Namespaces divide a cluster's resources into logical groups. Each namespace can have its own RBAC, network policies, and so on. This provides security and isolation.
RBAC defines who can do what in a Kubernetes cluster. It works through Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings.
PodSecurityPolicy (PSP) was a cluster-level resource that defined what kind of access a pod could get, such as privileged access, hostNetwork, etc. (Note: PSP is deprecated; alternatives: OPA/Gatekeeper)
A NetworkPolicy defines which pods may communicate with each other. It is defined through ingress and egress rules; see the sketch below.
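A minimal NetworkPolicy sketch that allows ingress to `app=db` pods only from `app=backend` pods; the labels and port are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
spec:
  podSelector:
    matchLabels:
      app: db            # policy applies to these pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend   # only these pods may connect
    ports:
    - protocol: TCP
      port: 5432
```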
Secrets are used to securely store sensitive data (passwords, tokens, keys). They are base64 encoded and can be injected into pods as environment variables or volume mounts; a sketch follows.
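A minimal sketch of creating a Secret and consuming it as an environment variable; the names and values are illustrative:

```bash
# Create a generic secret from literal values
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=s3cr3t
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo user=$DB_USER && sleep 3600"]
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
```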
Kubernetes uses TLS certificates for authentication and encryption. Tools like kubeadm and cert-manager help with managing the certificates.
- Use RBAC
- Apply Network Policies
- Encrypt Secrets
- Follow the principle of least-privilege access
- Scan images
- Use Fluentd, Prometheus, Grafana, or the ELK stack
- Enable Audit Logs
- Use a runtime security tool such as Falco
With RBAC we can control exactly what access a user or service account gets. It is essential for keeping security tightly controlled.
- Role: defines permissions within a specific namespace
- RoleBinding: binds a Role to a user/service account

A sketch follows.
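A minimal Role/RoleBinding sketch granting read-only pod access in one namespace; the namespace and user name are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane            # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```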
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: myclaim
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: myvolume
  volumes:
  - name: myvolume
    hostPath:
      path: /tmp/data
      type: Directory
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /nfs/data
    server: 192.168.1.100
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-vol
      volumes:
      - name: nfs-vol
        persistentVolumeClaim:
          claimName: nfs-pvc
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ramdisk-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /tmp
      name: dshm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
```
Kubernetes recreates a container when its liveness probe fails or when the pod is deleted. The restart policy also determines the behavior: Always, OnFailure, or Never.
- Apply a network policy to restrict egress
- Deny the access in RBAC
- Don't inject unneeded tokens into the service account (`automountServiceAccountToken: false`)
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```