Workloads: Deployment - nicoleShuaihui/k8s GitHub Wiki

Workloads (1): Deployment

Kubernetes calls the objects that define how pods are deployed "workloads". In practice we usually do not create pods directly; instead we create a workload and let Kubernetes create and manage the required pods for us.

The five most commonly used workload types are:

  • Deployment: stateless applications; the service exposed to the outside is the application itself
  • StatefulSet: stateful applications such as storage clusters, where some pods play fixed roles (master, leader) and each pod is bound to a fixed volume
  • DaemonSet: daemon-style agents, e.g. monitoring or log-collection agents that run on every node
  • Job: one-off tasks; once the task finishes, its pods are no longer needed
  • CronJob: scheduled (cron-style) tasks

1. Basic Deployment operations

A Kubernetes Deployment is used to deploy stateless applications. In practice the vast majority of applications we develop are stateless, so Deployment is the most commonly used of the five workload types. During a rolling update, the Deployment controls a series of ReplicaSets (v1, v2, v3, ...), and each ReplicaSet in turn controls its pods. Each ReplicaSet is named after the Deployment plus a suffix derived from the pod-template-hash of that revision's pod template.
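
The naming chain described above (Deployment -> ReplicaSet -> Pod) can be sketched as follows. The hash and random suffix here are made-up placeholders for illustration; the real pod-template-hash is computed by the Deployment controller from the pod template:

```python
# Illustration of how Deployment, ReplicaSet, and Pod names relate.
# The hash value and random suffix are invented; real values come from the controller.
import random
import string

def rs_name(deployment: str, pod_template_hash: str) -> str:
    # ReplicaSet name = <deployment name>-<pod-template-hash>
    return f"{deployment}-{pod_template_hash}"

def pod_name(replicaset: str) -> str:
    # Pod name = <ReplicaSet name>-<random 5-character suffix>
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return f"{replicaset}-{suffix}"

rs = rs_name("super-front", "5788dc997d")
print(rs)            # super-front-5788dc997d
print(pod_name(rs))  # e.g. super-front-5788dc997d-k8prp
```

This matches the pod names seen later in the output of `kubectl get po`, such as `super-front-5788dc997d-k8prp`.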

As an example, let's create a Deployment.

Creating a Deployment

First, create a deploy.yml with the following content:

apiVersion: extensions/v1beta1   # deprecated; on Kubernetes 1.16+ use apps/v1
kind: Deployment
metadata:
  labels:
    app: super-front
    codename: k8s-playground
    project: k8s-playground
  name: super-front
  namespace: <your-namespace>   # replace with your own namespace
spec:
  minReadySeconds: 30     # after liveness and readiness both pass, the pod must stay healthy this long before it counts as available
  replicas: 1    # desired pod count; if HPA is enabled, the HPA's desired replica count takes precedence
  selector:
    matchLabels:
      app: super-front
  strategy:
    rollingUpdate:
      maxSurge: 1    # max pods above the desired count during a rolling update; a percentage (e.g. 25%) is rounded up. maxSurge and maxUnavailable cannot both be 0, or the rollout cannot proceed.
      maxUnavailable: 0   # max pods that may be unavailable during a rolling update; a percentage (e.g. 25%) is rounded down. maxSurge and maxUnavailable cannot both be 0.
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: super-front
        project: k8s-playground
    spec:
      containers:
      - env:   # environment variables as key-value pairs; names must be valid Linux environment variable names
        - name: LANG
          value: en_US.UTF-8
        - name: LANGUAGE
          value: en_US:en
        - name: LC_ALL
          value: en_US.UTF-8
        image: nginx:latest
        imagePullPolicy: Always
        livenessProbe:   # liveness probe: if it fails, the container is restarted
          failureThreshold: 3
          initialDelaySeconds: 168   # delay before the first liveness probe after the container starts
          periodSeconds: 20   # how often the liveness probe runs
          successThreshold: 1
          tcpSocket:
            port: 80
          timeoutSeconds: 1
        name: super-front
        readinessProbe:   # readiness probe: if it fails, the container is considered not ready to receive requests
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP   # HTTP or HTTPS; custom request headers can be added via httpHeaders
          initialDelaySeconds: 11
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 10m    # CPU limit for the container
            memory: 64Mi    # memory limit for the container
          requests:
            cpu: 10m    # CPU reserved for the container
            memory: 64Mi   # memory reserved for the container
      dnsPolicy: Default
      restartPolicy: Always
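
Note that the percentage forms of maxSurge and maxUnavailable round in opposite directions: maxSurge rounds up, maxUnavailable rounds down. A minimal sketch of that arithmetic:

```python
# Rounding rules for percentage values of maxSurge / maxUnavailable.
import math

def surge_count(replicas: int, max_surge: str) -> int:
    # maxSurge given as a percentage is rounded UP
    pct = int(max_surge.rstrip("%"))
    return math.ceil(replicas * pct / 100)

def unavailable_count(replicas: int, max_unavailable: str) -> int:
    # maxUnavailable given as a percentage is rounded DOWN
    pct = int(max_unavailable.rstrip("%"))
    return math.floor(replicas * pct / 100)

print(surge_count(10, "25%"))        # 3  (2.5 rounded up)
print(unavailable_count(10, "25%"))  # 2  (2.5 rounded down)
```

The opposite rounding guarantees that with a non-zero percentage the rollout can always make progress in at least one direction.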

Create the Deployment with kubectl apply:

# kubectl apply -f ./deploy.yml 
deployment.extensions/super-front created
PS: `kubectl apply -f ./deploy.yml --record` additionally records the command that was run in the resource annotation `kubernetes.io/change-cause`.
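
When --record is used, the recorded command shows up in the Deployment's annotations, roughly like this (abbreviated sketch; the exact value is the full command line that was executed):

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl apply --filename=./deploy.yml --record=true
```

This same value is what later appears in the CHANGE-CAUSE column of `kubectl rollout history`.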

Viewing the Deployment

# kubectl get deploy -n pg-allen
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
super-front   1/1     1            1           4m28s

Viewing Deployment revision history and rolling back

kubectl rollout status deployment/tcloud-yunjing-srv -n tce
deployment "tcloud-yunjing-srv" successfully rolled out

kubectl rollout history deployment/tcloud-yunjing-srv -n tce
deployment.apps/tcloud-yunjing-srv 
REVISION  CHANGE-CAUSE   # populated when the command was run with --record
8         <none>
9         <none>
10        <none>

# roll back to revision 9: the previous ReplicaSet (same name) is scaled back up, but its pods are created anew
kubectl rollout undo deployment/tcloud-yunjing-srv -n tce --to-revision=9
# scale the deployment to 4 replicas
kubectl scale deployment tcloud-yunjing-srv -n tce --replicas=4

Viewing ReplicaSets and their relationship to the Deployment and pods

Each revision of a Deployment owns one ReplicaSet; ReplicaSets from older revisions are kept scaled to 0 so the Deployment can roll back to them.

kubectl get rs -n tce
NAME                            DESIRED   CURRENT   READY   AGE
tcloud-yunjing-vul-5c8cf844b7   1         1         1       21d
tcloud-yunjing-vul-7dbb89b6df   0         0         0       122d
tcloud-yunjing-vul-b46994665    0         0         0       122d

View the corresponding pods:

# kubectl get po -n pg-allen -l app=super-front
NAME                           READY   STATUS              RESTARTS   AGE
super-front-5788dc997d-k8prp   0/1     ContainerCreating   0          10s

During a rolling update, once a new pod's liveness and readiness probes pass and it reaches the Running state, the corresponding old-version pod is deleted.
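
The readinessProbe's httpGet check in the manifest above is essentially a periodic HTTP GET that succeeds on a 2xx/3xx response. The kubelet performs this natively; the following is only an illustrative sketch of a single probe attempt (the local port 8080 stands in for the container's port 80):

```python
# Sketch of one httpGet probe attempt: GET <scheme>://<addr>:<port><path>,
# success when the HTTP status code is in the 2xx/3xx range.
import http.server
import threading
import urllib.request

def probe(url: str, timeout: float = 1.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False  # connection refused, timeout, or HTTP error status

# Stand-in for the container's web server.
server = http.server.HTTPServer(("127.0.0.1", 8080), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

print(probe("http://127.0.0.1:8080/"))  # True once the server is up
server.shutdown()
```

In the real probe, failureThreshold consecutive failures mark the container not-ready (readiness) or restart it (liveness).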

Editing the Deployment

# kubectl edit deploy super-front -n pg-allen
# pressing Enter opens vi, where the Deployment configuration can be edited

After saving and exiting vi, kubectl validates the configuration; if it is valid, the Deployment is updated immediately.

If the Deployment's spec.strategy is set to RollingUpdate, the pods are automatically rolled to the new version after the Deployment is updated.

Try changing the image to nginx:1.16.1-alpine.

Check the Deployment's image after the change:

# kubectl get deploy super-front -n pg-allen  -oyaml |grep image:
        image: nginx:1.16.1-alpine

Check the Deployment's current revision:

# kubectl get deploy super-front -n pg-allen  -oyaml  |grep revision
    deployment.kubernetes.io/revision: "2"
  revisionHistoryLimit: 10
  ... ...

As shown, after the image change the revision is now 2.
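
The revisionHistoryLimit: 10 field in the same output bounds how many old revisions (i.e. old ReplicaSets) are retained for rollback; older ones are garbage-collected. A sketch of the pruning rule, with invented names:

```python
# Toy version of revision-history pruning: keep only the newest `limit` revisions.
def prune_history(revisions: list, limit: int = 10) -> list:
    if limit <= 0:
        return []           # limit 0: keep no old revisions
    return revisions[-limit:]

print(prune_history(list(range(1, 13)), limit=10))  # revisions 3..12 survive
```

Once a revision is pruned, `kubectl rollout undo --to-revision=N` can no longer reach it.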

Viewing Deployment history

# kubectl rollout history deployment/super-front -n pg-allen
deployment.extensions/super-front 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

Rolling back the Deployment

The kubectl rollout undo command rolls the Deployment back to a given revision:

# kubectl rollout undo deployment/super-front -n pg-allen --to-revision=1
deployment.extensions/super-front rolled back

Check the image after the rollback:

# kubectl get deploy super-front -n pg-allen  -oyaml |grep image:
        image: nginx:latest

Viewing events

# kubectl describe deploy super-front -n pg-allen 
Name:                   super-front
Namespace:              pg-allen
CreationTimestamp:      Wed, 08 Apr 2020 10:53:40 +0800
Labels:                 app=super-front
                        codename=k8s-playground
                        project=k8s-playground
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=super-front
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        30
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:           app=super-front
                    codename=k8s-playground
                    project=k8s-playground
  Annotations:      cluster-autoscaler.kubernetes.io/safe-to-evict: true
  Service Account:  default
  Containers:
   super-front:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     10m
      memory:  64Mi
    Requests:
      cpu:      10m
      memory:   64Mi
    Liveness:   tcp-socket :80 delay=168s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:80/ delay=11s timeout=1s period=20s #success=1 #failure=3
    Environment:
      LANG:      en_US.UTF-8
      LANGUAGE:  en_US:en
      LC_ALL:    en_US.UTF-8
    Mounts:      <none>
  Volumes:       <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   super-front-5b9db6d78c (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m17s  deployment-controller  Scaled up replica set super-front-75cb9b56fc to 1
  Normal  ScalingReplicaSet  4m28s  deployment-controller  Scaled up replica set super-front-5b9db6d78c to 1
  Normal  ScalingReplicaSet  3m38s  deployment-controller  Scaled down replica set super-front-75cb9b56fc to 0
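
With maxUnavailable=0 and maxSurge=1, the events above follow a fixed pattern: scale the new ReplicaSet up by one, wait for the new pod to become ready, then scale the old ReplicaSet down by one. A toy simulation of that controller loop (the function and its bookkeeping are invented for illustration):

```python
# Toy simulation of a rolling update with maxUnavailable=0.
# Yields (old_replicas, new_replicas) after each controller step.
def rolling_update(old: int, new_target: int, max_surge: int = 1):
    new = 0
    while new < new_target or old > 0:
        if new < new_target and old + new < new_target + max_surge:
            new += 1   # scale the new ReplicaSet up (within the surge budget)
        else:
            old -= 1   # a new pod is ready; scale the old ReplicaSet down
        yield (old, new)

print(list(rolling_update(1, 1)))  # [(1, 1), (0, 1)]
```

For replicas=1 this reproduces the event sequence above: scale new RS up to 1, then scale old RS down to 0, so at least one pod is serving at all times.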

Exporting the Deployment configuration

Adding --export to kubectl get deploy exports the workload's configuration (note: --export was deprecated and has been removed in kubectl 1.18; on newer versions, export with -oyaml and strip the status and runtime metadata fields by hand):

# kubectl get deploy super-front -n pg-allen -oyaml --export  > deploy.yml

Deleting the Deployment

# kubectl delete deploy super-front -n pg-allen 
deployment.extensions "super-front" deleted

An alternative way to delete it:

# kubectl delete -f ./deploy.yml
deployment.extensions "super-front" deleted

PS: To remove or stop an application, deleting its pods is not enough; you must delete the workload that created them, or the controller will simply recreate the pods.
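
The reason is the controller's reconcile loop: the workload continuously compares the observed pod count with the desired replica count and creates whatever is missing. A toy version (names invented for illustration):

```python
# Toy reconcile loop: create pods until observed count matches desired count.
import itertools

_names = itertools.count()

def reconcile(pods: set, desired: int) -> set:
    pods = set(pods)
    while len(pods) < desired:
        pods.add(f"super-front-{next(_names)}")  # controller creates a replacement pod
    return pods

pods = reconcile(set(), desired=1)
pods.pop()                        # simulate "kubectl delete pod ..."
pods = reconcile(pods, desired=1)
print(len(pods))                  # 1: the controller has recreated the pod
```

Deleting the Deployment removes the desired state itself, so nothing recreates the pods afterwards.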

Further reading:
