1. helm command
1) helm create hischart
# generated layout: Chart.yaml, charts/, templates/, values.yaml
2) helm list
3) helm install anyname ./hischart/ # deploy app
4) helm uninstall anyname
5) helm repo add bitnami https://charts.bitnami.com/bitnami #work with repository
6) helm repo update / helm repo list
7) helm install anyname ./hischart/ --set key=value
8) vim values.yaml
9) vim env-values.yaml
example:
env:
  - name: 'myid'
    value: 'hongqi'
10) vim templates/deployment.yaml
          env:
          {{- range .Values.env }}
            - name: {{ .name }}
              value: {{ .value }}
          {{- end }}
helm install anyname ./hischart/ -f env-values.yaml # apply the extra values file
11) helm template anyname ./hischart/
---
# Source: hischart/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: anyname-hischart
labels:
helm.sh/chart: hischart-0.1.0
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
---
# rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: default
subjects:
- kind: ServiceAccount
name: ml-service-account
namespace: default
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
---
# Source: hischart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: anyname-hischart
labels:
helm.sh/chart: hischart-0.1.0
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
---
# Source: hischart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: anyname-hischart
labels:
helm.sh/chart: hischart-0.1.0
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
template:
metadata:
labels:
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
spec:
serviceAccountName: anyname-hischart
securityContext:
{}
containers:
- name: hischart
securityContext:
{}
image: "nginx:1.16.0"
env:
- name: DB_HOST
value: "mysql-0.mysql" # Connects to primary MySQL pod
- name: DB_USER
valueFrom:
secretKeyRef:
name: mysql-secrets
key: username
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
---
# Source: hischart/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
name: "anyname-hischart-test-connection"
labels:
helm.sh/chart: hischart-0.1.0
app.kubernetes.io/name: hischart
app.kubernetes.io/instance: anyname
app.kubernetes.io/version: "1.16.0"
app.kubernetes.io/managed-by: Helm
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['anyname-hischart:80']
restartPolicy: Never
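The test pod above is gated by the helm.sh/hook: test annotation; it only runs when invoked with the test subcommand after an install:
helm install anyname ./hischart/
helm test anyname    # runs anyname-hischart-test-connection and reports pass/fail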
2. Generate data models
Use the rendered manifests from: helm template anyname ./hischart/
grafana analysis
git clone https://github.com/grafana/helm-charts.git
cd helm-charts
0) helm template myfluentbit charts/fluent-bit > myfluentbit.yaml
1) helm template myloki charts/loki > myloki.yaml
2) helm template mygrafana charts/grafana > mygrafana.yaml
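To install (not just render) these charts, you can pull them from the published Grafana chart repo; a minimal sketch, reusing the release names above:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install myloki grafana/loki
helm install mygrafana grafana/grafana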
3. argocd command
1) argocd app create my-app \
--repo https://github.com/your-username/your-repo.git \
--path my-app \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
2) argocd app sync my-app
or declaratively, via an Application manifest:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/your-username/your-repo.git
targetRevision: HEAD
path: my-app
destination:
server: https://kubernetes.default.svc
namespace: default
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
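To use the manifest form, save it to a file and apply it with kubectl (the filename is arbitrary), then check status:
kubectl apply -f my-app.yaml
argocd app get my-app     # sync and health status
argocd app sync my-app    # manual sync, if not automated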
4. terraform command
1) 1. terraform init
2. terraform validate
3. terraform plan                 # plan has no -auto-approve flag
4. terraform apply -auto-approve
2) git clone https://github.com/terraform-providers/terraform-provider-aws.git
cd terraform-provider-aws/examples/eks-getting-started
terraform plan
terraform apply
3) use environment variables
export TF_VAR_region="us-west-1"
export TF_VAR_instance_count=3
variable "region" {
description = "AWS region"
type = string
}
provisioner "local-exec" {   # local-exec supports an environment map; remote-exec does not
  command = "echo $MY_ENV_VAR"
  environment = {
    MY_ENV_VAR = "some_value"
  }
}
locals {
  # Terraform has no getenv(); pass the OS user in with: export TF_VAR_user="$USER"
  current_user = var.user != "" ? var.user : "unknown"
}
terraform apply -var="environment=prod" -var="nodes=5"
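For the -var / TF_VAR_ overrides above to work, each variable must be declared; a minimal sketch matching the names used above:
variable "instance_count" {
  description = "Number of instances (set via TF_VAR_instance_count or -var)"
  type        = number
  default     = 1
}
variable "environment" {
  description = "Deployment environment"
  type        = string
  default     = "dev"
}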
4) terraform deploy to VirtualBox
terraform {
required_providers {
virtualbox = {
source = "terra-farm/virtualbox"
version = "0.2.2-alpha.1"
}
}
}
variable "short_name" {default = "init_myproj"}
module "ssh-key" {
source ="./ssh"
short_name = "${var.short_name}"
}
resource "virtualbox_vm" "node" {
count = 1
name = format("node-%02d", count.index + 1)
image = "./trusty-server-cloudimg-amd64-vagrant-disk1.box"
cpus = 2
memory = "512 mib"
user_data = <<-EOT
#!/bin/bash
echo "Hello, World" > /var/tmp/hello.txt
yum update -y
EOT
network_adapter {
type = "bridged" # First adapter: Bridged networking
host_interface = "en0: Wi-Fi (AirPort)" # Specify the host adapter
}
output "IPAddr" {
value = element(virtualbox_vm.node.*.network_adapter.0.ipv4_address, 1)
}
output "IPAddr_2" {
value = element(virtualbox_vm.node.*.network_adapter.0.ipv4_address, 2)
}
5) modules sharing variables
module "module_one" {
source = "./module_one"
my_variable = "Some Value"
}
module "module_two" {
source = "./module_two"
input_variable = module.module_one.my_variable_output
}
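module.module_one.my_variable_output only resolves if module_one exports it; a minimal sketch:
# module_one/outputs.tf
output "my_variable_output" {
  value = var.my_variable
}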
6) terraform dry run: terraform plan previews changes without applying them
5. ansible command
1) Idempotency
Prefer Ansible modules (yum, apt, file, copy, etc.) over shell/command.
Use the state parameter to specify the target state explicitly: present/absent/latest.
For shell/command, guard re-runs with creates or removes:
creates: /tmp/lockfile
Test mode: simulate a run with --check:
ansible-playbook playbook.yml --check --diff
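A sketch of the creates guard inside a task (script and lockfile paths are placeholders):
- name: Run one-time initialization
  command: /opt/app/init.sh
  args:
    creates: /tmp/lockfile   # the task is skipped when this file already exists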
2) roles and handlers
# site.yml
- hosts: webservers
  roles:
    - my_role          # reference the role by name
## roles/nginx/tasks/main.yml
- name: Install Nginx
  apt:
    name: nginx
    state: present
- name: Copy Nginx config
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: restart nginx   # triggers the handler (see below)
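The notify only works if a matching handler exists; a minimal sketch:
# roles/nginx/handlers/main.yml
- name: restart nginx
  service:
    name: nginx
    state: restarted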
3) inventory: ansible.cfg, hosts
ansible-playbook -i inventory/staging/ playbooks/deploy.yml
# ansible.cfg,
[defaults]
inventory = ./hosts
host_key_checking = false
log_path=./ansible.log
# hosts
# inventory/production/hosts
[web]
web1.example.com ansible_user=ubuntu
web2.example.com
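A quick sanity check of the inventory (assumes SSH access to the hosts is already set up):
ansible -i inventory/production/hosts web -m ping
ansible-inventory -i inventory/production/hosts --list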
6. Jenkins command
1) declarative vs scripted pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
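The scripted equivalent of the same build, for comparison:
node {
    stage('Build') {
        sh 'make'
    }
}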
2) typical pipeline stages
0. Code clone
1. static test
2. CI: Build
3. CI: Build Docker Image
4. Push to Container Registry
5. CD: Deploy to Kubernetes
6. Smoke Tests
7. Notify Team
3) configuration -- General
* Discard old builds
* Do not allow concurrent builds
* Abort previous builds
* Do not allow the pipeline to resume if the controller restarts
* GitHub project
* Permission to Copy Artifact
* Pipeline speed/durability override
* Preserve stashes from completed builds
* This project is parameterized
4) configuration -- Build Triggers
* Build after other projects are built
* Build periodically
* Build when a change is pushed to BitBucket
* GitHub hook trigger for GITScm polling
* Poll SCM
* Quiet period
* Trigger builds remotely (e.g., from scripts)
7. docker command
1. docker build -f Dockerfile -t $APP .
2. docker push username/repo:tag
3. docker run --name $app -p $port:$port -d \
-e PORT=$port -v `pwd`:/tmp \
$image # override PORT
4. docker logs container_name
docker stats # Live resource usage
docker top container_name
docker inspect container_name
docker network create my_network
1) ENTRYPOINT ["echo", "Hello"]   # run-time args are appended
docker run myimage world          # prints: Hello world
2) CMD ["echo", "Hello"]          # run-time args override CMD entirely
docker run myimage echo ls        # runs `echo ls`, printing: ls
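The two can be combined: ENTRYPOINT fixes the command, CMD supplies overridable default arguments. A minimal sketch:
FROM busybox
ENTRYPOINT ["echo", "Hello"]
CMD ["world"]                  # default arg, replaced by any args passed to docker run
# docker run myimage           -> Hello world
# docker run myimage devops    -> Hello devops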
8. AWS command
1) secrets manager
aws secretsmanager list-secrets
aws secretsmanager create-secret --name my-secret \
    --description "for password" \
    --secret-string "MySecretStringXYZ" \
    --kms-key-id alias/prod-secrets-key
aws secretsmanager get-secret-value --secret-id my-secret
aws secretsmanager describe-secret --secret-id my-secret
aws secretsmanager update-secret --secret-id my-secret --secret-string nnnnnnnn
aws secretsmanager delete-secret --secret-id my-secret
If --kms-key-id is omitted, Secrets Manager uses the default KMS key aws/secretsmanager for your account and region.
2) create network
1. aws ec2 create-vpc \
--cidr-block 10.0.0.0/16 \
--tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=MyVPC}]'
# main.tf
provider "aws" {
region = "us-east-1"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = { Name = "Main-VPC" }
}
2. aws ec2 create-subnet \
--vpc-id vpc-12345678 \
--cidr-block 10.0.1.0/24 \
--availability-zone us-east-1a \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=PublicSubnet}]'
resource "aws_subnet" "public" { # web for public access
count = 2
vpc_id = aws_vpc.main.id
cidr_block = "10.0.${count.index + 1}.0/24"
availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
map_public_ip_on_launch = true
tags = { Name = "Public-Subnet-${count.index + 1}" }
}
resource "aws_subnet" "private" { # app or db for private
count = 2
vpc_id = aws_vpc.main.id
cidr_block = "10.0.${count.index + 3}.0/24"
availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
tags = { Name = "Private-Subnet-${count.index + 1}" }
}
# Create the Internet Gateway
3. aws ec2 create-internet-gateway \
--tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=MyIGW}]'
# Attach to VPC
4. aws ec2 attach-internet-gateway \
--internet-gateway-id igw-12345678 \
--vpc-id vpc-12345678
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
tags = {Name = "Main-IGW"}
}
resource "aws_eip" "nat" {
domain = "vpc"
}
resource "aws_nat_gateway" "nat" {
allocation_id = aws_eip.nat.id
subnet_id = aws_subnet.public[0].id
tags = { Name = "Main-NAT" }
}
# Create route table
5. aws ec2 create-route-table \
--vpc-id vpc-12345678 \
--tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=PublicRouteTable}]'
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
tags = { Name = "Public-Route-Table" }
}
resource "aws_route_table" "private" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat.id
}
tags = { Name = "Private-Route-Table"}
}
# Add route to internet gateway
6. aws ec2 create-route \
--route-table-id rtb-12345678 \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-12345678
resource "aws_route" "internet_access" {
route_table_id = aws_route_table.public.id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
resource "aws_route" "private_outbound" {
route_table_id = aws_route_table.private.id
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main.id
}
# Associate route table with subnet
7. aws ec2 associate-route-table \
--route-table-id rtb-12345678 \
--subnet-id subnet-12345678
resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "private" {
count = length(aws_subnet.private)
subnet_id = aws_subnet.private[count.index].id
route_table_id = aws_route_table.private.id
}
8. aws ec2 create-security-group \
--group-name MySecurityGroup \
--description "My security group" \
--vpc-id vpc-12345678
resource "aws_security_group" "web" {
name = "web-sg"
description = "Allow HTTP/HTTPS traffic"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "Web-Security-Group"
}
}
9. aws ec2 create-vpc-peering-connection \
--vpc-id vpc-12345678 \
--peer-vpc-id vpc-87654321 \
--peer-region us-west-2 \
--tag-specifications 'ResourceType=vpc-peering-connection,Tags=[{Key=Name,Value=MyPeeringConnection}]'
resource "aws_vpc_peering_connection" "peer" {
peer_vpc_id = aws_vpc.other_vpc.id
vpc_id = aws_vpc.main.id
auto_accept = true
tags = {
Name = "VPC-Peering-Main-To-Other"
}}
10. aws ec2 create-vpc-endpoint \
--vpc-id vpc-12345678 \
--service-name com.amazonaws.us-east-1.s3 \
--route-table-ids rtb-12345678 \
--tag-specifications 'ResourceType=vpc-endpoint,Tags=[{Key=Name,Value=MyS3Endpoint}]'
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.main.id
service_name = "com.amazonaws.us-east-1.s3"
route_table_ids = [aws_route_table.private.id]
tags = {
Name = "S3-Endpoint"
}}
[ INTERNET ]
     |
[ Internet Gateway (igw-xxxx) ]
     |
[ VPC (10.0.0.0/16) ]
     ├──────────────────────────────┐
     |                              |
[ Public Subnet A ]           [ Public Subnet B ]
| (10.0.1.0/24)               | (10.0.2.0/24)
| us-east-1a                  | us-east-1b
     |                              |
[ NAT Gateway ]               [ NAT Gateway ]
     |                              |
[ Route Table ]               [ Route Table ]
| 0.0.0.0/0 → igw             | 0.0.0.0/0 → igw
     |                              |
[ Private Subnet A ]          [ Private Subnet B ]
| (10.0.3.0/24)               | (10.0.4.0/24)
| us-east-1a                  | us-east-1b
     |                              |
[ Route Table ]               [ Route Table ]
| 0.0.0.0/0 → NAT             | 0.0.0.0/0 → NAT
     |
[ Security Groups ] - applied to EC2 instances (attached within the VPC)
[ Network ACLs ]    - applied at the subnet level
3) load balancer
i. application(ALB)
aws elbv2 create-load-balancer \
--name my-alb \
--subnets subnet-12345678 subnet-87654321 \
--security-groups sg-12345678 \
--type application \
--scheme internet-facing \
--tags Key=Name,Value=MyALB
ii. network (NLB)
aws elbv2 create-load-balancer \
--name my-nlb \
--subnets subnet-12345678 subnet-87654321 \
--type network \
--scheme internal \
--tags Key=Name,Value=MyNLB
iii. legacy/classic (CLB)
aws elb create-load-balancer \
--load-balancer-name my-classic-lb \
--listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
--subnets subnet-12345678 subnet-87654321 \
--security-groups sg-12345678 \
--tags Key=Name,Value=MyClassicLB
iv. target group, listener, and target registration:
aws elbv2 create-target-group \
--name my-targets \
--protocol HTTP \
--port 80 \
--vpc-id vpc-12345678 \
--health-check-protocol HTTP \
--health-check-path /health \
--health-check-interval-seconds 30 \
--health-check-timeout-seconds 5 \
--healthy-threshold-count 2 \
--unhealthy-threshold-count 2 \
--target-type instance
aws elbv2 create-listener \
--load-balancer-arn arn:aws:elasticloadbalancing:region:account-id:loadbalancer/app/my-alb/1234567890abcdef \
--protocol HTTP \
--port 80 \
--default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account-id:targetgroup/my-targets/1234567890abcdef
aws elbv2 register-targets \
--target-group-arn arn:aws:elasticloadbalancing:region:account-id:targetgroup/my-targets/1234567890abcdef \
--targets Id=i-1234567890abcdef0 Id=i-0abcdef1234567890
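To confirm the registered targets pass their health checks (the ARN is the placeholder from above):
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:region:account-id:targetgroup/my-targets/1234567890abcdef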
4) IAM creation
i.create user
aws iam create-user --user-name MyUser
resource "aws_iam_user" "example_user" {
name = "example-user"
path = "/"
tags = { Description = "Example IAM user" }
}
ii.create group
aws iam create-group --group-name MyGroup
resource "aws_iam_group" "developers" {
name = "developers"
path = "/"
}
iii. attach to group
aws iam add-user-to-group --user-name MyUser --group-name MyGroup
resource "aws_iam_user_group_membership" "example" {
user = aws_iam_user.example_user.name
groups = [aws_iam_group.developers.name]
}
iv. create role
aws iam create-role --role-name MyRole --assume-role-policy-document file://trust-policy.json
#trust-policy.json
{ "Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}]
}
resource "aws_iam_role" "ec2_s3_access_role" {
name = "EC2S3AccessRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}
]
})
tags = { Environment = "Production" }
}
v. create policy
aws iam create-policy --policy-name MyPolicy --policy-document file://policy.json
#policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::example-bucket"]
}
]
}
resource "aws_iam_policy" "s3_read_only" {
name = "S3ReadOnlyAccess"
description = "Provides read-only access to S3"
path = "/"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:Get*",
"s3:List*"
]
Effect = "Allow"
Resource = "*"
}
]
})
}
vi. attach the policy
aws iam attach-user-policy --user-name MyUser --policy-arn arn:aws:iam::123456789012:policy/MyPolicy
aws iam attach-group-policy --group-name MyGroup --policy-arn arn:aws:iam::123456789012:policy/MyPolicy
aws iam attach-role-policy --role-name MyRole --policy-arn arn:aws:iam::123456789012:policy/MyPolicy
# Attach to user
resource "aws_iam_user_policy_attachment" "user_s3_ro" {
user = aws_iam_user.example_user.name
policy_arn = aws_iam_policy.s3_read_only.arn
}
# Attach to group
resource "aws_iam_group_policy_attachment" "group_s3_ro" {
group = aws_iam_group.developers.name
policy_arn = aws_iam_policy.s3_read_only.arn
}
resource "aws_iam_role_policy_attachment" "s3_full_access" {
role = aws_iam_role.ec2_s3_access_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
vii. create user Access Keys
aws iam create-access-key --user-name MyUser
viii. create instance profile
aws iam create-instance-profile --instance-profile-name MyInstanceProfile
aws iam add-role-to-instance-profile --instance-profile-name MyInstanceProfile --role-name MyRole
resource "aws_iam_instance_profile" "ec2_s3_profile" {
name = "EC2S3InstanceProfile"
role = aws_iam_role.ec2_s3_access_role.name
}
5) create instance
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --subnet-id subnet-12345678 \
    --security-group-ids sg-12345678 \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=MyInstance}]'
# replace the AMI, key pair, subnet, and security-group IDs with your own
# storage:
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":30,"VolumeType":"gp3"}}]'
# network:
  --associate-public-ip-address          # for public-subnet instances
  --private-ip-address 10.0.0.10         # specify a private IP
  --ipv6-address-count 1                 # assign an IPv6 address
# role:
  --iam-instance-profile Name=MyInstanceProfile
# user data:
  --user-data file://bootstrap.sh        # your startup script
resource "aws_instance" "app_server" {
ami = "ami-830c94e3"
instance_type = "t2.micro"
tags = {
Name = "ExampleAppServerInstance"
}
}
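A sketch that wires an instance into the VPC, security group, and instance profile created earlier in this section (the AMI and bootstrap.sh are placeholders):
resource "aws_instance" "web" {
  ami                    = "ami-0abcdef1234567890"   # placeholder AMI
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public[0].id
  vpc_security_group_ids = [aws_security_group.web.id]
  iam_instance_profile   = aws_iam_instance_profile.ec2_s3_profile.name
  user_data              = file("bootstrap.sh")      # hypothetical startup script
  tags                   = { Name = "Web-Instance" }
}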
1) Proficiency in Jenkins and Terraform.
2) Strong understanding of AWS primitives (e.g., load balancers, VPCs, IAM, Secrets Manager).
3) Experience with observability tools (preferably Datadog or Grafana; familiarity with others such as Splunk is acceptable).
4) Intermediate Python development skills.
5) Experience with networking protocols and concepts (e.g., TCP/IP, DNS, HTTP/S, VPN, firewalls, routing, NAT).
6) Hands-on programming experience with L2/L3 networking and the ability to debug networking issues.
"I’m a DevOps Engineer with 10 years of experience in designing, automating,
and optimizing cloud-native infrastructure and CI/CD pipelines to improve development velocity and system reliability.
My expertise includes
Infrastructure as Code (IaC) using Terraform on AWS to create EC2,VPC,IAM, loadbalancer and secret manager
Containerization with Docker and orchestration via Kubernetes/EKS.
Built end-to-end pipelines with Jenkins, GitLab CI, GitHub Actions, or ArgoCD.
Automated deployments using Ansible, Helm, or Bash/Python scripts.
Monitoring & Observability:
Implemented Prometheus/Loki/Grafana, or Datadog for logging and metrics.
Used PagerDuty/Slack alerts for incident response.
Collaboration & DevOps Culture:
Worked closely with developers to adopt microservices, GitOps, and shift-left testing.
Advocate for SRE practices (SLIs/SLOs, error budgets).
Currently, at TDsecurities, I’ve led initiatives like trading system, which reduced deployment times by 50% and improved system uptime to 99.9%.
I’m excited about this opportunity at Cisco because [mention something specific about their tech stack or challenges],
and I’d love to contribute my skills in [relevant area]."
=============================
"I’m a DevOps Engineer with ten years of experience in automating infrastructure, CI/CD pipelines, and cloud-based deployments.
I have a solid foundation in Linux and networking, along with extensive experience using tools like Jenkins, Ansible, Docker, and Kubernetes.
Recently, I’ve been using Jenkins as my primary platform, focusing on AWS cloud and implementing Infrastructure as Code (IaC) with Terraform
for EC2, VPC, IAM, and more. I’m passionate about improving development workflows, ensuring reliable deployments, and building scalable systems.
Additionally, I have hands-on experience with networking protocols and concepts, including L2/L3 networking,
as well as programming and debugging in this domain.
In my last role, I designed a user-friendly GUI with smart configurations in Jenkins to manage a trading system comprising 50 modules.
This reduced deployment times by 70%. I also implemented automated monitoring solutions using Prometheus, Loki, and Grafana.
Now, I’m seeking an opportunity where I can further optimize DevOps processes and contribute to a strong engineering culture.
"
how to implement Docker in a Jenkins pipeline:
1. download the code repository
2. log in to the Docker registry
3. build the Docker image
4. push the Docker image
1. download code repository --> checkout scm
checkout([$class: 'GitSCM',
    branches: [[name: '*/' + repoBranch]],
    extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: tmp]],
    userRemoteConfigs: [[credentialsId: githubtokenid, url: urlBaseGithub + repoName]]
])
2. IaC: terraform init/plan/apply
3. static test
SonarQube scan
4. CI: docker login with ID/password
5. CI: docker build
6. CI: docker push to the container registry
7. CD: deploy to Kubernetes
kubectl apply -f *.yaml
8. smoke-test verification
9. send email to notify the team
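Putting the Docker steps together, a minimal Jenkinsfile sketch; 'dockerhub-cred', the image name, and the k8s/ manifest directory are placeholder values:
pipeline {
    agent any
    environment {
        IMAGE = "username/myapp:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Build Image') {
            steps { sh 'docker build -t $IMAGE .' }
        }
        stage('Push Image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub-cred',
                                                  usernameVariable: 'DOCKER_USER',
                                                  passwordVariable: 'DOCKER_PASS')]) {
                    sh 'echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin'
                    sh 'docker push $IMAGE'
                }
            }
        }
        stage('Deploy') {
            steps { sh 'kubectl apply -f k8s/' }   // placeholder manifest directory
        }
    }
}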