# Configuration of a CI/CD cycle
In this guide, you will learn how to deploy Jenkins and ArgoCD to establish a CI/CD cycle in a multi-node Kubernetes environment.
Below, you can see the general architecture.
As shown in the previous diagram, we will deploy the tools in the first cluster, setting up and ensuring communication with the other cluster. This setup allows us to use Jenkins to build and push images, while ArgoCD handles the deployment of applications across both clusters.
NOTE: This guide assumes that the user already has the kubectl tool installed on their local machine. If you have any doubts about how to install it, go here.
First, let's create a namespace where we will contain all resources related to Jenkins in our cluster.
```bash
kubectl create ns jenkins
```
We will create a ServiceAccount, a ClusterRole, and a ClusterRoleBinding with the necessary permissions, in a file we will call `jenkins-sa.yaml`.
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
  - kind: ServiceAccount
    name: jenkins-admin
    namespace: jenkins
```
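After you apply this manifest (we do so at the end of this section), you can sanity-check that the binding grants the expected permissions by impersonating the ServiceAccount with `kubectl auth can-i`:

```bash
# Should print "yes" if the ClusterRoleBinding took effect
kubectl auth can-i create pods --as=system:serviceaccount:jenkins:jenkins-admin
```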
Now, let's create a PersistentVolumeClaim (PVC) to ensure data persistence for Jenkins in case of restarts. We will call this file `jenkins-pvc.yaml`.
For a local environment:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: jenkins
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: jenkins
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```
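The PersistentVolume's `nodeAffinity` pins it to a node whose hostname label is `minikube`; if your node is named differently, adjust the `values` entry. You can check the actual hostname label like this:

```bash
# Show each node's kubernetes.io/hostname label; it must match the PV's nodeAffinity value
kubectl get nodes -L kubernetes.io/hostname
```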
NOTE: For cloud providers, in many cases it is sufficient to create the PVC, and a volume will be provisioned automatically. More information can be found here: AWS, GCP, Azure, OCI.
For a cloud environment:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: <storageClassName-cloud-provider>
  volumeMode: Filesystem
```
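To fill in `<storageClassName-cloud-provider>`, list the storage classes your cluster offers; the one marked `(default)` is usually a safe choice:

```bash
# List available storage classes (e.g. gp2 on EKS, standard on GKE)
kubectl get storageclass
```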
We will create a deployment manifest, which we will call `jenkins-deployment.yaml`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-server
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      restartPolicy: Always
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          securityContext:
            runAsUser: 0
            privileged: true
          resources:
            limits:
              memory: "2Gi"
              cpu: "750m"
            requests:
              memory: "1Gi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
```
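After applying this manifest (done below), you can watch the rollout to confirm the pod passes its probes:

```bash
# Wait until the jenkins-server deployment reports all replicas available
kubectl -n jenkins rollout status deployment/jenkins-server
```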
We will create its service, which we will call `jenkins-svc.yaml`.
For a local environment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: jenkins
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000
```
For a cloud environment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
  namespace: jenkins
spec:
  ports:
    - name: jenkins-server
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: jenkins-server
  sessionAffinity: None
  type: ClusterIP
```
If you are in a cloud environment, you can assign an Ingress in conjunction with your domain. We will create the Ingress, which we will call `jenkins-ing.yaml`.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
spec:
  ingressClassName: nginx
  rules:
    - host: <address-domain>
      http:
        paths:
          - backend:
              service:
                name: jenkins-server
                port:
                  number: 8080
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - <address-domain>
      secretName: <tls-cert>
```
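The `<tls-cert>` secret must exist in the `jenkins` namespace before the Ingress can serve TLS. If you are not issuing it with cert-manager, a minimal sketch for creating it from an existing certificate and key (hypothetical file names):

```bash
# Create the TLS secret referenced by the Ingress from an existing cert/key pair
kubectl -n jenkins create secret tls <tls-cert> --cert=tls.crt --key=tls.key
```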
Finally, apply all the manifests in the cluster.
```bash
kubectl -n jenkins apply -f jenkins-sa.yaml
kubectl -n jenkins apply -f jenkins-pvc.yaml
kubectl -n jenkins apply -f jenkins-deployment.yaml
kubectl -n jenkins apply -f jenkins-svc.yaml
kubectl -n jenkins apply -f jenkins-ing.yaml
```
Once the above is done, we can verify that the Jenkins server is running in the cluster.
```bash
kubectl -n jenkins get pods
# NAME                              READY   STATUS    RESTARTS   AGE
# jenkins-server-578ddfdf9c-v6r4g   1/1     Running   0          145m
```
Now, let's access Jenkins in the browser. You can either go to the domain address or use port forwarding.
```bash
kubectl -n jenkins port-forward svc/jenkins-service 8080:8080
# Forwarding from 127.0.0.1:8080 -> 8080
# Forwarding from [::1]:8080 -> 8080
# Handling connection for 8080
```
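With the port-forward running, you can confirm that Jenkins answers before opening the browser:

```bash
# A 200 response means the Jenkins login page is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/login
```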
In this guide, we will use port forwarding. If you have assigned an Ingress with a URL address, simply go to it, and it will display the following interface.
To obtain the initial password, we do the following:
```bash
kubectl -n jenkins exec -it jenkins-server-578ddfdf9c-v6r4g -- cat /var/jenkins_home/secrets/initialAdminPassword
# 8798a60557744a6dbe2b49f3cbae1d75
```
The password we need is `8798a60557744a6dbe2b49f3cbae1d75`. Click on `Suggested Plugins` and create an admin user with a new password.
Once you are in the Jenkins dashboard, navigate to `Manage Jenkins > Manage Nodes > Main > Configure`. In the `Number of Executors` option, set the value to `0`. This means that no jobs will execute on the Jenkins controller itself; instead, they will run in ephemeral pods.
Return to `Manage Jenkins`. In the `Manage Plugins` option, under the `Available Plugins` section, search for Kubernetes and install it.
Now, we will set up the Kubernetes clusters that Jenkins will have access to in a multi-node Kubernetes environment. We will have two clusters: the first will contain the DevOps tools (Jenkins and ArgoCD) and will be treated as the staging environment, while the second cluster will serve as the production environment. We will manage everything from the first cluster, simulating a typical scenario.
First, let's set up the staging environment.
Go back to `Manage Jenkins`. In the Clouds section, click on `New Cloud`, select Kubernetes, and provide a name.
Fill in the required fields. Jenkins will detect the settings automatically; we just need to specify the `namespace` and then verify by clicking on `Test Connection`. You should see a message similar to this.
Now, for the second cluster, go to that cluster and obtain its API server address.
```bash
kubectl config view -o=jsonpath='{.clusters[0].cluster.server}'
# https://<address-production-cluster>
```
Then we grant permissions by applying the following manifest on the second cluster (remember to create the `jenkins` namespace there first).
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
  - kind: ServiceAccount
    name: jenkins-admin
    namespace: jenkins
```
```bash
kubectl apply -f <permission-file.yaml>
```
In addition, we need a token for authenticating against this cluster from the Jenkins server running in the first cluster. We must create the following secret.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: token-secret
  namespace: jenkins
  annotations:
    kubernetes.io/service-account.name: jenkins-admin
type: kubernetes.io/service-account-token
```
If we inspect the generated secret, it will show us something like this.
```bash
kubectl -n jenkins get secrets token-secret -o yaml
```

```yaml
apiVersion: v1
data:
  ca.crt: <ca.crt-content-base64>
  namespace: amVua2lucw==
  token: <token-content-base64>
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"jenkins-admin"},"name":"token-secret","namespace":"jenkins"},"type":"kubernetes.io/service-account-token"}
    kubernetes.io/service-account.name: jenkins-admin
    kubernetes.io/service-account.uid: ec583b5e-e894-43f5-9100-85c6a1f27893
  creationTimestamp: "2024-01-03T13:51:18Z"
  labels:
    kubernetes.io/legacy-token-last-used: "2024-01-04"
  name: token-secret
  namespace: jenkins
  resourceVersion: "28946857"
  uid: 4d0fc065-e1d4-4720-a372-6542e1c65bbd
type: kubernetes.io/service-account-token
```
From there, we can obtain the `token` and server certificate key (`ca.crt`); we just need to decode them as follows.
```bash
echo "<token-content-base64>" | base64 --decode
echo "<ca.crt-content-base64>" | base64 --decode
```
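Alternatively, you can extract and decode both values in one step with `jsonpath`; a minimal sketch, assuming the secret is named `token-secret` as above:

```bash
# Extract and decode the ServiceAccount token directly from the secret
kubectl -n jenkins get secret token-secret -o jsonpath='{.data.token}' | base64 --decode

# Extract and decode the cluster CA certificate
kubectl -n jenkins get secret token-secret -o jsonpath='{.data.ca\.crt}' | base64 --decode
```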
Now, we need to go to the main interface, click on the username, choose `Credentials`, then select `Stores from pattern`, `global`, and click on `Add Credentials`.
Here, we will create the credentials for the container registry, the GitHub repository, and the access token for the production cluster (the second cluster). As an example, we will use Docker Hub. If you want to learn how to authenticate with cloud providers, click on AWS, GCP, Azure, OCI.
To specify the Docker Hub and GitHub credentials, select the option `Username with password`, while for the token, choose `Secret text`. In the Secret field, enter the decoded token that we obtained earlier.
Return to the cloud configuration and repeat the same process for the second cluster. Choose the access token and click on `Test Connection` to verify that it works correctly.
NOTE: You must set the address that you assigned in the Ingress in the `Kubernetes URL` field.
Finally, we must change the Jenkins URL so that our Kubernetes agents are detected correctly. Go to `Manage Jenkins > System > Jenkins URL` and set `http://jenkins-service.jenkins.svc.cluster.local:8080/`, as follows.
Now, let's deploy ArgoCD. First, create its namespace and apply the official install manifest, as follows.
```bash
kubectl create ns argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
# customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
# customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
# serviceaccount/argocd-application-controller created
# serviceaccount/argocd-applicationset-controller created
# serviceaccount/argocd-redis created
# ...
```
To access the dashboard, we can do a port-forward.
```bash
kubectl -n argocd port-forward svc/argocd-server 8081:443
```
Alternatively, you can assign it to a domain using an Ingress, as follows.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: argocd-client-ingress
  namespace: argocd
spec:
  ingressClassName: nginx
  rules:
    - host: <address-domain>
      http:
        paths:
          - backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - <address-domain>
      secretName: <secret-name-tls-cert>
```
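Note that the `ssl-passthrough` annotation only takes effect if the NGINX Ingress Controller was started with the `--enable-ssl-passthrough` flag. A quick check, assuming the controller runs in the `ingress-nginx` namespace under the usual deployment name:

```bash
# Prints the flag name if SSL passthrough is enabled on the controller
kubectl -n ingress-nginx get deploy ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | grep -o 'enable-ssl-passthrough'
```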
Now, we will download the ArgoCD CLI.
```bash
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64
argocd version --short
# argocd: v2.10.6+d504d2b
```
We must authenticate against the ArgoCD instance. To retrieve the initial password, we can do the following.
```bash
argocd admin initial-password -n argocd
# OYUdtXCU8CIZFjH0
# This password must be only used for first time login. We strongly recommend you update the password using `argocd account update-password`.
```
If you have associated an ingress, you can do this.
```bash
kubectl -n argocd get ing
# NAME                    CLASS   HOSTS              ADDRESS        PORTS     AGE
# argocd-client-ingress   nginx   <address-domain>   <ip-address>   80, 443   2d1h
# ...

argocd login <address-domain>
# Username: admin
# Password:
# 'admin:login' logged in successfully
# Context '<address-domain>' updated
```
But if you are using a port-forward:
```bash
argocd login 127.0.0.1:8081
# WARNING: server certificate had error: tls: failed to verify certificate: x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs. Proceed insecurely (y/n)? y
# Username: admin
# Password:
# 'admin:login' logged in successfully
# Context '127.0.0.1:8081' updated
```
Now, we must change the password for security.
```bash
argocd account update-password
# *** Enter password of currently logged in user (admin):
# *** Enter new password for user admin:
# *** Confirm new password for user admin:
# Password updated
# Context '127.0.0.1:8081' updated
```
Now, we can associate the Kubernetes clusters with our ArgoCD server.
```bash
kubectl config get-contexts -o name
# context-cluster1
# context-cluster2

argocd cluster add context-cluster1 --grpc-web
# WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `context-cluster1` with full cluster level privileges. Do you want to continue [y/N]? y
# INFO[0002] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
# INFO[0002] ClusterRole "argocd-manager-role" updated
# INFO[0002] ClusterRoleBinding "argocd-manager-role-binding" updated
# Cluster '<ip-address-cluster1>' added

argocd cluster add context-cluster2 --grpc-web
# WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `context-cluster2` with full cluster level privileges. Do you want to continue [y/N]? y
# INFO[0002] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
# INFO[0002] ClusterRole "argocd-manager-role" updated
# INFO[0002] ClusterRoleBinding "argocd-manager-role-binding" updated
# Cluster '<ip-address-cluster2>' added
```
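You can confirm that both clusters were registered and are reachable:

```bash
# List the clusters known to ArgoCD along with their connection status
argocd cluster list --grpc-web
```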
NOTE: By default, ArgoCD has a sync interval of 3 minutes. You can change this value to 1 minute by doing the following:
```bash
kubectl -n argocd edit cm argocd-cm
```

You must add this content:

```yaml
data:
  timeout.reconciliation: 60s
```
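If you prefer a one-liner over an interactive edit, the same change can be applied with a merge patch; a sketch:

```bash
# Set the reconciliation interval to 60s without opening an editor
kubectl -n argocd patch cm argocd-cm --type merge -p '{"data":{"timeout.reconciliation":"60s"}}'
```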
Finally, it is necessary to restart the repo-server deployment.
```bash
kubectl -n argocd rollout restart deploy argocd-repo-server
```
Now, you can access your ArgoCD dashboard.
Now you know how to deploy Jenkins and ArgoCD servers in a multi-node Kubernetes environment for a basic CI/CD cycle. In the next guide, Deploying our first applications in Kubernetes with a CI/CD cycle using Jenkins and ArgoCD, we will implement our first application.