Kaniko use
In this guide, you will learn how to configure and use Kaniko to build your applications, leveraging its features in combination with Jenkins for a secure and efficient CI/CD process.
Before getting started, make sure you have:
- A running Kubernetes cluster
- Jenkins configured with the Kubernetes plugin
- A Git repository containing your app and a valid Dockerfile
- A container registry (e.g., Docker Hub, GitHub Container Registry)
- Your Jenkins credentials configured (for Git and Registry access)
You can build your own builder image, bundling the Kaniko executor with the tools the pipeline needs (bash, git, curl, sed), like this.
FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3.21
RUN apk add --no-cache \
bash \
git \
curl \
sed \
ca-certificates \
&& update-ca-certificates
COPY --from=kaniko /kaniko /kaniko
COPY --from=kaniko /kaniko/.docker /kaniko/.docker
ENV PATH=$PATH:/kaniko
ENV DOCKER_CONFIG=/kaniko/.docker/
SHELL ["/bin/sh", "-c"]
NOTE: You can build this image yourself or pull it directly from ghcr.io/juancamilocc/builders:kaniko.
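If you build it yourself, a minimal sketch looks like this (assuming Docker is available on your workstation and that ghcr.io/<YOUR_USER>/builders is a repository you can push to):
# Build the builder image from the Dockerfile above and push it to your registry
docker build -t ghcr.io/<YOUR_USER>/builders:kaniko .
docker login ghcr.io
docker push ghcr.io/<YOUR_USER>/builders:kaniko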
In your Kubernetes cluster, create a Docker registry secret in the Jenkins namespace as follows.
kubectl create -n jenkins secret docker-registry dockerhub-secret \
--docker-username=<YOUR_USER> \
--docker-password=<YOUR_TOKEN> \
--docker-email=<YOUR_EMAIL> \
  --docker-server=https://index.docker.io/v1/ # You can point this at any registry (AWS ECR, GCP Artifact Registry, Azure ACR, OCI, among others)
Verify that the secret was created correctly.
kubectl -n jenkins get secrets
# NAME               TYPE                             DATA   AGE
# dockerhub-secret   kubernetes.io/dockerconfigjson   1      14s
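Optionally, you can decode the secret to confirm the Docker config that Kaniko will later see at /kaniko/.docker/config.json (this only reads the dockerhub-secret created above):
kubectl -n jenkins get secret dockerhub-secret \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d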
Here's a complete Jenkinsfile that builds and pushes a Docker image using Kaniko, then updates a Kubernetes manifest and pushes the change back to Git.
pipeline {
agent {
kubernetes {
cloud 'kubernetes-staging'
yaml """
apiVersion: v1
kind: Pod
metadata:
name: kaniko-pod
namespace: jenkins
spec:
containers:
- name: kaniko
image: ghcr.io/juancamilocc/builders:kaniko
imagePullPolicy: IfNotPresent
tty: true
securityContext:
runAsUser: 0
privileged: true
resources:
limits:
memory: "1Gi"
cpu: "750m"
requests:
memory: "500Mi"
cpu: "256m"
volumeMounts:
- name: kaniko-secret
mountPath: /kaniko/.docker
- name: jnlp
image: jenkins/inbound-agent
resources:
limits:
memory: "1Gi"
cpu: "512m"
requests:
memory: "500Mi"
cpu: "256m"
volumes:
- name: kaniko-secret
projected:
sources:
- secret:
name: dockerhub-secret
items:
- key: .dockerconfigjson
path: config.json
"""
}
}
environment {
REPOSITORY = '<YOUR_REPOSITORY>'
BRANCH = 'deployment'
MANIFEST = 'deployment.yaml'
IMAGE_TAG = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
EMAIL_USER = sh(script: 'git log -1 --pretty="%ae"', returnStdout: true).trim()
DATE = sh(script: 'TZ="America/Bogota" date "+%Y-%m-%d-%H-%M-%S"', returnStdout: true).trim()
LAST_CHANGE = sh(script: 'git log -1 --name-status --pretty=format:"%h %s"', returnStdout: true).trim()
RETRY_COUNTS = 2
}
stages {
stage('Build and Push image') {
steps {
container('kaniko') {
script {
retry(RETRY_COUNTS) {
try {
sh 'git config --global --add safe.directory $WORKSPACE'
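                                // Build the image from the workspace Dockerfile and push it, reusing cached layers from the cache repo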
def result = sh(
script: '''
/kaniko/executor --context=`pwd` \
--dockerfile=`pwd`/Dockerfile \
--destination=<YOUR_REGISTRY>:$IMAGE_TAG-$DATE-staging \
--cache=true \
--cache-repo='<YOUR_CACHE_REGISTRY>' \
--snapshot-mode=redo
''',
returnStatus: true
)
echo "Kaniko build and push completed..."
} catch (Exception e) {
echo "Error occurred: ${e.message}"
echo "Retrying..."
error("The stage 'Build and Push image' failed")
}
}
}
}
}
}
stage('Update deployment') {
steps {
container('kaniko') {
script {
retry(RETRY_COUNTS) {
try {
withCredentials([
usernamePassword(
credentialsId: 'github-credentials',
usernameVariable: 'GIT_USERNAME',
passwordVariable: 'GIT_PASSWORD'
)
]) {
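                                    // Clone the deployment branch, point the manifest at the new image tag, and push the change back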
sh '''
git config --global user.email "<YOUR_EMAIL>"
git config --global user.name "<YOUR_USER>"
                                        git clone -b $BRANCH --depth 5 https://$GIT_USERNAME:$GIT_PASSWORD@$REPOSITORY
cd <YOUR_REPOSITORY_PATH>/deployment/staging
sed -i "s/\\(image:.*:\\).*/\\1$IMAGE_TAG-$DATE-staging/" $MANIFEST
git add $MANIFEST
git commit -m "Trigger Build"
git push origin $BRANCH
'''
// Delete repository
sh 'rm -rf <YOUR_REPOSITORY>'
}
} catch (Exception e) {
echo "Error occurred: ${e.message}"
echo "Retrying..."
error("The stage 'Update Deployment' failed")
}
}
}
}
}
}
}
post {
        // Here you can modify or use other notification methods such as Google Chat, Microsoft Teams, or email, among others.
// You can check any of them here: https://github.com/juancamilocc/virtual_resources/wiki/notifications-cicd-cycle
success {
slackSend(
channel: 'notifications',
color: '#00FF00',
message: "Build of <YOUR_PROJECT> was successful!",
attachments: [
[
title: "Build of <YOUR_PROJECT> was successful!",
text: "Build details",
fields: [
[title: "Date", value: "${DATE}", short: true],
[title: "Status", value: "Success", short: true],
[title: "Changes made by", value: "${EMAIL_USER}", short: true],
[title: "Last Merge/commit", value: "${LAST_CHANGE}", short: true],
[title: "Project Tag", value: "<PROJECT_TAG>:${IMAGE_TAG}-${DATE}-staging", short: true]
],
footer: "Jenkins",
ts: env.BUILD_TIMESTAMP,
color: "#36a64f"
]
]
)
}
failure {
slackSend (
channel: 'notifications',
                color: '#FF0000',
message: "Build of <YOUR_PROJECT> failed!!",
attachments: [
[
title: "Build of <YOUR_PROJECT> failed!",
text: "Build details",
fields: [
[title: "Date", value: "${DATE}", short: true],
[title: "Status", value: "Failure", short: true],
[title: "Changes made by", value: "${EMAIL_USER}", short: true],
[title: "Last Merge/commit", value: "${LAST_CHANGE}", short: true]
],
footer: "Jenkins",
ts: env.BUILD_TIMESTAMP,
color: "#ff0000"
]
]
)
}
}
}
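To make the 'Update deployment' stage concrete, this is roughly what the sed substitution does to the image line of the manifest. The image name below is a placeholder, and the backslashes are single here because the command is not wrapped in a Groovy string:
# Before:  image: docker.io/<YOUR_USER>/<YOUR_APP>:<OLD_TAG>
# After:   image: docker.io/<YOUR_USER>/<YOUR_APP>:$IMAGE_TAG-$DATE-staging
sed -i "s/\(image:.*:\).*/\1$IMAGE_TAG-$DATE-staging/" deployment.yaml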
Using Kaniko in your CI/CD pipeline provides a powerful and secure alternative to traditional Docker builds, especially in containerized environments like Kubernetes where the Docker daemon is not available or not preferred.
Key benefits include:
- Daemonless builds: Kaniko runs in userspace, eliminating the need for privileged Docker access.
- Integration with Kubernetes-native tools: It works smoothly inside Jenkins agents running on Kubernetes.
- Improved security: builds do not require a Docker daemon or a mounted Docker socket on the build agents.
- Caching support: Reduces build time by leveraging a layer cache stored in a remote registry.