Kube Academy Scripts: Building Cloud Native Applications

Introduction

How we typically built and deployed software in the past:

code -> build -> app artifact -> deploy -> libs/packages + runtime/middleware + vm guest os

With container platforms like Kubernetes, this paradigm changes.

code -> build -> app artifact + libs/packages + runtime/middleware+container os -> deploy -> kubernetes + host os

Why: deploy & run software in a repeatable and reliable manner across computers.

How: package dependencies with the application; standardize application packaging and the runtime environment.

app and all dependencies -> build -> image (deployment artifact) -> run -> container (runtime instance)

Ecosystem:

  • dockerfile
  • jib
  • paketo & cloud native buildpacks
  • ...and more

Keep in mind:

  • consistency
  • speed
  • security
  • governance

Dockerfile: Part 1

A Dockerfile is simply a script. This part covers:

  • Dockerfile commands
  • launching the app
  • tagging
  • build context

Let's create a project directory:

mkdir ./app
cd ./app

Create a file hello.go:

package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args[1:]) == 0 {
		fmt.Println("USAGE: Provide a string and we'll say hello!")
	} else {
		arg := os.Args[1]
		fmt.Println("Hello, " + arg)
	}
}

Run go mod init example.com/hello-img, and create a Dockerfile:

FROM golang
LABEL maintainer="your e-mail"
WORKDIR /workspace
COPY . .
RUN go install
ENTRYPOINT ["hello"]
CMD ["world"]

Now run docker build . -t hello-img.

We can list all images with docker images.

This doesn't work yet. If everything were in place, docker run hello-img would simply show Hello, world!, and docker run hello-img sunshine would show Hello, sunshine!.

Instead, there is an issue:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "hello": executable file not found in $PATH: unknown.
ERRO[0028] error waiting for container: context canceled
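
A likely cause, not stated in the course notes: go install names the installed binary after the last element of the module path, here hello-img (from example.com/hello-img), while the ENTRYPOINT references hello. A minimal corrected Dockerfile sketch, assuming that diagnosis:

FROM golang
LABEL maintainer="your e-mail"
WORKDIR /workspace
COPY . .
# build to an explicit path instead of relying on go install's naming,
# so the name referenced by the ENTRYPOINT is guaranteed to be in $PATH
RUN go build -o /usr/local/bin/hello .
ENTRYPOINT ["hello"]
CMD ["world"]

After rebuilding with docker build . -t hello-img, both invocations above should behave as described.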

Here are some examples of tagging:

docker tag hello-img hello-img:1
docker images | grep hello
root@photon-f72a20e6dbfc [ ~/app ]# docker images | grep hello
hello-img   1        fc2322e40876   19 minutes ago   807MB
hello-img   latest   fc2322e40876   19 minutes ago   807MB
root@photon-f72a20e6dbfc [ ~/app ]#

Minimize the build context with a .dockerignore file.
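
A minimal .dockerignore sketch (the entries are illustrative, not from the course):

.git
*.md
*.log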

Examine the build context as it was copied into the image:

root@photon-f72a20e6dbfc [ ~/app ]# docker run --rm --entrypoint /bin/sh hello -c "ls -l"
total 12
-rw-r----- 1 root root 137 Mar 16 17:33 Dockerfile
-rw-r----- 1 root root  38 Mar 13 14:22 go.mod
-rw-r----- 1 root root 220 Mar 13 14:20 hello.go
root@photon-f72a20e6dbfc [ ~/app ]#

Dockerfile: Part 2

How do we shut down an app cleanly? Signals must reach the app as PID 1 (trap/forward), and there are two options: exec or tini.

docker exec <container> ps -ef
docker stop <container>
ENTRYPOINT ["sh","-c", "exec hello \"${0} ${*}\""]

Unfortunately, I did not find out how to use tini on Photon OS (tdnf install tini did not work).

How to measure a Docker build: time docker build . -t hello -f Dockerfile --no-cache
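
To see the effect of layer caching, compare a warm-cache build against a forced full rebuild:

time docker build . -t hello -f Dockerfile              # reuses cached layers
time docker build . -t hello -f Dockerfile --no-cache   # rebuilds every layer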

Dockerfile: Part 3

Three objectives:

  • Runtime user
  • Image size
  • Inspecting images

How to find out the runtime user:

root@photon-f72a20e6dbfc [ ~/app ]# docker run --rm --entrypoint /bin/sh hello -c "id"
uid=0(root) gid=0(root) groups=0(root)
root@photon-f72a20e6dbfc [ ~/app ]#

Root user in the container isn't secure. We should follow the principle of least privilege.

Reduce the attack surface by optimizing the image size. To eliminate unnecessary data, we separate the build image from the runtime image (a multi-stage build).

Content of Dockerfile-6:

FROM golang AS build
LABEL maintainer="[email protected]"
WORKDIR /workspace
COPY go.mod go.sum ./
RUN go mod download -json
COPY hello.go .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -a -installsuffix cgo -o hello

FROM scratch
COPY --from=build /workspace/hello /
USER 1001
ENTRYPOINT ["/hello"]
CMD ["world"]

docker build . -t hello -f Dockerfile-6 didn't work yet (COPY failed: file not found in the build context or excluded by .dockerignore: stat go.sum: file does not exist).
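
A plausible cause: the project has no external dependencies yet, so go mod init never produced a go.sum file. A minimal workaround under that assumption:

touch go.sum   # create an empty go.sum so the COPY instruction succeeds
docker build . -t hello -f Dockerfile-6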

With docker inspect we can examine a Docker image's internal keys. First, we need to install jq with tdnf install jq.

root@photon-f72a20e6dbfc [ ~/app ]# docker inspect hello -f '{{json .}}' | jq keys
[
  "Architecture",
  "Author",
  "Comment",
  "Config",
  "Container",
  "ContainerConfig",
  "Created",
  "DockerVersion",
  "GraphDriver",
  "Id",
  "Metadata",
  "Os",
  "Parent",
  "RepoDigests",
  "RepoTags",
  "RootFS",
  "Size",
  "VirtualSize"
]
root@photon-f72a20e6dbfc [ ~/app ]#

And you can list the subkeys, e.g. from the .Config section:

root@photon-f72a20e6dbfc [ ~/app ]# docker inspect hello -f '{{json .Config}}' | jq keys
[
  "AttachStderr",
  "AttachStdin",
  "AttachStdout",
  "Cmd",
  "Domainname",
  "Entrypoint",
  "Env",
  "Hostname",
  "Image",
  "Labels",
  "OnBuild",
  "OpenStdin",
  "StdinOnce",
  "Tty",
  "User",
  "Volumes",
  "WorkingDir"
]
root@photon-f72a20e6dbfc [ ~/app ]#
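
Individual values can then be extracted with Go template paths, for example:

docker inspect hello -f '{{.Config.User}}'              # configured runtime user (empty means root)
docker inspect hello -f '{{json .Config.Entrypoint}}'   # configured entrypoint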

To inspect each layer, we use dive.

The tool is maintained on https://github.com/wagoodman/dive. It seems that the source can be compiled and used on aarch64 as well.

wget https://github.com/wagoodman/dive/archive/refs/tags/v0.10.0.tar.gz
tar -xzvf v0.10.0.tar.gz
cd dive-0.10.0
make
cd dist
cd dive_linux_amd64/
cp ./dive /usr/bin

Now run docker images, and use the image ID with dive <id>.
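
dive can also run non-interactively, which is handy in pipelines:

dive hello:latest        # interactive layer-by-layer inspection
dive --ci hello:latest   # non-interactive analysis with pass/fail output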

Building images for Java applications with Jib

Jib is a tool for building images from Java applications without using Docker or Dockerfiles. Benefits of Jib:

Simple:

  • does not require a Dockerfile
  • does not require the Docker daemon or Docker CLI to be installed

Fast:

  • optimizes for build speed (leverages layering in Docker images)
  • optimizes for application startup

Reproducible:

  • same app + same build metadata -> identical image every time

Build to an image registry (does not require the Docker daemon or CLI):

mvn compile com.google.cloud.tools:jib-maven-plugin:2.2.0:build -Dimage=$REG/$REPO/$IMG:$TAG

Build to the Docker daemon (default image name $artifactId:$version):

mvn compile com.google.cloud.tools:jib-maven-plugin:2.2.0:dockerBuild [ -Dimage=$IMG:$TAG ]

Build as a tar (default image name $artifactId:$version):

mvn compile com.google.cloud.tools:jib-maven-plugin:2.2.0:buildTar
docker load --input target/jib-image.tar
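
For clarity, the variables in the registry build above might be set like this (the values are hypothetical):

REG=harbor.example.com
REPO=demo
IMG=hello-java
TAG=1.0.0
mvn compile com.google.cloud.tools:jib-maven-plugin:2.2.0:build -Dimage=$REG/$REPO/$IMG:$TAG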

With the plugin added to the POM, the shorter goals are available:

mvn compile jib:build
mvn compile jib:dockerBuild
mvn compile jib:buildTar
mvn package

Jib introduces some performance optimizations. -> "What would a Dockerfile for a Jib-built image look like?"

Reproducible builds -> Jib guarantees the same image ID even when the build runs at different times.
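
A quick way to check this, assuming the image built to the Docker daemon is named hello-java (a hypothetical name): build twice and compare the IDs.

mvn compile jib:dockerBuild
docker images --no-trunc --format '{{.ID}}' hello-java
mvn compile jib:dockerBuild
docker images --no-trunc --format '{{.ID}}' hello-java   # same ID as before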

Base image configuration & debug option: docker run -it --entrypoint /busybox/sh <image>

Container configuration: tags, labels, user, command arguments, ports, JVM flags, agents (src/main/jib), and more

Install Jib on Photon OS:

tdnf install -y openjdk8
curl -J -L -O https://github.com/GoogleContainerTools/jib/archive/refs/tags/v3.2.0-maven.tar.gz
tar -xzvf jib-3.2.0-maven.tar.gz
cd jib-3.2.0-maven
./gradlew jib-cli:installDist
./jib-cli/build/install/jib/bin/jib --version

Building images with Buildpacks: The Cloud Native Buildpacks Project

The Cloud Native Buildpacks project provides a simple yet sophisticated approach for building images in a way that can be easily scaled and operationalized.

  • Conceived by Heroku in 2011
  • turn source code or an artifact into an application that can be run in the cloud (primarily on Heroku and Cloud Foundry); also GitLab, Knative, Deis, Dokku, and Drie
  • language support and product integration: Heroku/Cloud Foundry emphasized language support; many community buildpacks provide integration of commodity products
  • Early generations: app -> push -> detect + stage + run

Limitations:

  • Can't run buildpacks off-platform
  • Can't deploy to another runtime; restricted to Heroku, CF
  • Difficult to modularize buildpacks
  • No single API standard

Cloud Native Buildpacks (CNB): a CNCF Sandbox project initiated by Pivotal and Heroku in 2018, with contributors including Google, Microsoft, and others

  • Well-defined platform-to-buildpack contract to take source code and output OCI images
  • Unified standard to enable ecosystem of platforms and modular buildpacks
  • focus on how to orchestrate builds, not how to run an app

Platform: a tool that makes buildpack functionality available to end users.
Buildpacks: components that provide runtime support for applications.

Platform API <-> ? <-> Buildpack API. The implementation of "?" is the lifecycle.

First build: the platform executes the lifecycle (detect, build) on a build image and exports the resulting layers onto the run image, along with a cache. The analyze phase reads metadata from a previous image; if the cache is consistent, there is the possibility to restore layers. Export pushes the finished image to a registry.

Rebuilds: [diagram]

Rebase phase, e.g. for OS CVE patches: [diagram]

Cloud Native Buildpacks by example: [diagram]

Platform ecosystem: [diagram]

Building images with Buildpacks: pack, Spring Boot, kpack, and Paketo buildpacks

  • Platform and Buildpacks: pack, kpack, Spring Boot + Paketo Buildpacks
  • Building images
  • Inspecting images and builders
  • rebasing images
  • Using custom buildpacks
  • configuration
  • benefits

As prerequisites, kind and kubectl must be installed. Have a look at Setting up the workstation and make sure that you have a functioning kind/kubectl cluster. Now install the tools. The following recipe works on aarch64 as well.

# pack
git clone https://github.com/buildpacks/pack.git
cd pack
make
make install
/usr/bin/pack --version
# kpack
kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.5.1/release-0.5.1.yaml

The pods have to reach the status 'Ready'. This can take quite some time.

kubectl get pods --namespace kpack --watch

Hands On with Paketo & Cloud Native Buildpacks

pack set-default-builder gcr.io/paketo-buildpacks/builder:base
docker images
pack build go-img

pack inspect-image go-img
docker run --rm --entrypoint /bin/sh go-img -c "id"
pack rebase go-img --run-image gcr.io/paketo-buildpacks/run:0.0.17-base-cnb

pack build go-img --buildpack from=builder --buildpack ../sample-buildpack
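
The builder itself can be inspected as well; this lists its buildpacks and run image:

pack inspect-builder gcr.io/paketo-buildpacks/builder:base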

Java example:

./mvnw spring-boot:build-image -DskipTests

Inspect the java image:

pack inspect-image hello-java:1.0.0 --bom | jq '.local[].name' -r

kpack:

kubectl api-resources --api-group kpack.io   # older kpack releases used build.pivotal.io
cat kpack/builder.yaml

tag -> destination of the image

More information about the image and its builds:

kubectl describe image hello-go | more

reason for the build:

LAST_BUILD=$(kubectl describe image hello-go | grep "Latest Build Ref" | sed 's/.* //')
kubectl get build $LAST_BUILD -o json | jq .metadata.annotations

Build log (using the kpack logs CLI):

logs -namespace kpack-builds -image hello-go

Benefits of Cloud Native Buildpacks:

  • Provides an easy way to build images from source code
  • Enables an ecosystem of platforms and buildpacks for different use cases
  • Minimizes rebuild time and data transfer through use of cache and optimized reuse of layers
  • Provides a simple and nearly-instant 'rebase' method of patching OS vulnerabilities
  • Enables modularization of build functionality
    • Enables simple standardization and re-use
    • Enables plugging in support for integrations (APMs, scanners, etc...)
  • Enables inspections into the contents of images to provide transparency

Benefits of Paketo Buildpacks:

  • Polyglot support with best practices in each language ecosystem
  • include base images for build and run
  • Actively updated to provide the latest patches and releases of upstream language runtimes and OS packages
  • Reproducible: same app + same build metadata -> identical image every time
  • Modularized build functionality
  • Configuration through buildpack.yml or environment variables

Summary of topics covered:

  • Using platforms & buildpacks: pack, kpack, Spring Boot
  • Paketo buildpacks and custom buildpacks
  • Building images; caching and reusing layers
  • Inspections: builder, image
  • OS rebase
  • Paketo configuration: buildpack.yml, env vars
  • Benefits

Top takeaways from the Building Images course

Weblink: https://kube.academy/courses/building-images/lessons/top-takeaways-from-the-building-images-course

Building Images:

  • Dockerfile
  • Jib
  • Cloud Native Buildpacks with Paketo Buildpacks