Kube Academy Scripts: Building Applications for Kubernetes
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	log.Println("Starting: Building Apps for K8s app")
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8000", nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
	log.Printf("Request received from %s", r.RemoteAddr)
	fmt.Fprintf(w, "Building Apps For K8s app says Hi")
}
go build -o server main.go
./server
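Test the server locally from a second shell (assuming port 8000 is free on your machine):
curl localhost:8000
It should answer with Building Apps For K8s app says Hi.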
FROM golang:1.13
COPY main.go .
RUN go build -o /server
CMD ["/server"]
docker build -t dcasota/building-apps .
docker tag <image id> dcasota/building-apps:0.1
docker login
docker push dcasota/building-apps:0.1
kind create cluster
kubectl get all -A
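As an alternative to pushing to Docker Hub, kind can load a locally built image straight into the cluster nodes:
kind load docker-image dcasota/building-apps:0.1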
apiVersion: v1
kind: Pod
metadata:
  name: building-apps-pod
  labels:
    app: kubeacademy
spec:
  containers:
  - name: building-apps-container
    image: dcasota/building-apps:0.1
kubectl apply -f pod.yaml
kubectl get pods
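The startup message and the per-request log lines of the Go app are visible in the pod logs:
kubectl logs building-apps-pod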
kubectl delete pod building-apps-pod
or
kubectl delete -f pod.yaml
kind get clusters
kind delete cluster
Sometimes kind delete cluster has to be run twice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: building-apps-deploy
  labels:
    app: kubeacademy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubeacademy
  template:
    metadata:
      labels:
        app: kubeacademy
    spec:
      containers:
      - name: building-apps-container
        image: dcasota/building-apps:0.1
kubectl apply -f deployment.yaml
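Check that the deployment created a replica set and two pods:
kubectl get deploy,rs,pods
To scale without editing the manifest, kubectl scale deployment building-apps-deploy --replicas=3 also works.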
apiVersion: v1
kind: Service
metadata:
  name: building-apps-svc
  labels:
    app: kubeacademy
spec:
  selector:
    app: kubeacademy
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
kubectl apply -f service.yaml
kubectl get pods
As soon as the status isn't "ContainerCreating" anymore but "Running", type kubectl get svc. With the cluster IP and the name of the first pod, do the following:
kubectl exec building-apps-deploy-<id> -- curl <IP>
Or you can also run a ping:
kubectl get pod -o wide
kubectl exec building-apps-deploy-<id> -- ping <pod IP>
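Alternatively, port-forward the service to your workstation and test it with curl (the local port 8080 here is an arbitrary choice):
kubectl port-forward svc/building-apps-svc 8080:80
curl localhost:8080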
For kustomize, create the following directory layout:
mkdir base
mkdir -p overlays/production
In base, place deployment.yaml, kustomization.yaml, and service.yaml.
kustomization.yaml:
resources:
- deployment.yaml
- service.yaml
In overlays/production, place kustomization.yaml and replica_count.yaml.
kustomization.yaml:
namePrefix: prod-
commonLabels:
  tier: prod
bases:
- ../../base
patchesStrategicMerge:
- replica_count.yaml
replica_count.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: building-apps-deploy
spec:
  replicas: 5
Run kustomize build overlays/production | kubectl apply -f -
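Verify the overlay: the deployment name now carries the prod- prefix and runs five replicas:
kubectl get deploy prod-building-apps-deploy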
Next, for Helm, run helm create build4kube. You should get a directory ./build4kube. Afterwards, run:
rm ./build4kube/values.yaml
rm -rf ./build4kube/templates/*
cd ./build4kube
edit Chart.yaml:
apiVersion: v2
name: build4kube
description: Kube Academy Demo
type: application
version: 0.1.0
appVersion: 0.1
run
cd templates/
cp ../../base/deployment.yaml .
cp ../../base/service.yaml .
edit deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-building-apps-deploy"
  labels:
    app: kubeacademy
spec:
  replicas: {{ .Values.deploy.replicas }}
  selector:
    matchLabels:
      app: kubeacademy
  template:
    metadata:
      labels:
        app: kubeacademy
    spec:
      containers:
      - name: building-apps-container
        image: "{{ .Values.deploy.image.repository }}:{{ .Chart.AppVersion }}"
edit service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-building-apps-svc"
  labels:
    app: kubeacademy
spec:
  selector:
    app: kubeacademy
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
Run cd ..
edit values.yaml:
deploy:
  replicas: 2
  image:
    repository: dcasota/building-apps
Run cd ..
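Before installing, it is worth sanity-checking the chart; helm lint validates the chart structure, and helm template renders the manifests locally without touching the cluster:
helm lint ./build4kube
helm template ./build4kube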
Run helm install --generate-name ./build4kube --dry-run and check the rendered values. To actually install it, run helm install --generate-name ./build4kube and check with kubectl get all.
If a pod hangs in ContainerCreating or shows errors (ImagePullBackOff, ErrImagePull, ...), go back to the 'kind create cluster' step and repeat until you do not get any errors.
Let's say there is an updated app version to be packaged. We first update Chart.yaml:
apiVersion: v2
name: build4kube
description: Kube Academy Demo
type: application
version: 0.1.0
appVersion: 0.2
Now run helm upgrade build4kube-<id> ./build4kube. Use the id from the kubectl get all output. Verify e.g. the version with kubectl describe deploy.
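Helm keeps a revision history per release, which the rollback below builds on; list it with:
helm history build4kube-<id>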
To apply a rollback, run helm rollback build4kube-<id> and check the output of kubectl get all and kubectl describe deploy build4kube-<id>-building-apps-deploy.
A better way is to edit Chart.yaml, set the previous appVersion, and rerun helm upgrade.
To package the chart, use helm package ./build4kube. You will get an archive build4kube-0.1.0.tgz. If you set 'version' in Chart.yaml to, say, 0.1.1, the archive will be named build4kube-0.1.1.tgz.
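The archive can be installed the same way as the chart directory:
helm install --generate-name ./build4kube-0.1.0.tgz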
Clean up the demo with helm uninstall build4kube-<id>, then
kubectl delete deployment.apps/build4kube-<id>-building-apps-deploy
kubectl delete service/build4kube-<id>-building-apps-svc
Skaffold automates the build, push, and deploy steps.
Copy all the files (main.go, Dockerfile, deployment.yaml, etc.) from the previous 'building-applications-for-kubernetes' chapter into a new directory myapp, and run skaffold init --generate-manifests there.
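skaffold init writes a skaffold.yaml next to the manifests. As a rough sketch (the apiVersion depends on your Skaffold release, and the image name is carried over from the earlier steps; treat both as assumptions, not as the generated output), such a config can look like this:
apiVersion: skaffold/v2beta12
kind: Config
metadata:
  name: myapp
build:
  artifacts:
  - image: dcasota/building-apps   # image built from the local Dockerfile
deploy:
  kubectl:
    manifests:                     # plain manifests applied with kubectl
    - deployment.yaml
    - service.yaml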
Now run skaffold dev. In this failing iteration we see that the placeholder syntax in service.yaml (and deployment.yaml) isn't accepted (the Helm-style {{ ... }} placeholders are not valid in plain manifests), and the process exits with status 1. After correcting these, the process works.
In addition, in deployment.yaml I've changed the replicas value from 1 to 2. Instantly, the additional replica is created. Also good to know: a modified main.go is detected, and Skaffold hooks in by rebuilding the Docker image.
Add for instance const version = "1.1" and modify the code to echo the version, e.g. log.Println("Starting: Building Apps " + version + " for K8s app"), as in the sketch below.
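Put together, a sketch of main.go after this change (the handler also echoes the version, which matches the curl output below):
package main

import (
	"fmt"
	"log"
	"net/http"
)

// version is echoed by the startup log line and by the HTTP handler.
const version = "1.1"

func main() {
	log.Println("Starting: Building Apps " + version + " for K8s app")
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8000", nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
	log.Printf("Request received from %s", r.RemoteAddr)
	fmt.Fprintf(w, "Building Apps "+version+" for K8s app says Hi")
}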
Stop the process (ctrl-c) and restart it with skaffold dev --port-forward. Now run curl localhost:4503. You should see your customized output: Building Apps 1.1 for K8s app says Hi!
Workaround:
kind delete cluster
kind create cluster
docker login
cd ./myapp
skaffold dev