Minikube

This page provides detailed information about how the cluster is configured beyond a simple minikube start, and serves as a runbook for getting the project up and running on a local k8s cluster.

General flow of the build

Check the individual steps below for the detail, but at a very high level getting this minikube cluster running looks like this:

  • Download & install pre-req software.
  • Edit your local hostfile with vets.internal addresses.
  • Use minikube start to create & boot up a k8s cluster.
  • Deploy Argo CD & the cluster-services chart.
  • Start the tunnel: minikube -p vets tunnel.
  • Initialise HashiCorp Vault (create root keys & unseal).
  • Run the Vault terraform to create auth backends and secrets.
  • Deploy the vets-app environments (dev & production).

Pre-requisites

The build itself uses minikube, helm and terraform, plus optionally a standalone kubectl (see below).

Optional

  • python3: required only if you want to run the project directly on your machine, outside of minikube.
  • sqlite3: for interacting with the local SQLite database.

Host file entries

To use the Ingress rules this project creates, add some entries to your local hosts file:

# Host entries for fluffy-octo-telegram testing
127.0.0.1	dev.vets.internal production.vets.internal
# ci/cd entries
127.0.0.1	argocd.vets.internal workflows.vets.internal 
# Logging
127.0.0.1	kibana.vets.internal 
# Monitoring
127.0.0.1 	grafana.vets.internal alertmanager.vets.internal prometheus.vets.internal 
# user admin / secrets
127.0.0.1	reset.vets.internal admin.vets.internal vault.vets.internal
# pgadmin
127.0.0.1   pgadmin.dev.vets.internal pgadmin.production.vets.internal

Minikube initial start

minikube start --nodes 1 --addons ingress \
  --cpus max --memory 12192 --addons metrics-server \
  --extra-config=kubelet.max-pods=1000 -p vets

This will churn away for a few minutes, after which you will have a single-node kubernetes cluster running on your local machine. Nice one!

N.B. you may want to tune the --memory value to suit what your machine has available.
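
As a quick sanity check (using the vets profile from above), you can confirm the node is up and the addons from the start command are enabled:

# Confirm the single node reports Ready
minikube -p vets kubectl -- get nodes

# Confirm the ingress and metrics-server addons are enabled
minikube addons list -p vets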

General operation

After that you can start, stop, check the status of, and open traffic to the cluster with these commands:

minikube start -p vets
minikube stop -p vets

minikube status -p vets 
minikube tunnel -p vets

Querying and Editing the cluster resources

There are two ways to do this. One is to use the kubectl bundled with minikube:

minikube -p vets kubectl -- get pods --all-namespaces

And the other just using kubectl (if you installed it as part of the optional tooling):

kubectl get pods --all-namespaces

Much nicer :)
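
If you want to keep typing plain kubectl without a separate install, one option (a shell alias on your machine, not something the repo sets up) is to point it at the bundled binary:

# Make plain kubectl calls go via the bundled binary and the vets profile
alias kubectl="minikube -p vets kubectl --"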

Cluster Services

# Create namespace for argocd 
kubectl create ns argocd

# Install argocd, the main CD tool I'm playing with right now 
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Get the initial admin password for argocd 
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d

# Check everything is up
kubectl get pods -n argocd
 
# Install the rest of the cluster services (as argocd Applications)
helm install cluster-services deploy-descriptors/cluster/chart --namespace argocd
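
Once the chart is in, you can watch the Argo CD Application objects it creates come up (the exact list depends on what the chart defines):

# List the argocd Applications created by the cluster-services chart
kubectl get applications -n argocd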

Give this 5 minutes or so to warm up all the pods, and make sure you have started

minikube -p vets tunnel

in another terminal, which will route the local hostfile entries through to the Ingress rules of the cluster.

We can then start configuring the Vault service, as it needs to be populated before we deploy the vets-app environments.

Vault first initialisation

Go to Vault and create the initial root keys for your Vault installation.

Just create one unseal key and save the .json file it dumps for you somewhere very safe. You will need it every time you start the cluster to unseal the vault.
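
If you prefer the CLI to the UI, a rough sketch of the same initialisation follows; it assumes the chart deploys Vault as pod vault-0 in a vault namespace, so adjust to your install:

# Initialise with a single unseal key and keep the output json somewhere safe
kubectl exec -n vault vault-0 -- vault operator init \
  -key-shares=1 -key-threshold=1 -format=json > vault-init.json

# Unseal with the key from that json file (repeat after every cluster start)
kubectl exec -n vault vault-0 -- vault operator unseal <UNSEAL_KEY>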


Creation of app secrets

Manually for now

kubectl create ns dev-vets
kubectl create secret generic vets-app -n dev-vets \
  --from-literal=DJANGO_SECRET_KEY='<<<<<<<<<<<<<<<<<<< A VERY LONG RANDOM STRING >>>>>>>>>>>>>>>>>>>' \
  --from-literal=POSTGRES_PASSWORD='<<<<< A COMPLEX PASSWORD >>>>>'

Repeat for namespace production-vets as well.
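
If you need a quick way to generate those values, one option is below (python3 is in the optional tooling above; openssl being on your machine is an assumption):

# Generate a long random string for DJANGO_SECRET_KEY
python3 -c "import secrets; print(secrets.token_urlsafe(64))"

# Generate a complex password for POSTGRES_PASSWORD
openssl rand -base64 24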

This will be moved to a vault implementation of secrets soon.

Optional secrets

Create an access token in Docker Hub and substitute your own credentials below so the CI pipeline can push images to your own Docker Hub account.

export DOCKER_USERNAME=******
export DOCKER_TOKEN=******
kubectl create secret generic docker-config \
  --from-literal="config.json={\"auths\": {\"https://index.docker.io/v1/\": {\"auth\": \"$(echo -n "$DOCKER_USERNAME:$DOCKER_TOKEN" | base64)\"}}}"

If you do not do this, you can just set push=false on the buildkit step in the CI pipeline later. No worries.
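
If you did create it, a quick (optional) sanity check is to decode the stored config and eyeball the auth entry:

# Decode the stored config.json to confirm the auth string looks right
kubectl get secret docker-config -o jsonpath='{.data.config\.json}' | base64 -d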

Start the tunnel

You can now use minikube tunnel -p vets (if it is not already running) to open the ports as needed and get to the ingress controller.

You can now hit argocd and monitor the rest of the cluster services deploying from there.

Vault setup

Go to Vault and follow its init steps to unseal it, saving the creds somewhere safe.

After you do this, its state in Argo CD will go Healthy.
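
The flow at the top also mentions running the Vault terraform to create auth backends and secrets. A minimal sketch of that step, assuming the terraform lives somewhere like terraform/vault in the repo (the path and URL scheme here are assumptions, check the repo):

# Point terraform at the freshly unsealed Vault (via the tunnel & ingress)
export VAULT_ADDR="http://vault.vets.internal"
export VAULT_TOKEN="<root token from the init json>"

# Apply the Vault configuration (auth backends, secret engines, policies)
cd terraform/vault
terraform init
terraform apply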

Adding roles to argo-workflows default user

In order for argo-workflows to be able to use output parameters, it needs to be able to patch pods in its namespace. This should really be a purpose-built Role that grants only what it needs rather than the admin ClusterRole used below; a sketch of that follows the command.

kubectl create rolebinding default-admin \
  --clusterrole=admin --serviceaccount=argo-workflows:default -n argo-workflows
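
A sketch of what that tighter alternative could look like (the role name here is illustrative, not something the repo defines):

# A role that only allows reading and patching pods in the namespace
kubectl create role workflow-pod-patcher \
  --verb=get,list,patch --resource=pods -n argo-workflows

# Bind it to the default service account used by argo-workflows
kubectl create rolebinding workflow-pod-patcher \
  --role=workflow-pod-patcher \
  --serviceaccount=argo-workflows:default -n argo-workflows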

Deploying the vets-app

Now you can move over to the Testing page for how to deploy the ci pipelines and the vets apps to the namespaces.