Minikube - w3s7y/fluffy-octo-telegram GitHub Wiki

This page provides detail on how the cluster is configured beyond a simple minikube start, and serves as a runbook for getting the project up and running locally.

Optional tooling

  • kubectl Optional, but nicer than going through minikube kubectl -- for every command (see Querying and Editing below).
  • python3 Required if you want to run the project outside of minikube, directly on your machine.
  • sqlite3 For interacting with the local sqlite database.
Minikube initial start

minikube start --nodes 1 --addons ingress \
  --cpus max --memory 12192 --addons metrics-server \
  --extra-config=kubelet.max-pods=1000 -p vets

This will churn away for a few minutes, after which you will have a single-node Kubernetes cluster running on your local machine. Nice one!
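
A quick sanity check that the node and the addons came up (using the bundled kubectl, so no extra tooling is needed yet):

minikube -p vets kubectl -- get nodes
minikube -p vets kubectl -- get pods -n kube-system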

General operation

After that you can start, stop, check the status of, and open traffic to the cluster with these commands:

minikube start -p vets     # boot the cluster back up
minikube stop -p vets      # shut it down (cluster state is kept)

minikube status -p vets    # check what state it is in
minikube tunnel -p vets    # route traffic to the ingress controller (leave this running)

Querying and Editing the cluster resources

There are two ways to do this. One is to use the kubectl bundled in with minikube:

minikube -p vets kubectl -- get pods --all-namespaces

The other is to use kubectl directly (if you installed it as part of the optional tooling above):

kubectl get pods --all-namespaces

Much nicer :)
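
minikube writes a kubeconfig context named after the profile, so if kubectl is currently pointed somewhere else you can switch to this cluster with:

kubectl config use-context vets
kubectl config current-context   # should print: vets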

Cluster Services

Currently, all the secret setup is manual, but it will be automated over time:

Creation of app secrets

These are created manually for now:

kubectl create ns dev-vets
kubectl create secret generic vets-app -n dev-vets \
  --from-literal=DJANGO_SECRET_KEY='<<<<<<<<<<<<<<<<<<< A VERY LONG RANDOM STRING >>>>>>>>>>>>>>>>>>>' \
  --from-literal=POSTGRES_PASSWORD='<<<<< A COMPLEX PASSWORD >>>>>'

Repeat for namespace production-vets as well.
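
For example, with the secret values generated by the python3 from the optional tooling (use whatever generator you prefer):

kubectl create ns production-vets
kubectl create secret generic vets-app -n production-vets \
  --from-literal=DJANGO_SECRET_KEY="$(python3 -c 'import secrets; print(secrets.token_urlsafe(64))')" \
  --from-literal=POSTGRES_PASSWORD="$(python3 -c 'import secrets; print(secrets.token_urlsafe(32))')"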

Optional: create an access token in Docker Hub and substitute your own credentials below so that image pushes go to your own Docker Hub account.

export DOCKER_USERNAME=******
export DOCKER_TOKEN=******
# The registry key below is Docker Hub's default endpoint
kubectl create secret generic docker-config \
  --from-literal=config.json="{\"auths\": {\"https://index.docker.io/v1/\": {\"auth\": \"$(echo -n "$DOCKER_USERNAME:$DOCKER_TOKEN" | base64)\"}}}"

If you do not do this, we can just set push=false on the buildkit step in the CI pipeline later. No worries.
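
To check the secret decodes to the config.json you expect:

kubectl get secret docker-config -o jsonpath='{.data.config\.json}' | base64 -d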

Tunnel and host file entries

To use the Ingress rules we create, add some entries to your local hosts file for these hostnames:

# Host entries for fluffy-octo-telegram testing
dev.vets.internal production.vets.internal
# ci/cd entries
argocd.vets.internal workflows.vets.internal
# Logging
kibana.vets.internal
# Monitoring
grafana.vets.internal alertmanager.vets.internal prometheus.vets.internal
# user admin / secrets
reset.vets.internal admin.vets.internal vault.vets.internal
# pgadmin
pgadmin.production.vets.internal
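
A sketch of adding them, assuming the tunnel exposes the ingress controller on 127.0.0.1 (typical for the docker driver on macOS/Windows; on Linux substitute the output of minikube ip -p vets):

sudo tee -a /etc/hosts <<'EOF'
127.0.0.1 dev.vets.internal production.vets.internal
127.0.0.1 argocd.vets.internal workflows.vets.internal
127.0.0.1 kibana.vets.internal
127.0.0.1 grafana.vets.internal alertmanager.vets.internal prometheus.vets.internal
127.0.0.1 reset.vets.internal admin.vets.internal vault.vets.internal
127.0.0.1 pgadmin.production.vets.internal
EOF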

Start the tunnel

You can now run minikube tunnel -p vets (if it is not already running) to open the ports as needed and reach the ingress controller.
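
To confirm it is all wired up (assuming the ingress addon installed into the usual ingress-nginx namespace; a 404 from nginx just means no Ingress rules exist yet):

kubectl get pods -n ingress-nginx
curl -sI http://dev.vets.internal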

Deploying the cluster services

# Create namespace for argocd resources 
kubectl create ns argocd

# Install argocd, the main CD tool I'm playing with right now
# (the upstream stable install manifests from the argo-cd getting started guide)
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Get the initial admin password for argocd
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d

# Check everything is up
kubectl get pods -n argocd
# Install the rest of the cluster services (as argocd Applications)
helm install cluster-services deploy-descriptors/cluster/chart --namespace argocd

You can now hit argocd at argocd.vets.internal and monitor the rest of the cluster services deploying from there.
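
If the ingress hostname is not resolving yet, a port-forward to the argocd-server service (the name the install manifests use) works just as well:

kubectl port-forward svc/argocd-server -n argocd 8080:443
# then browse to https://localhost:8080 and log in as admin with the password from above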

Vault setup

Go to Vault (vault.vets.internal) and follow its init steps to unseal it, saving the unseal keys and root token somewhere safe.
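
If you would rather do that from the CLI, a rough sketch assuming the chart deployed Vault into a vault namespace with the usual vault-0 pod name:

kubectl exec -n vault vault-0 -- vault operator init
# repeat with three of the five unseal keys printed above (the default threshold)
kubectl exec -n vault vault-0 -- vault operator unseal <unseal-key>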

After you do this, its state in argocd will go Healthy.

Deploying the vets-app

Now you can move over to the Testing page for details on how to deploy the CI pipelines and the vets apps to the namespaces.