Testing

Deploying vets into k8s

# Apply the raw manifest which defines the argocd apps
kubectl apply -n argocd -f deploy-descriptors/vets/argocd.yaml

# Check it comes up (known race condition on the db, needs fixing; delete the pod if necessary)
kubectl get pods -n dev-vets

This creates the deployment pipelines for two environments of the vets app in the cluster, both of which deploy using the Helm chart from the same directory.

In the current configuration, the dev environment uses the develop branch and the production environment uses the master branch as its source for deployment charts.
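
To confirm ArgoCD has picked the applications up, something like the following should work (the application name dev-vets is an assumption based on the environment naming above):

# List the ArgoCD Application resources the manifest created
kubectl get applications -n argocd

# Check sync and health status of the dev app (name assumed to be dev-vets)
kubectl describe application dev-vets -n argocd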

Deploying CI pipelines

Here is the runbook for getting the CI pipeline into the cluster:

# Deploy the templates, these are used by workflows.yaml
kubectl apply -n argo-workflows -f build-descriptors/vets/workflow-templates.yaml

# Deploy the actual 'vets-ci' workflow to argo
kubectl apply -n argo-workflows -f build-descriptors/vets/workflows.yaml

# Deploy the CI events associated with the 'vets-app'
# Note the application events are not deployed here, they are in the main app chart.
kubectl apply -n argo-workflows -f build-descriptors/vets/events.yaml

If you go into Argo Workflows you should be able to see the pipeline running.

Workflow.png (screenshot of the vets-ci pipeline in Argo Workflows)
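
If you prefer the terminal to the UI, the argo CLI (assuming it is installed and pointing at the cluster) shows the same information:

# List workflows in the namespace
argo list -n argo-workflows

# Inspect the latest workflow and follow its logs
argo get @latest -n argo-workflows
argo logs @latest -n argo-workflows --follow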

Structure of the CI pipeline

As you can see from the screenshot above, the pipeline defines a few steps:

Branch environments

This is a feature that is still being developed: the idea is that new branches are built and deployed into k8s as part of the CI pipeline.

Django tests (local)

Pretty simplistic: there is a testing folder within the vets Django application which houses the tests.

I wrote VetsTestRunner, which loads test data from VetsTestData once at the start of the run, giving the tests a bit of data to get off the ground. It is explicitly defined in settings.py as the test runner for this project.
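
Once the runner is wired up in settings.py, you can still target a subset of tests from the command line in the usual Django way (the vets app label and testing module name below are assumptions based on the folder layout described above):

# Run only the vets app's tests, or just the testing package within it
python application/manage.py test vets --timing -v 2
python application/manage.py test vets.testing -v 2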

Env vars for testing

export DJANGO_SECRET_KEY="<<some long string>>"
export DJANGO_DATABASE="local"
export DJANGO_CACHE="local"
export DJANGO_DEBUG="True" 
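
Locally, DJANGO_SECRET_KEY just needs to be any long random string; Django ships a helper that can generate one:

# Generate a throwaway secret key for local testing
export DJANGO_SECRET_KEY="$(python -c 'from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())')"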

Running the tests

Just a call to:

# Grab the deps
pip install -r application/requirements.txt

# Make the initial db migration file and apply it to local DB.
python application/manage.py makemigrations
python application/manage.py migrate

# Time the big parts of the run (db creation) and set verbosity to 2.
# You can also keep the db for later runs if you wish (unreliable at best for me though!)
python application/manage.py test --timing -v 2 --keepdb

Running local server

Make a superuser in the db and fire up the dev server:

python application/manage.py createsuperuser
python application/manage.py runserver
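
Once the dev server is up (it binds to 127.0.0.1:8000 by default), a quick smoke check from another shell:

# Should return an HTTP response from the dev server
curl -i http://127.0.0.1:8000/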

Database interactions / manual validation

python application/manage.py dbshell
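
dbshell drops you into the client for whatever database settings.py points at, so you can also pipe one-off queries through it for quick checks (the table name below is a guess at Django's default <app>_<model> naming):

# Quick sanity check that data is present (table name is a guess)
echo "SELECT COUNT(*) FROM vets_vet;" | python application/manage.py dbshell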

Testing vets in the cluster

# Create superuser in dev environment
kubectl exec -it -n dev-vets \
  $(kubectl get pods -n dev-vets | grep -v "NAME" | grep -v "database" | head -1 | awk '{print $1}') \
  -- python3 manage.py createsuperuser --username admin --email [email protected]
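
To poke at the dev deployment from a browser without going through an ingress, a port-forward is usually enough (the service name and port are assumptions; check kubectl get svc -n dev-vets):

# Forward the dev service to localhost (service name and port are assumptions)
kubectl port-forward -n dev-vets svc/dev-vets 8000:8000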

Fast redeploys of dev deployment descriptors

cd deploy-descriptors/vets/chart
helm upgrade -i -n dev-vets dev-vets . -f values-dev.yaml
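
After the upgrade, it can be worth watching the rollout complete (the deployment name is an assumption; check kubectl get deploy -n dev-vets):

# Wait for the new pods to become ready (deployment name is an assumption)
kubectl rollout status -n dev-vets deployment/dev-vets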