Ephemeral Cluster Deployment - cloudigrade/cloudigrade GitHub Wiki
Before proceeding, please read Getting Started with Ephemeral Environments (EE)s. It's not very long, and it explains important information about how the ephemeral namespaces work. You must complete all setup instructions in that document to deploy to the ephemeral cluster.
Initial setup
Most of the required first-time setup steps are documented in Getting Started with Ephemeral Environments (EE)s. Complete all instructions in that document before proceeding here.
AppSRE requirements
Your app-interface user must include the ephemeral-users role. Follow the instructions at Getting Started with Ephemeral Environments (EE)s. See Assigning user roles for more background.
- Example: add ephemeral-users role to user
Your app-interface user must include a public GPG key so AppSRE can securely send you credentials. Follow the instructions at Generating a GPG key and Adding your public GPG key.
- Example: add public gpg key to user
After your public GPG key has been accepted, you must request credentials to query app-interface, which will be used later by bonfire. Follow the instructions at Querying the App-interface to request the app-interface-production-dev-access credentials.
Shortly after your credentials request merges, you should receive an email from "App SRE team automation" that includes your new credentials encrypted with your public key. Save the encrypted message (including its ----- boundaries), and decrypt it via gpg -d.
Create a file at ~/.config/bonfire/env containing the following lines. Set APP_INTERFACE_PASSWORD's value to the password from the file you just decrypted:
APP_INTERFACE_BASE_URL="app-interface.apps.appsrep05ue1.zqxk.p1.openshiftapps.com"
APP_INTERFACE_BASE_URL_REQUIRES_AUTH="app-interface.devshift.net"
APP_INTERFACE_USERNAME="app-interface-prod-dev-access"
APP_INTERFACE_PASSWORD=
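One way to create that file is with a heredoc; this is just a sketch, and the password value is a placeholder for the one you decrypted:

```shell
# Write the bonfire env file shown above. The APP_INTERFACE_PASSWORD value is
# a placeholder; replace it with the password decrypted from the AppSRE email.
mkdir -p ~/.config/bonfire
cat > ~/.config/bonfire/env <<'EOF'
APP_INTERFACE_BASE_URL="app-interface.apps.appsrep05ue1.zqxk.p1.openshiftapps.com"
APP_INTERFACE_BASE_URL_REQUIRES_AUTH="app-interface.devshift.net"
APP_INTERFACE_USERNAME="app-interface-prod-dev-access"
APP_INTERFACE_PASSWORD="paste-your-decrypted-password-here"
EOF
```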
OpenShift ephemeral cluster token
The ephemeral cluster's OpenShift console is: https://console-openshift-console.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/
Request an auth token so you can access the ephemeral cluster from the command-line:
- Go to: https://oauth-openshift.apps.c-rh-c-eph.8p0c.p1.openshiftapps.com/oauth/token/request
- Log in using your GitHub account
- Click Display Token
- Copy and run the provided oc login command in your local shell
The token expires periodically, and you will need to repeat these steps when that happens.
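A quick way to check whether your token is still valid; this is just a sketch, relying on the fact that `oc whoami` fails once the login has expired:

```shell
# If the token is still valid, `oc whoami` prints your username; otherwise it
# (or a missing oc binary) fails and we fall through to the reminder.
if oc whoami >/dev/null 2>&1; then
  echo "Logged in as $(oc whoami)"
else
  echo "Not logged in; request a new token from the URL above."
fi
```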
Activate poetry shell
The example commands in this document require the bonfire and ansible dependencies to be installed and present in your current shell. cloudigrade's poetry virtual environment provides those requirements for you:
cd ~/projects/cloudigrade/
git pull
poetry install
poetry shell
Reserve an ephemeral namespace
See the bonfire cheat sheet if you need a crash course in bonfire commands.
Reserve a namespace for 8 hours with the appropriate managed Kafka configuration, for example:
NAMESPACE=$(bonfire namespace reserve --duration 8h --pool real-managed-kafka)
oc project ${NAMESPACE}
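A few related reservation commands are worth knowing; they are sketched here as comments because they talk to the live cluster, and you should check `bonfire namespace --help` for the exact flags on your version:

```shell
# Show your current reservations:
#   bonfire namespace list --mine
# Extend a reservation before it expires:
#   bonfire namespace extend "${NAMESPACE}" --duration 4h
# Release a reservation early when you are done with it:
#   bonfire namespace release "${NAMESPACE}"
echo "Reserved namespace: ${NAMESPACE:-none yet}"
```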
Deploy cloudigrade
Note: Deploying via bonfire (or via ansible-playbook, which calls bonfire) requires Red Hat VPN access. Although some other bonfire commands may work outside the VPN, bonfire deploy specifically requires access to Red Hat CEE's internal GitLab host.
Set appropriate values to the following environment variables:
export CLOUDIGRADE_ENVIRONMENT="${USER}-ephemeral"
export QUAY_USER="your-quay-user-name"
export AWS_ACCESS_KEY_ID="my-aws-access-key-id"
export AWS_SECRET_ACCESS_KEY="my-aws-secret-access-key-id"
export AZURE_CLIENT_ID="my-azure-client-id"
export AZURE_CLIENT_SECRET="my-azure-client-secret"
export AZURE_SP_OBJECT_ID="my-azure-sp-object-id"
export AZURE_SUBSCRIPTION_ID="my-azure-subscription-id"
export AZURE_TENANT_ID="my-azure-tenant-id"
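Before running the playbook, you can sanity-check that none of these variables are empty. This is a small convenience sketch, not part of the official workflow:

```shell
# Report any deployment variables that are unset or empty before running the
# playbook.
missing=""
for name in CLOUDIGRADE_ENVIRONMENT QUAY_USER \
            AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY \
            AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_SP_OBJECT_ID \
            AZURE_SUBSCRIPTION_ID AZURE_TENANT_ID; do
  eval "value=\"\${${name}:-}\""
  if [ -z "${value}" ]; then
    missing="${missing} ${name}"
  fi
done
if [ -n "${missing}" ]; then
  echo "Missing:${missing}"
else
  echo "All deployment variables are set."
fi
```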
Using latest images
To deploy the latest cloudigrade and postigrade images from quay.io/cloudservices, run the following command:
ansible-playbook \
-e namespace="${NAMESPACE}" \
-e env="${CLOUDIGRADE_ENVIRONMENT}" \
deployment/playbooks/manage-clowder.yml
Running that playbook will deploy cloudigrade to the specified namespace, plus postigrade, a PostgreSQL database, and all other dependencies as defined in clowdapp.yaml, such as:
- cloudigrade
- cloudigrade-api
- cloudigrade-beat
- cloudigrade-listener
- cloudigrade-worker
- cloudigrade-db
- postigrade
- postigrade-svc
- sources
- sources-api-svc
- sources-api-redis
- sources-api-sidekiq
- sources-api-db
- sources-monitor
- sources-superkey-worker
- kafka
- rbac
Using other image tags
You may also deploy other tags of the cloudigrade and postigrade images from quay.io/cloudservices by optionally setting CLOUDIGRADE_IMAGE_TAG and/or POSTIGRADE_IMAGE_TAG like:
CLOUDIGRADE_IMAGE_TAG=pr-1247-0bff6d5 POSTIGRADE_IMAGE_TAG=bf02247 ansible-playbook \
-e namespace="${NAMESPACE}" \
-e env="${CLOUDIGRADE_ENVIRONMENT}" \
deployment/playbooks/manage-clowder.yml
And to confirm the requested images were used:
$ oc get pods -o jsonpath='{.items[?(.status.phase=="Running")].spec.containers[?(.name=="cloudigrade-api")].image}' -l pod=cloudigrade-api
quay.io/cloudservices/cloudigrade:pr-1247-0bff6d5
$ oc get pods -o jsonpath='{.items[?(.status.phase=="Running")].spec.containers[?(.name=="postigrade-svc")].image}' -l pod=postigrade-svc
quay.io/cloudservices/postigrade:bf02247
Deploying from local code
For development purposes, it may be useful to build and deploy local copies of cloudigrade and postigrade to the ephemeral cluster.
- Note your project directories; we will use the ~/projects/cloudigrade and ~/projects/postigrade directories for this example.
- We will build and push the projects to your account on quay.io, referenced below as ${QUAY_USER}:
  - cloudigrade image: quay.io/${QUAY_USER}/cloudigrade
  - postigrade image: quay.io/${QUAY_USER}/postigrade
If you are using docker instead of podman, you may need to create a symlink because our playbook expects to use the podman binary.
ln -s /usr/local/bin/docker /usr/local/bin/podman
Log in to your quay.io account as ${QUAY_USER} so that the playbook can push images and tags.
podman login quay.io
You next need to add your pull secret to the ephemeral namespace and include it in the ClowdEnvironment for that namespace so Clowder can pull the images from your repo. Fortunately, the playbook does the hard part of applying it for you; all you need to do is download the secret file and have its path ready.
- Log in to your quay.io account
- Using the username button at the top right, navigate to Account Settings
- Click CLI Password: Generate Encrypted Password and enter your password
- Click on Kubernetes Secret
- View and save your ${QUAY_USER}-secret.yml file locally.
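The downloaded file is a standard Kubernetes dockerconfigjson pull secret. A sketch of its shape, with placeholder values for the name and the encoded credentials:

```yaml
# Placeholder sketch of the quay.io-generated pull secret; your file will have
# your username in the name and real base64-encoded credentials.
apiVersion: v1
kind: Secret
metadata:
  name: your-quay-user-pull-secret
data:
  .dockerconfigjson: <base64-encoded docker config containing your encrypted password>
type: kubernetes.io/dockerconfigjson
```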
Deploy cloudigrade and postigrade using new images based on your current local code:
ansible-playbook \
-e push_local_state=present \
-e pull_secret_path="~/Downloads/${QUAY_USER}-secret.yml" \
-e quay_user="${QUAY_USER}" \
-e namespace="${NAMESPACE}" \
-e env="${CLOUDIGRADE_ENVIRONMENT}" \
-e cloudigrade_deployment_host=local \
-e cloudigrade_deployment_repo="~/projects/cloudigrade" \
-e postigrade_deployment_host=local \
-e postigrade_deployment_repo="~/projects/postigrade" \
deployment/playbooks/manage-clowder.yml
The playbook will build and push your images based on the state of your repos, set up the pull secret, and deploy cloudigrade and postigrade.
Keep in mind that you should periodically prune older tagged images, both locally and on quay.io.
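For the local side, podman can drop dangling images in one command (quay.io tags are removed through the quay.io web UI). A sketch that skips gracefully when podman is absent:

```shell
# Remove dangling (untagged) local images; skip gracefully if podman is absent.
# To review what has accumulated first, you could run:
#   podman images quay.io/${QUAY_USER}/cloudigrade
if command -v podman >/dev/null 2>&1; then
  podman image prune -f
else
  echo "podman not found; nothing to prune locally"
fi
```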
Clean up deployment
To clean up your deployment and remove your deployed resources:
ansible-playbook \
-e clowder_state=absent \
-e namespace=${NAMESPACE} \
deployment/playbooks/manage-clowder.yml
Once cloudigrade is cleaned up, you will see only the following pods in the namespace:
oc get pods
NAME READY STATUS RESTARTS AGE
env-ephemeral-17-aa6cc032-connect-5c9dd6f6c4-dsmtt 1/1 Running 0 179m
env-ephemeral-17-aa6cc032-entity-operator-7cc8fbcf9-97v6d 3/3 Running 0 179m
env-ephemeral-17-aa6cc032-kafka-0 1/1 Running 0 3h
env-ephemeral-17-aa6cc032-zookeeper-0 1/1 Running 0 3h
env-ephemeral-17-featureflags-6688b8d6f7-jvw7s 1/1 Running 2 3h
env-ephemeral-17-minio-64bbf75b96-d6nqn 1/1 Running 0 179m
featureflags-db-559b99b675-wnh6w 1/1 Running 1 179m
prometheus-operator-5cfb5694f5-tpk8d 1/1 Running 0 3h1m
After cleaning up, you can re-deploy cloudigrade in the same namespace for further development or testing; otherwise, you should release the namespace.
Generate randomized synthetic customer and activity data
Edit your deployment's ClowdApp to set ENABLE_SYNTHETIC_DATA_REQUEST_HTTP_API to true.
oc get ClowdApp/cloudigrade -o yaml > /tmp/cloudigrade.yaml
vim /tmp/cloudigrade.yaml
# change "false" to "true" for ENABLE_SYNTHETIC_DATA_REQUEST_HTTP_API in the api deployment's env list
oc apply -f /tmp/cloudigrade.yaml
After OpenShift has recreated pods with the updated template, port-forward localhost to one of the cloudigrade-api pods:
CLOUDIGRADE_API_POD_NAME=$(oc get pods -o jsonpath='{.items[*].metadata.name}' -l pod=cloudigrade-api | awk '{print $1}')
oc port-forward pods/"${CLOUDIGRADE_API_POD_NAME}" 8000:8000 > /dev/null 2>&1 &
Use the internal API to request synthetic data to be created:
http localhost:8000/internal/api/cloudigrade/v1/syntheticdatarequests/ cloud_type=aws
If you don't know what options are available for a synthetic data request, you can use http options to get a description of the API. You may want to filter the output to include only fields that are not read-only:
http options localhost:8000/internal/api/cloudigrade/v1/syntheticdatarequests/ | jq '.actions.POST | with_entries(select(.value | (.read_only == false)))'
Wait for the workers to complete the tasks for synthesizing your data. The is_ready field for your request should become true when everything is ready. Periodically check readiness like:
http localhost:8000/internal/api/cloudigrade/v1/syntheticdatarequests/1/ | jq .is_ready
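The readiness check above can be wrapped in a small polling loop. This is a sketch: `poll_until` is a hypothetical helper, and the 15-second interval is arbitrary:

```shell
# Run a command repeatedly until it prints "true"; $1 is the command string,
# $2 is the sleep interval in seconds.
poll_until() {
  until [ "$(sh -c "$1")" = "true" ]; do
    sleep "$2"
  done
}

# Usage against the internal API (assumes the port-forward above is active):
# poll_until 'http localhost:8000/internal/api/cloudigrade/v1/syntheticdatarequests/1/ | jq .is_ready' 15
```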
Each synthetic data request creates a user to own all the related synthetic data, and you may make HTTP requests to localhost:8000 using that user's username as an identity's account_number. See also REST API Example Usage. For example:
IDENTITY_ACCOUNT_NUMBER=$(http localhost:8000/internal/api/cloudigrade/v1/users/$(http localhost:8000/internal/api/cloudigrade/v1/syntheticdatarequests/1/ | jq -r .user)/ | jq -r .username)
IDENTITY=$(echo '{"identity": {"account_number": "'"${IDENTITY_ACCOUNT_NUMBER}"'","user": {"is_org_admin": true}}}' | base64)
http :8000/api/cloudigrade/v2/sysconfig/ X-RH-IDENTITY:"${IDENTITY}"
http :8000/api/cloudigrade/v2/accounts/ X-RH-IDENTITY:"${IDENTITY}"
http :8000/api/cloudigrade/v2/concurrent/ X-RH-IDENTITY:"${IDENTITY}"
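If a request is rejected, it can help to confirm the identity header decodes back to the JSON you intended. A sketch using a made-up account number:

```shell
# Build an identity header and decode it again to verify it round-trips as
# valid JSON. "10001" is a placeholder account number for illustration.
IDENTITY=$(printf '{"identity": {"account_number": "10001", "user": {"is_org_admin": true}}}' | base64)
echo "${IDENTITY}" | base64 -d | jq -r .identity.account_number  # prints: 10001
```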