Deployment and Delivery - i-on-project/integration GitHub Wiki

Docker

i-on Integration comes ready to deploy inside a Docker container. Containerized deployment is used to improve flexibility and reduce inconsistent behavior or incompatibilities caused by differences in the host environments.

Docker images are built from a Dockerfile, which contains the steps necessary to create them. i-on Integration's Dockerfile executes the following steps:

  1. Stage 1: Run the Gradle task extractUberJar, which:
    • Verifies code style with the project linter, ktlint.
    • Runs unit tests.
    • Compiles the source code.
    • Zips the project dependency jars.
  2. Stage 2: Set up the image:
    • Creates a user account to be used by the application.
    • Creates the required directories and gives ownership of them to the new user.
    • Copies the generated files from Stage 1 into the container.

The image's entry point is a single Java command that runs the app with the -XX:+UseContainerSupport JVM flag enabled, which optimizes memory usage for containerized applications.

To reduce the physical size of the produced image, a multi-stage build strategy is employed: the second stage copies only the generated artifacts into the final image, so the source code and build tooling are not carried over.
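A condensed sketch of what such a two-stage Dockerfile might look like (base images, paths, the jar name, and the user name are illustrative, not the project's actual values):

```dockerfile
# Stage 1: build — runs lint, tests, compilation, and dependency packaging
FROM gradle:7-jdk11 AS build
WORKDIR /home/gradle/project
COPY . .
# Hypothetical invocation of the task described above
RUN gradle extractUberJar

# Stage 2: runtime — only the generated artifacts are copied over
FROM eclipse-temurin:11-jre
RUN addgroup --system app && adduser --system --ingroup app app \
    && mkdir -p /opt/app && chown -R app:app /opt/app
USER app
COPY --from=build /home/gradle/project/build/libs/app.jar /opt/app/app.jar
# UseContainerSupport lets the JVM size its heap from the container's limits
ENTRYPOINT ["java", "-XX:+UseContainerSupport", "-jar", "/opt/app/app.jar"]
```

Because the final `FROM` starts from a JRE-only base and copies nothing but the jar, the source tree, Gradle caches, and build tools from Stage 1 never reach the shipped image.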

Deploying locally using Docker Compose

Integration relies on both a working PostgreSQL database and a Git server to submit files to. These can be hard to configure for a first-time setup, and they can also slow down regular development.

To tackle these difficulties and streamline the process of running and testing the application we decided to use Docker Compose which enables us to configure our environment using a simple YAML file and deploy it with a single command.

The configured environment deploys the following containerized images:

  • Integration app built from the local Dockerfile.
  • Postgres database, built from the official Alpine-based image.
  • GitBucket git server, built from the main official image.

Figure: Docker Compose services (app, database, git server).

Docker Compose also allows the configuration of dependencies between containers to enforce a pre-determined deployment order. For this scenario, the app container is configured to depend on the database container being in the started state, while the git server container must pass a health check confirming it is ready to accept HTTP requests.

To mirror the production environment more closely, the git server is expected to contain an initial repository. This is accomplished by defining a fourth Docker Compose service, labeled git-setup, that waits for the git server to report a healthy status and then creates the repository through the git server's Web API. git-setup shuts down after executing this task.
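The dependency ordering, health check, and one-shot setup service described above could be sketched in a Compose file roughly as follows (image tags, ports, credentials, and the repository-creation call are assumptions for illustration):

```yaml
version: "3.8"
services:
  app:
    build: .                      # Integration app, from the local Dockerfile
    depends_on:
      database:
        condition: service_started
      gitserver:
        condition: service_healthy

  database:
    image: postgres:13-alpine     # official Alpine-based image

  gitserver:
    image: gitbucket/gitbucket    # official GitBucket image
    healthcheck:                  # healthy once it answers HTTP requests
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]
      interval: 10s
      retries: 10

  git-setup:                      # one-shot job: creates the initial repository
    image: curlimages/curl
    depends_on:
      gitserver:
        condition: service_healthy
    command: >
      curl -u root:root -X POST
      http://gitserver:8080/api/v3/user/repos
      -d '{"name": "integration-data"}'
```

Because git-setup's `command` exits after the API call returns, the container stops on its own, matching the described behavior.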

Continuous Integration / Continuous Delivery pipelines

Integration’s deployment model relies on three distinct GitHub Actions pipelines. All pipelines test, compile, and build the code, producing a Docker image if successful. Each pipeline has different trigger conditions and specific behavior:

  • The Pull Request pipeline, triggered whenever a Pull Request is opened or updated, simply tests and builds the code.
  • The Staging Pipeline runs when a push is made to the main branch and deploys the application into the Staging environment (on Heroku).
  • The Production Pipeline builds and deploys the application image into the GitHub Container Registry.

Pipelines are configured via YAML files that are stored and versioned alongside the application’s source code under the /.github/workflows/ directory on the main Git repository.
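The trigger conditions described above map onto the `on:` section of each workflow file. A hedged sketch of the three triggers follows; the file names and branch names are assumptions, and the production trigger in particular is only one plausible choice:

```yaml
# .github/workflows/pull-request.yml — test and build only
on:
  pull_request:
    branches: [main]

# .github/workflows/staging.yml — deploy to Staging on Heroku
on:
  push:
    branches: [main]

# .github/workflows/production.yml — publish the image to GitHub Container Registry
on:
  release:
    types: [published]
```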

Deploying to Heroku

The deployment to the staging environment on Heroku is done through a GitHub Actions workflow, triggered whenever a commit is pushed to the main/master branch, which happens when a successful Pull Request is merged. The workflow uses the Heroku Deploy GitHub Action to build and deploy the solution in a Docker container.

Data is persisted using a Heroku Postgres resource. The DATABASE_URL config variable, obtained through the Heroku CLI, contains, despite its name, the host, user, and password needed to access the Postgres instance. These credentials can change under certain circumstances, such as hardware failures, security issues, or automated failover events, so their values cannot be stored as GitHub secrets.

Because the Postgres credentials are ephemeral, their values must be retrieved through the Heroku CLI and passed to the container at build time. The DATABASE_URL is parsed with the split-by Action, a GitHub custom Action for splitting strings, and the parsed values are set as environment variables in the Docker container.
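The effect of that split can be illustrated with plain shell parameter expansion; the URL below is a made-up example, and the real pipeline uses the split-by Action rather than this snippet:

```shell
# Hypothetical Heroku-style URL: postgres://user:password@host:port/dbname
DATABASE_URL="postgres://alice:s3cret@db.example.com:5432/integration"

rest="${DATABASE_URL#postgres://}"   # user:password@host:port/dbname
creds="${rest%%@*}"                  # user:password
hostpart="${rest#*@}"                # host:port/dbname

DB_USER="${creds%%:*}"
DB_PASSWORD="${creds#*:}"
DB_HOST="${hostpart%%:*}"
DB_PORT="${hostpart#*:}"; DB_PORT="${DB_PORT%%/*}"
DB_NAME="${hostpart##*/}"

echo "$DB_USER $DB_HOST $DB_PORT $DB_NAME"
```

Each derived variable can then be injected into the container as a build argument or environment variable.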

The use of the Heroku CLI through GitHub Actions requires the Heroku API key to be set as a GitHub secret in the repository. To ensure that none of these secrets and passwords are visible in pipeline logs, the GitHub Actions add-mask workflow command is used. The database is reset before each deployment to guarantee every Staging version starts from a clean environment. Once deployment is complete, Heroku relaunches the container using the new Docker image.
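A sketch of the workflow step that fetches the credentials and masks them before any later step can log them (the step name, app name, and variable plumbing are illustrative):

```yaml
- name: Fetch and mask database credentials
  env:
    HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
  run: |
    DATABASE_URL=$(heroku config:get DATABASE_URL --app my-staging-app)
    # Hide the value in all subsequent pipeline log output
    echo "::add-mask::$DATABASE_URL"
    # Expose it to later steps in this job
    echo "DATABASE_URL=$DATABASE_URL" >> "$GITHUB_ENV"
```

Masking must happen before the value is echoed or passed anywhere else; `add-mask` only redacts occurrences logged after the command runs.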