Pipeline Controller

The pipeline controller (or pipeline manager) project is the aggregation and execution step of this system. It is a GitHub project so that all changes are versioned, but there is no test or build phase. There are two phases:

  • configure environment
  • deploy

configure environment

[Figure: domain-specific projects workflow chart]

In this phase, artifacts are located and downloaded either by interrogating the sossity configuration file via sossity-version-parser, an open-source 22acacia application, or because they have been hardcoded into the configure_host.sh script. This phase also performs Google Cloud authorization using the GOOGLE_CREDENTIALS environment variable.

The sossity config file has sections for specifying the source location of jar files. Sossity-version-parser reads the config file, extracts those locations, and assembles a config file that configure_host.sh then consumes as the driver for downloading files. This process pushes all configuration into a single config file, which makes it easy to reason about what is running.
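The exact schema of the sossity config and of the intermediate file that sossity-version-parser produces is not documented here. The sketch below is only an illustration of the extraction step: it assumes a JSON config with a hypothetical `artifacts` section listing source URLs, and writes a flat download list that a shell driver such as configure_host.sh could loop over.

```python
# Illustrative sketch only: the "artifacts" and "source" key names are
# assumptions, not the actual sossity config schema.
import json

def extract_artifact_locations(sossity_config_path, download_list_path):
    """Read jar source locations from a sossity-style config and write a
    flat download list for a shell driver such as configure_host.sh."""
    with open(sossity_config_path) as f:
        config = json.load(f)

    locations = [entry["source"] for entry in config.get("artifacts", [])]

    with open(download_list_path, "w") as f:
        f.write("\n".join(locations) + "\n")

    return locations


if __name__ == "__main__":
    extract_artifact_locations("sossity-config.json", "artifact-downloads.txt")
```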

The hardcoded artifacts are all open-source resources of one form or another, the majority of them written and maintained by 22acacia. The non-22acacia artifacts are the main terraform binary and a couple of Google Cloud command-line tools. The 22acacia artifacts are sossity, sossity-version-parser, and a few terraform plugins.

The authorization portion serves two purposes: downloading artifacts from Google Storage, and creating a file containing authorization information in a known location for the terraform run.
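As a rough illustration of the second purpose, the sketch below writes the contents of GOOGLE_CREDENTIALS to a file so that a later step (for example the terraform run) can read it. The target path is a placeholder, not the location the real configure_host.sh uses.

```python
# Sketch only: the credentials file location is a placeholder assumption.
import os

def write_google_credentials(target_path="account.json"):
    """Persist the GOOGLE_CREDENTIALS environment variable to a file in a
    known location so later steps (e.g. the terraform run) can read it."""
    creds = os.environ.get("GOOGLE_CREDENTIALS")
    if not creds:
        raise RuntimeError("GOOGLE_CREDENTIALS is not set")

    with open(target_path, "w") as f:
        f.write(creds)

    # The file holds a service-account key, so restrict its permissions.
    os.chmod(target_path, 0o600)
    return target_path
```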

deploy

[Figure: domain-specific projects workflow chart]

The deploy process for sossity as a system has two phases. First, sossity is executed with the configuration file; it validates the config file and generates a terraform configuration file as output. Second, terraform is called with the generated configuration file. The terraform execution is what creates, modifies, and destroys cloud infrastructure. After the terraform execution is complete, the terraform state is saved to atlas.hashicorp.com. Saving the state file is important because terraform needs the actual ids of resources to be sure it is managing them. Terraform is cautious by nature and will not take ownership of existing resources, so it's important to always save the terraform state. Subsequent deploy runs download the state file first and then begin their terraform execution.
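A minimal sketch of these two phases is shown below. The sossity jar name and its --config flag are assumptions about the CLI, not its documented interface; only `terraform apply` is a known command, and state handling is left to whatever remote configuration is already in place.

```python
# Sketch of the two-phase deploy; sossity's jar name and flags are hypothetical.
import subprocess

def deploy(sossity_config="sossity-config.json", tf_dir="."):
    # Phase 1: sossity validates the config file and generates a terraform
    # configuration file as output.
    subprocess.run(
        ["java", "-jar", "sossity.jar", "--config", sossity_config],
        check=True,
    )

    # Phase 2: terraform creates, modifies, or destroys infrastructure from
    # the generated configuration. Remote state (Atlas) should already be
    # configured so the resulting state file is saved after the run.
    subprocess.run(["terraform", "apply"], check=True, cwd=tf_dir)


if __name__ == "__main__":
    deploy()
```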

Saving the terraform state makes it possible to view and modify the state of the system outside of the pipeline management job. See terraform remote configuration for details on how to do this.
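For example, with the Atlas-era terraform CLI the remote state can be attached to a local working directory and inspected roughly as follows; the organization/environment state name is a placeholder.

```python
# Sketch only: assumes the Atlas-era `terraform remote config` command and a
# placeholder <organization>/<environment> state name.
import subprocess

def inspect_remote_state(state_name="myorg/myenv"):
    """Attach the Atlas-hosted state to a local working directory and show
    the resources terraform is managing."""
    # An Atlas access token is normally supplied via the ATLAS_TOKEN
    # environment variable.
    subprocess.run(
        ["terraform", "remote", "config",
         "-backend=atlas",
         "-backend-config=name=" + state_name],
        check=True,
    )
    # Print the current state.
    subprocess.run(["terraform", "show"], check=True)
```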