Helm Charts

VRO uses Helm charts to specify its deployments to LHDI. The charts are located in the helm folder.

GitHub Actions workflows run helm to deploy to the various LHDI environments -- see CI CD Workflows.
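For orientation, a deploy step in one of those workflows boils down to a helm upgrade against the chart in the helm folder. The sketch below is illustrative only -- the release name, namespace, and flags are assumptions, not the values used by the actual workflows.

```yaml
# Hypothetical excerpt of a GitHub Actions deploy job; the real workflows
# under .github/workflows/ use their own names, inputs, and flags.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy Helm chart to the dev environment
        run: |
          helm upgrade --install vro helm \
            --namespace va-abd-rrd-dev \
            --values helm/values.yaml
```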

Configurable Settings

The values.yaml file contains the default settings for the dev environment. The other values-for-*.yaml files override some of those settings for other environments. For example, values-for-prod.yaml overrides the size of the persistent volumes for the prod environment.
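As an illustration of that layering (the keys below are made up; the real settings use different names), an environment-specific file restates only what differs:

```yaml
# values.yaml (dev defaults) -- illustrative keys only
postgres:
  persistentVolume:
    size: 1Gi
---
# values-for-prod.yaml -- restates only the settings that differ in prod;
# everything else falls back to the defaults in values.yaml
postgres:
  persistentVolume:
    size: 10Gi
```

At deploy time the override file is passed to helm after the defaults (e.g. -f values.yaml -f values-for-prod.yaml), so the later file wins.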

Before adding a Helm configuration, read Configuration-settings#guidelines-for-placing-settings.

These values are referenced in the charts and in _helper.tpl, which the charts use to reduce repetition and consolidate common settings in a single location. Because all containers share a common set of configuration settings, changes can be applied to every container consistently and easily.
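As a minimal sketch of that pattern (the template name and keys here are invented, not the ones defined in _helper.tpl), a named template holds the shared settings and each chart includes it:

```yaml
{{/* Hypothetical shared helper; _helper.tpl defines its own set of templates. */}}
{{- define "vro.commonEnv" -}}
- name: ENVIRONMENT
  value: {{ .Values.global.environment | quote }}
{{- end }}
```

A chart template then pulls the block in with {{ include "vro.commonEnv" . }}, so a change made in one place propagates to every container.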

Additionally, each chart has its own values.yaml file, which can be overridden using the setting global.<chartName>.someSetting, where <chartName> corresponds to the name within the corresponding Chart.yaml file. For example, overriding the value of global.rabbitmq-chart.imageTag would change the imageTag value used in the rabbitmq subchart.
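Expressed in a values file, that override could look like the following (the tag value is hypothetical; only the key path comes from the convention described here):

```yaml
# Overrides the imageTag of the subchart whose Chart.yaml declares
# name: rabbitmq-chart
global:
  rabbitmq-chart:
    imageTag: "some-experimental-tag"   # hypothetical value
```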

See the Update deployment GitHub Actions workflow for an example of how settings are overridden. Using this mechanism, sets of containers can be enabled and disabled.
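A hedged sketch of that idea, assuming the charts gate their resources on an enabled flag (the chart names and the flag itself are assumptions, not necessarily the keys the workflow uses):

```yaml
# Hypothetical overrides supplied by the Update deployment workflow,
# e.g. via --set flags or an extra values file.
global:
  rabbitmq-chart:
    enabled: true
  console-chart:
    enabled: false   # templates skip rendering this chart's containers
```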

Persistent Volumes

LHDI offers 2 types of persistent volumes (PVs):

  • EFS
    • pros: allows mounting from any container; any container user can write to it but all files are owned by the same user (as determined by LHDI when the PV is provisioned)
    • cons: unmodifiable folder ownership (Slack)
  • EBS
    • pros: allows changing folder ownership
    • cons: can only be mounted by containers on the same Kubernetes node; only supports ReadWriteOnce, which implies:

      "the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node."

VRO creates 2 PVs for different purposes:

  • tracking: EFS volume mounted by the rabbitmq and app containers to track incoming requests for diagnostics and recovery
  • postgres data (pgdata): EBS volume mounted by the postgres container to retain DB data between redeploys. The postgres container requires that the data folder be owned by the postgres user, hence EFS cannot be used.

The VRO Console mounts these PVs for diagnostics.

We do not have visibility into the PVs themselves -- LHDI manages them. We must specify persistent volume claims (PVCs) in order to get PVs.
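A PVC sketch under assumed names (claim names, sizes, and storage class names are illustrative; the real claims are defined in the helm templates and provisioned by LHDI):

```yaml
# EFS-backed claim (e.g. tracking): mountable from any node or container.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tracking-pvc              # hypothetical name
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc        # assumed EFS storage class
  resources:
    requests:
      storage: 1Gi
---
# EBS-backed claim (e.g. pgdata): single node, ReadWriteOnce only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-pvc                # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2           # assumed EBS storage class
  resources:
    requests:
      storage: 10Gi
```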

Resources

StatefulSet

Since pgdata is an EBS volume, the postgres and console containers must run on the same node as the EBS volume. To satisfy this constraint, a StatefulSet is created for the postgres container and a podAffinity is applied to the console container.
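A minimal podAffinity sketch, assuming the postgres pods carry a label such as app: postgres (the actual label keys and values come from the charts):

```yaml
# Added to the console pod spec so it is scheduled onto the same node
# as postgres, allowing both to mount the EBS-backed pgdata volume.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: postgres              # assumed postgres pod label
        topologyKey: kubernetes.io/hostname
```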

When updating a deployment (using helm upgrade), the StatefulSet causes an error. A workaround is implemented to avoid this error.

Secrets

See Secrets Vault.
