Helm Charts - bcgov/eagle-dev-guides GitHub Wiki
Helm Charts
This document describes the Helm chart implementation for eagle-public, eagle-admin, and eagle-api deployments on OpenShift.
Overview
All Eagle applications (eagle-public, eagle-admin, and eagle-api) use Helm charts for Kubernetes deployment management. Helm provides:
- Version-controlled infrastructure - Chart changes tracked in Git
- Environment-specific configuration - Values files for dev/test/prod
- Declarative deployments - Desired state defined in YAML
- Easy rollbacks - `helm rollback` to previous revisions
Note: eagle-api was migrated from DeploymentConfig to Helm in February 2026 as part of Node.js 22 upgrade and security hardening.
Chart Structure
```
helm/eagle-{public|admin}/
├── Chart.yaml            # Chart metadata (name, version)
├── values.yaml           # Default values
├── values-dev.yaml       # Dev environment overrides
├── values-test.yaml      # Test environment overrides
├── values-prod.yaml      # Prod environment overrides
└── templates/
    ├── _helpers.tpl      # Template helper functions
    ├── deployment.yaml   # Deployment resource
    ├── service.yaml      # Service resource
    ├── route.yaml        # OpenShift Route resource
    └── hpa.yaml          # HorizontalPodAutoscaler (optional)
```
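`Chart.yaml` itself is small. A representative example is shown below; the version numbers and description are illustrative, not the repo's actual metadata:

```yaml
apiVersion: v2
name: eagle-public
description: Helm chart for the eagle-public frontend on OpenShift
type: application
version: 1.0.0        # chart version (illustrative)
appVersion: "dev"     # deployed application version (illustrative)
```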
Values Reference
Common Values (values.yaml)
```yaml
# Application name
nameOverride: ""
fullnameOverride: ""

# Container image configuration
image:
  repository: image-registry.openshift-image-registry.svc:5000/6cdc9e-tools/eagle-public
  tag: dev
  pullPolicy: Always

# Replica count
replicaCount: 2

# Resource limits
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 50m
    memory: 128Mi

# Service configuration
service:
  type: ClusterIP
  port: 8080

# Route configuration (OpenShift)
route:
  enabled: true
  host: ""  # Auto-generated if empty
  path: /   # URL path prefix
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect

# Liveness and readiness probes
probes:
  liveness:
    path: /health
    initialDelaySeconds: 10
    periodSeconds: 10
  readiness:
    path: /health
    initialDelaySeconds: 5
    periodSeconds: 5
```
MongoDB Configuration (eagle-api only)
eagle-api includes MongoDB deployment when enabled:
```yaml
mongodb:
  enabled: false  # Default false for local dev
  image:
    repository: image-registry.openshift-image-registry.svc:5000/6cdc9e-tools/mongodb-44
    tag: latest
  existingSecret: eagle-api-mongodb
  persistence:
    existingClaim: eagle-api-mongodb-data  # Adopts existing PVC
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
```
Health Probes: Use `tcpSocket` (NOT `exec`) to prevent zombie processes:
```yaml
readinessProbe:
  tcpSocket:
    port: 27017
  initialDelaySeconds: 5
```
Environment Overrides
Dev (values-dev.yaml)
```yaml
image:
  tag: dev
replicaCount: 2

# eagle-api: Enable MongoDB in OpenShift
mongodb:
  enabled: true

resources:
  limits:
    cpu: 250m
    memory: 256Mi
  requests:
    cpu: 50m
    memory: 128Mi

route:
  host: eagle-dev.apps.silver.devops.gov.bc.ca
```
Test (values-test.yaml)
```yaml
image:
  tag: test
replicaCount: 2

resources:
  limits:
    cpu: 250m
    memory: 256Mi
  requests:
    cpu: 50m
    memory: 128Mi

route:
  host: eagle-test.apps.silver.devops.gov.bc.ca
```
Prod (values-prod.yaml)
```yaml
image:
  tag: prod
replicaCount: 3

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 250m
    memory: 512Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 6
  targetCPUUtilizationPercentage: 70

route:
  host: projects.eao.gov.bc.ca
```
Common Operations
Install/Upgrade
```bash
# Deploy to dev
helm upgrade --install eagle-public ./helm/eagle-public \
  --namespace 6cdc9e-dev \
  --values ./helm/eagle-public/values-dev.yaml \
  --set image.tag=dev \
  --wait --timeout=5m

# Deploy to test with a specific version
helm upgrade --install eagle-public ./helm/eagle-public \
  --namespace 6cdc9e-test \
  --values ./helm/eagle-public/values-test.yaml \
  --set image.tag=v1.2.3 \
  --wait --timeout=5m
```
Check Status
```bash
# List releases
helm list -n 6cdc9e-dev

# Get release info
helm status eagle-public -n 6cdc9e-dev

# Get deployed values
helm get values eagle-public -n 6cdc9e-dev
```
Rollback
```bash
# List revision history
helm history eagle-public -n 6cdc9e-dev

# Rollback to previous revision
helm rollback eagle-public -n 6cdc9e-dev

# Rollback to specific revision
helm rollback eagle-public 3 -n 6cdc9e-dev
```
Dry Run / Debug
```bash
# Preview what would be deployed
helm upgrade --install eagle-public ./helm/eagle-public \
  --namespace 6cdc9e-dev \
  --values ./helm/eagle-public/values-dev.yaml \
  --dry-run --debug
```
First-Time Helm Adoption
If the environment has existing resources (Service, Route) not managed by Helm, you must adopt them before Helm can manage them.
Symptoms
```
Error: Unable to continue with install: Service "eagle-admin" in namespace
"6cdc9e-dev" exists and cannot be imported into the current release:
invalid ownership metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"
```
Solution
- Label and annotate the existing Service:

```bash
oc annotate service eagle-admin -n 6cdc9e-dev \
  meta.helm.sh/release-name=eagle-admin \
  meta.helm.sh/release-namespace=6cdc9e-dev \
  --overwrite

oc label service eagle-admin -n 6cdc9e-dev \
  app.kubernetes.io/managed-by=Helm \
  app.kubernetes.io/name=eagle-admin \
  app.kubernetes.io/instance=eagle-admin \
  --overwrite
```

- Label and annotate the existing Route:

```bash
oc annotate route eagle-admin -n 6cdc9e-dev \
  meta.helm.sh/release-name=eagle-admin \
  meta.helm.sh/release-namespace=6cdc9e-dev \
  --overwrite

oc label route eagle-admin -n 6cdc9e-dev \
  app.kubernetes.io/managed-by=Helm \
  app.kubernetes.io/name=eagle-admin \
  app.kubernetes.io/instance=eagle-admin \
  --overwrite
```

- Run the Helm install:

```bash
helm upgrade --install eagle-admin ./helm/eagle-admin \
  --namespace 6cdc9e-dev \
  --values ./helm/eagle-admin/values-dev.yaml \
  --set image.tag=dev \
  --wait --timeout=5m
```

- Delete the legacy DeploymentConfig (after verifying the Helm deployment works):

```bash
oc delete deploymentconfig eagle-admin -n 6cdc9e-dev
```
Template Details
Deployment Template
Key features of the deployment template:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "eagle.fullname" . }}
  labels:
    {{- include "eagle.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "eagle.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "eagle.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          livenessProbe:
            httpGet:
              path: {{ .Values.probes.liveness.path }}
              port: {{ .Values.service.port }}
          readinessProbe:
            httpGet:
              path: {{ .Values.probes.readiness.path }}
              port: {{ .Values.service.port }}
```
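The `eagle.fullname`, `eagle.labels`, and `eagle.selectorLabels` helpers referenced above live in `templates/_helpers.tpl`. A minimal sketch of what such helpers typically look like, following the standard `helm create` scaffold rather than the chart's exact definitions:

```
{{/* Fully qualified app name, honouring fullnameOverride (simplified sketch) */}}
{{- define "eagle.fullname" -}}
{{- default .Chart.Name .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- end }}

{{/* Labels applied to every resource */}}
{{- define "eagle.labels" -}}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{ include "eagle.selectorLabels" . }}
{{- end }}

{{/* Labels used for pod selection */}}
{{- define "eagle.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```

Note that `app.kubernetes.io/managed-by: Helm` being rendered here is exactly the label the first-time adoption procedure above has to backfill on pre-existing resources.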
Route Template (OpenShift)
```yaml
{{- if .Values.route.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: {{ include "eagle.fullname" . }}
  labels:
    {{- include "eagle.labels" . | nindent 4 }}
spec:
  {{- if .Values.route.host }}
  host: {{ .Values.route.host }}
  {{- end }}
  path: {{ .Values.route.path }}
  to:
    kind: Service
    name: {{ include "eagle.fullname" . }}
    weight: 100
  port:
    targetPort: {{ .Values.service.port }}-tcp
  tls:
    termination: {{ .Values.route.tls.termination }}
    insecureEdgeTerminationPolicy: {{ .Values.route.tls.insecureEdgeTerminationPolicy }}
{{- end }}
```
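HPA Template (optional)

The chart structure lists an optional `hpa.yaml`. Driven by the prod `autoscaling` values shown earlier, it would look roughly like the following; this is a sketch consistent with those values, not the chart's exact template:

```yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "eagle.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "eagle.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
```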
Differences Between Repos
| Aspect | eagle-public | eagle-admin | eagle-api |
|---|---|---|---|
| Port | 8080 (nginx) | 8080 (nginx) | 3000 (Node.js) |
| Route path | `/` | `/admin/` | `/api` |
| Route service | `rproxy` (via nginx) | `rproxy` (via nginx) | Direct to eagle-api |
| HPA | Yes (in prod) | Optional | Optional |
| Node version | 24.x | 24.x | 22.x |
| Secrets | None (runtime config) | None (runtime config) | MongoDB, Minio, CHES, GETOK |
eagle-api Specific Configuration
Secret Management: eagle-api requires multiple secrets for database and external services:
```yaml
env:
  - name: MONGODB_USERNAME
    valueFrom:
      secretKeyRef:
        name: eagle-api-mongodb
        key: MONGODB_USERNAME
  - name: MINIO_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: eagle-api-minio
        key: MINIO_ACCESS_KEY
  # ... additional secrets
```
Database Migrations
eagle-api uses `migrate-mongo` v14 for schema changes. Migration files live in `migrations/` and are tracked in the `changelog` collection in MongoDB.
Runner: `run_migration.js` handles connection setup and runs all pending migrations. It uses an ESM dynamic import (`await import('migrate-mongo')`) because `migrate-mongo` v14 is ESM-only.
Env vars: The migration runner uses the same env vars as the API deployment — `MONGODB_USERNAME`, `MONGODB_PASSWORD`, `MONGODB_SERVICE_HOST`, `MONGODB_DATABASE`, `MONGODB_AUTHSOURCE` (defaults to `admin`).
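For reference, those env vars combine into a standard MongoDB connection string. The sketch below shows the shape of that URI; `buildMongoUri` is an illustrative helper, not a function exported by `run_migration.js`:

```javascript
// Illustrative: assemble a MongoDB URI from the env vars the migration
// runner documents. buildMongoUri is a hypothetical helper name.
function buildMongoUri(env) {
  const user = encodeURIComponent(env.MONGODB_USERNAME);
  const pass = encodeURIComponent(env.MONGODB_PASSWORD);
  const host = env.MONGODB_SERVICE_HOST;
  const db = env.MONGODB_DATABASE;
  const authSource = env.MONGODB_AUTHSOURCE || 'admin'; // documented default
  return `mongodb://${user}:${pass}@${host}/${db}?authSource=${authSource}`;
}

// Example with made-up credentials:
console.log(buildMongoUri({
  MONGODB_USERNAME: 'api',
  MONGODB_PASSWORD: 's3cret',
  MONGODB_SERVICE_HOST: 'eagle-api-mongodb',
  MONGODB_DATABASE: 'epic',
}));
// → mongodb://api:s3cret@eagle-api-mongodb/epic?authSource=admin
```

Escaping the username and password with `encodeURIComponent` matters if a generated password contains characters like `@` or `/`.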
Running Migrations
Option 1: Helm pre-upgrade hook (recommended for deployments)
The migration job is a Helm pre-upgrade hook that runs before new pods roll out. Disabled by default — enable it when deploying a release with new migrations:
```bash
# Dev
helm upgrade --install eagle-api ./helm/eagle-api \
  --namespace 6cdc9e-dev \
  --values ./helm/eagle-api/values-dev.yaml \
  --set image.tag=dev \
  --set migrations.enabled=true \
  --set migrations.image.tag=dev \
  --wait --timeout=10m

# Test
helm upgrade --install eagle-api ./helm/eagle-api \
  --namespace 6cdc9e-test \
  --values ./helm/eagle-api/values-test.yaml \
  --set image.tag=test \
  --set migrations.enabled=true \
  --set migrations.image.tag=test \
  --wait --timeout=10m

# Prod
helm upgrade --install eagle-api ./helm/eagle-api \
  --namespace 6cdc9e-prod \
  --values ./helm/eagle-api/values-prod.yaml \
  --set image.tag=prod \
  --set migrations.enabled=true \
  --set migrations.image.tag=prod \
  --wait --timeout=10m
```
The job auto-deletes on success (`hook-delete-policy: hook-succeeded`). If it fails, pods stay in Error state and the upgrade is blocked — see Troubleshooting below.
Important: Always set `migrations.image.tag` to match `image.tag`. The default is `latest`, which may not have the current code.
Option 2: oc exec on the API pod (quick one-off runs)
This is the simplest approach — the running API pod already has MongoDB network access and credentials:
```bash
# Run pending migrations
oc exec -n 6cdc9e-prod deploy/eagle-api -- node run_migration.js

# Replace 6cdc9e-prod with 6cdc9e-dev or 6cdc9e-test for other environments
```
Option 3: Locally against a local MongoDB
No credentials are needed when running against a local MongoDB (started with `yarn db:up`):

```bash
cd eagle-api
node run_migration.js
```
Checking Applied Migrations
```bash
# From the API pod — check the changelog collection.
# The script is wrapped in an async IIFE because `node -e` evaluates
# CommonJS, where top-level await is a syntax error.
oc exec -n 6cdc9e-prod deploy/eagle-api -- node -e "
(async () => {
  const {MongoClient} = require('mongodb');
  const u = process.env.MONGODB_USERNAME, p = process.env.MONGODB_PASSWORD;
  const h = process.env.MONGODB_SERVICE_HOST, d = process.env.MONGODB_DATABASE;
  const a = process.env.MONGODB_AUTHSOURCE || 'admin';
  const c = await MongoClient.connect('mongodb://'+u+':'+p+'@'+h+'/'+d+'?authSource='+a);
  const rows = await c.db(d).collection('changelog').find().sort({appliedAt: -1}).limit(5).toArray();
  rows.forEach(r => console.log(r.appliedAt.toISOString().slice(0, 19), r.fileName));
  await c.close();
})();
"
```
Writing a New Migration
Name migrations with a `YYYYMMDDHHMMSS-` timestamp prefix. The `up()` and `down()` functions receive a connected `db` (a MongoDB `Db` instance) — do not open or close connections manually.
```javascript
'use strict';

exports.up = async function (db) {
  await db.collection('epic').createIndex(
    { _schemaName: 1, myField: 1 },
    { name: 'idx_my_new_index', collation: { locale: 'en', strength: 2 } }
  );
};

exports.down = async function (db) {
  await db.collection('epic').dropIndex('idx_my_new_index').catch(() => {});
};
```
Remember: All indexes must use `collation: { locale: 'en', strength: 2 }` — see API Architecture.
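A quick way to generate the required timestamp prefix for a new migration filename; this is an illustrative snippet, not part of the repo's tooling:

```javascript
// Produce a YYYYMMDDHHMMSS- migration filename prefix from the current UTC time.
function migrationPrefix(date = new Date()) {
  // toISOString() gives "2026-02-10T12:34:56.789Z"; strip separators and
  // keep the first 14 digits (year through seconds).
  return date.toISOString().replace(/[-T:]/g, '').slice(0, 14) + '-';
}

console.log(migrationPrefix() + 'add-my-new-index.js');
```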
Migration Job Architecture
The Helm migration job (`helm/eagle-api/templates/migration-job.yaml`):
- Hook type: `pre-upgrade` — runs before the new deployment rolls out
- Image: Same as the API (`image-registry.openshift-image-registry.svc:5000/6cdc9e-tools/eagle-api:{tag}`)
- Command: `node run_migration.js`
- Pod label: `role: api-eagle-epic` — required by the `eagle-epic-allow-services-to-database` NetworkPolicy to reach MongoDB port 27017
- Env vars: Inherits all config via `envFrom: configMapRef` (same ConfigMap as the API deployment) plus MongoDB credentials from the secret
- Cleanup: `hook-delete-policy: before-hook-creation,hook-succeeded` — deletes the previous job on a new run, deletes a successful job immediately
- Retries: `backoffLimit: 3` — retries up to 3 times on failure
- TTL: `ttlSecondsAfterFinished: 300` — Kubernetes garbage-collects the job after 5 minutes
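Pulling those settings together, the job template is shaped roughly as follows. This is a sketch consistent with the bullets above, not the chart's exact file; in particular the ConfigMap/secret names and the `.Values.migrations.*` paths are assumptions inferred from the `--set migrations.image.tag=...` usage shown earlier:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "eagle.fullname" . }}-migration
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 3
  ttlSecondsAfterFinished: 300
  template:
    metadata:
      labels:
        role: api-eagle-epic   # required by the NetworkPolicy to reach MongoDB
    spec:
      restartPolicy: Never
      containers:
        - name: migration
          image: "{{ .Values.migrations.image.repository }}:{{ .Values.migrations.image.tag }}"
          command: ["node", "run_migration.js"]
          envFrom:
            - configMapRef:
                name: eagle-api        # same ConfigMap as the API (name assumed)
            - secretRef:
                name: eagle-api-mongodb  # MongoDB credentials (name assumed)
```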
Troubleshooting Migrations
| Problem | Cause | Fix |
|---|---|---|
| `Server selection timed out` | Migration pod missing `role: api-eagle-epic` label | Ensure the chart has the correct label; or use `oc exec deploy/eagle-api -- node run_migration.js` |
| `Cannot find module 'run_migration.js'` | Image tag doesn't have the file | Set `--set migrations.image.tag=dev` (or `test`/`prod`) to match the deployed image |
| `Authentication failed` | Wrong env vars or missing `AUTHSOURCE` | Verify the secret matches; `run_migration.js` defaults `authSource` to `admin` |
| Helm stuck in `pending-upgrade` | Failed hook blocks further upgrades | `oc delete job eagle-api-migration-{revision} -n 6cdc9e-{env}`, then `helm rollback eagle-api {last-good-revision} -n 6cdc9e-{env}` |
| Migration already applied | Listed in `changelog` collection | This is normal — `run_migration.js` skips already-applied migrations |
| `Malformed Response[UNKNOWN COMMAND]` (ClamAV, not migration) | Using deprecated `clamav.js` library | Use `clamscan` v2.4.0+, which supports the INSTREAM protocol |
Key Dependencies:
- `axios` ^1.8.2 - HTTP client for document streaming
- `sharp` ~0.33.5 - Image processing (thumbnail generation)
- `jwks-rsa` ^3.2.2 - Keycloak JWT validation
- `mongodb` ^4.17.2 - Database driver
- `mongoose` ~6.13.3 - MongoDB ODM
Troubleshooting
Pods Not Starting
```bash
# Check pod status
oc get pods -n 6cdc9e-dev -l app.kubernetes.io/name=eagle-public

# Check pod logs
oc logs -n 6cdc9e-dev deployment/eagle-public

# Describe pod for events
oc describe pod -n 6cdc9e-dev -l app.kubernetes.io/name=eagle-public
```
Image Pull Errors
```bash
# Verify image exists
oc get imagestreamtag -n 6cdc9e-tools eagle-public:dev

# Check image pull secret
oc get secrets -n 6cdc9e-dev | grep pull
```
Resource Quota Issues
```bash
# Check namespace quotas
oc describe quota -n 6cdc9e-dev

# Check limit ranges
oc describe limitrange -n 6cdc9e-dev
```
Related Documentation
- Deployment Pipeline - CI/CD workflow details
- Rollback Procedures - How to rollback deployments
- Architecture Overview - System architecture