Docker Development Environment - hackforla/tdm-calculator GitHub Wiki

Docker Development Environment

  1. Overview
  2. Setting up the development environment
    1. Copy development .env file
      1. For external developers
      2. For TDM team members
    2. Build the docker images
    3. Run the development environment
    4. How to access the development environment
      1. Frontend
      2. Backend
      3. Sendgrid mock
      4. Database
    5. The npm run env command
      1. Environments
      2. Actions
  3. Explanations
    1. compose.yml file
      1. Services
        1. Client service
        2. API service (server)
        3. Database migration service
        4. Database initialization service
        5. Database service
        6. Sendgrid mock service
    2. Configurations
      1. Client settings
      2. Root .env
      3. Client settings overrides: env/docker/vite
      4. External URLs overrides
      5. Server settings overrides
      6. Database settings overrides
      7. Mock sendgrid settings overrides
      8. Host database settings overrides
      9. Host mock sendgrid settings overrides
    3. Scripts
    4. package.json changes
  4. Helpful tools/commands
  5. Resources
  6. Decisions made

Overview

For development purposes, you may want to run the web api server as a docker container with NODE_ENV="development". This allows you to modify node/express code and have the container watch for changes to files and automatically restart the express server using nodemon. This gives you a development experience similar to what you would have if you just ran express on your native machine.

The main advantage of doing development with docker hosting the express server is that the environment will exactly replicate the production environment as far as node and npm versions and versions of all the npm packages.

This uses the compose.yml docker compose script, which, in turn, uses the Dockerfile.dev build instructions to build an image that is suitable for running during development. A production build uses a slimmed-down image defined in Dockerfile and explained in the Deployment section.
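
A development image of this kind typically looks like the sketch below. This is illustrative only (the repo's actual Dockerfile.dev is authoritative), though the node:lts-alpine base and the nodemon entry point are confirmed elsewhere on this page:

```dockerfile
# Illustrative Dockerfile.dev sketch: full dev dependencies plus nodemon.
FROM node:lts-alpine
WORKDIR /usr/app
# install dependencies first so they cache as their own layer
COPY package*.json ./
RUN npm ci
COPY . .
# nodemon watches the bind-mounted source and restarts express on changes
CMD ["npx", "nodemon"]
```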

Setting up the development environment

Copy development .env file

For external developers

Create a blank server/.env file.

touch server/.env

For TDM team members

Use the dotenv file from gDrive in place of .env.example

Move the dotenv file into the server/ directory

# Rename to .env
mv dotenv server/.env

Build the docker images

# build the docker images: api and client
npm run env -- all build

Run the development environment

Choose one of the following setups.

---
config:
  quadrantChart:
    chartWidth: 400
    chartHeight: 400
  themeVariables:
    quadrant1TextFill: "ff0000"
---
quadrantChart
    title Development Environment Choices
    x-axis app on host --> app in docker
    y-axis shared DB --> DB in docker
    quadrant-1 all - All in docker
    quadrant-2 local-db - Local DB
    quadrant-3 shared-db - Shared DB
    quadrant-4 app - App in docker
[shared-db] Running the app on host + connecting to the shared database (current setup)
  • This is the current way to run the development environment.
  • Be sure to run npm install in the server/ directory and npm install --legacy-peer-deps in the client/ directory before running this command.
# start the app locally
npm run env -- shared-db

# app is running at http://localhost:3000

# do development work

# stop the app locally
ctrl-c
[app] Running the app in docker + connecting to the shared database (containerized app)
  • This might be a good replacement for the current setup.
# start the development environment
npm run env -- app up -d

# app is running at http://localhost:3001

# do development work

# stop the development environment
npm run env -- app down
[all] Running everything in docker
# start the development environment
npm run env -- all up -d

# app is running at http://localhost:3001
# sendgrid mock service is at http://localhost:7001

# do development work

# stop the development environment
npm run env -- all down
# add -v flag to remove the volumes: docker compose down -v
[local-db] Running the database in docker + running the app on host
# start the database and sendgrid mock service in docker
npm run env -- local-db up -d

# app is running at http://localhost:3000
# sendgrid mock service is at http://localhost:7001

# do development work

# stop the app locally
ctrl-c

# stop the database and sendgrid mock service
npm run env -- local-db down

How to access the development environment

From the host machine, we can access the services in the Docker containers via HTTP ports.

Frontend

  • The frontend running on the host is exposed at http://localhost:3000.
  • The frontend in the Docker container is exposed at http://localhost:3001.

Backend

  • The backend on the host is exposed at http://localhost:5001. We avoid port 5000 because macOS uses it (along with port 7000) for AirPlay.
  • The backend in the Docker container is not exposed by default, but can be exposed at http://localhost:5002 by uncommenting it in the compose file.

Sendgrid mock

  • The mock sendgrid service is exposed at http://localhost:7001. We avoid port 7000 because macOS uses it (along with port 5000) for AirPlay.
  • Choose the html view to see the link being sent.

Database

  • The shared Azure development database is configured in server/.env.
  • The database in the Docker container is exposed on localhost port 1434 (a SQL/TDS endpoint, not HTTP).
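
From a shell on the host, the services can be probed like this. This is a sketch assuming the default ports described above and that curl is installed; the database check additionally assumes sqlcmd is available:

```shell
#!/bin/sh
# Endpoints as published on the host with the default port mappings.
FRONTEND_URL="http://localhost:3001"                  # client container
BACKEND_URL="http://localhost:5001/api/calculations"  # api running on the host
SENDGRID_URL="http://localhost:7001"                  # sendgrid mock UI
DB_ENDPOINT="localhost,1434"                          # SQL Server (TDS, not HTTP)

# report whether each HTTP service answers
probe() { curl -fsS -o /dev/null "$1" && echo "$1 reachable" || echo "$1 unreachable"; }
probe "$FRONTEND_URL"
probe "$BACKEND_URL"
probe "$SENDGRID_URL"

# the database speaks TDS rather than HTTP, so use sqlcmd instead of curl, e.g.:
# sqlcmd -C -S "$DB_ENDPOINT" -U sa -P 'Dogfood1!' -Q 'SELECT 1'
```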

The npm run env command

The npm run env command is a wrapper around the docker compose command. It is the main entry point for running most configurations of the development environment.

Everything after the -- is passed to the docker compose command.

npm run env -- <environment> <action> [--flags]

Environments

  1. all - Run all services in Docker containers.
  2. app - Run the app in Docker containers and connect to the shared database.
  3. local-db - Run the app locally and the database and sendgrid mock services in Docker containers.
  4. shared-db - Run the app locally and connect to the shared database. This one doesn't really use Docker, but just follows the same command structure.

Actions

See comments in scripts/start-env.sh for more details.

  1. Build - builds the images. Run this once initially, and again every time you change the Dockerfile files or the dependencies.

    npm run env -- all build
    # add --no-cache flag to build the images without using the cache
    # add --progress=plain flag to avoid the progress bar
    # add --pull flag to pull the images
  2. Up - do this to start the development environment

    npm run env -- all up -d
    # remove -d flag to start in foreground mode
  3. Down - do this to stop the docker compose environment

    npm run env -- all down
    # add -v flag to remove the volumes, meaning deleting the database data
    # add --remove-orphans flag to remove containers for services not defined in the compose file
  4. Config - do this to inspect the docker compose environment for debugging

    npm run env -- all config
    # add --environment flag to show environment variables
  5. Logs - do this to inspect the docker compose logs

    npm run env -- all logs --follow

Explanations

compose.yml file

  1. Six (6) services to run the software

    • client
    • api
    • db-migrate
    • db-init
    • db
    • sendgrid
  2. One (1) volume to persist database data

    volumes:
        sqlvolume:
  3. .env files

    ├── env
    │   ├── docker
    │   │   ├── urls.example
    │   │   ├── sendgrid.example
    │   │   ├── node.example
    │   │   ├── vite.example
    │   │   └── db.example
    │   └── host
    │       ├── sendgrid.example
    │       └── db.example
    ├── client
    │   └── .env
    ├── server
    │   ├── .env
    │   └── .env.example
    └── .env.example
    1. The .example files are starting configs that are "enabled" by copying them and removing the .example suffix.
    2. The env/ directory contains config files that override the client and server .env files.
      1. docker/ contains configs used by Docker environments.
      2. host/ contains configs used by the host environment.
    3. The client and server contain their own config files.
    4. The root .env file contains configuration for the compose file.
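
The "enabling" copy step can be sketched as follows. This is illustrative (the real logic lives in scripts/copy-env.sh); the temporary directory is only there to keep the sketch self-contained:

```shell
#!/bin/sh
# Sketch: an override file is "enabled" by copying it minus its .example suffix.
tmp=$(mktemp -d)
mkdir -p "$tmp/env/docker"
printf 'VITE_OPEN_BROWSER=false\n' > "$tmp/env/docker/vite.example"

enable_override() {
  # ${1%.example} strips the trailing ".example" from the path
  cp "$1" "${1%.example}"
}

enable_override "$tmp/env/docker/vite.example"
ls "$tmp/env/docker"   # vite and vite.example now both exist
```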

Services

Client service
client:
  build:
    context: ./client
    dockerfile: Dockerfile.dev
  ports:
    - "$CLIENT_EXPOSED_PORT:$CLIENT_PORT"
  env_file:
    - .env
    - path: ./env/docker/vite
      required: false
  volumes:
    - ./client/:/usr/app
    - /usr/app/node_modules
  depends_on:
    api:
      condition: service_healthy
      restart: true
  healthcheck:
    test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:$CLIENT_PORT || exit 1"]
    interval: 10s
    retries: 10
    start_period: 10s
    timeout: 3s
  1. Builds Dockerfile.dev image from /client directory
  2. Runs on port CLIENT_EXPOSED_PORT
  3. The container uses environment variables from .env file
  4. Mounts /client directory to /usr/app
  5. Adds a volume for node dependencies /usr/app/node_modules.
  6. Waits for api service to be healthy
  7. Runs healthcheck every 10 seconds
    • checks that http://localhost:$CLIENT_PORT is reachable
API service (server)
api:
  build:
    context: ./server
    dockerfile: Dockerfile.dev
  # uncomment to debug on localhost
  # ports:
  #   - "5002:$PORT"
  env_file:
    - .env
    - path: env/docker/node
      required: false
    - path: env/docker/db
      required: false
    - path: env/docker/urls
      required: false
    - path: env/docker/sendgrid
      required: false
  volumes:
    - ./server/:/usr/app
    - /usr/app/node_modules
  depends_on:
    db-migrate:
      condition: service_completed_successfully
    db:
      condition: service_healthy
      restart: true
    sendgrid:
      condition: service_started
  healthcheck:
    test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:$PORT/api/calculations || exit 1"]
    interval: 10s
    retries: 10
    start_period: 10s
    timeout: 3s
  1. Builds Dockerfile.dev image from /server directory
  2. Exposes port 5002 if uncommented.
    • There doesn't seem to be a good reason to do this.
  3. The container uses environment variables from .env file
  4. Mounts /server directory to /usr/app
  5. Adds a volume for node dependencies /usr/app/node_modules.
  6. Waits for db service to be healthy
  7. Waits for db-migrate service to be completed successfully
  8. Waits for sendgrid service to be started
  9. Runs healthcheck every 10 seconds
    • checks that http://localhost:$PORT/api/calculations is reachable
Database migration service
db-migrate:
  image: redgate/flyway
  volumes:
    - ./server/db/migration:/flyway/sql
  depends_on:
    db-init:
      condition: service_completed_successfully
    db:
      condition: service_healthy
      restart: true
  command: [
      "-user=${DOCKER_DB_USER_NAME:?error}",
      "-password=${DOCKER_DB_PASSWORD:?error}",
      "-url=jdbc:sqlserver://${DOCKER_DB_SERVER_NAME:?error}:${DOCKER_DB_SERVER_PORT:?error};databaseName=${SQL_DATABASE_NAME:?error};trustServerCertificate=true",
      # uncomment for debug messages
      # "-X",
      "migrate",
    ]
  1. Uses the redgate/flyway image, which bundles the Flyway CLI.
  2. Mounts /server/db/migration directory to /flyway/sql, where flyway will look for migration scripts
  3. Waits for db-init service to be completed successfully
  4. Waits for db service to be healthy
  5. Runs flyway migrate command to apply database migrations.
Database initialization service
db-init:
  image: "mcr.microsoft.com/mssql/server:2019-latest"
  volumes:
    - ./server/db:/db
  depends_on:
    db:
      condition: service_healthy
      restart: true
  entrypoint:
    [
      "/bin/sh",
      "-c",
      '/opt/mssql-tools18/bin/sqlcmd -C -S ${DOCKER_DB_SERVER_NAME:?error} -U ${DOCKER_DB_USER_NAME:?error} -P ${DOCKER_DB_PASSWORD:?error} -Q "IF NOT EXISTS(SELECT * FROM sys.databases WHERE name = ''tdmdev'') CREATE DATABASE tdmdev; ELSE PRINT ''Database \"${DOCKER_DB_SERVER_NAME:?error}\" already exists. Operation successful.''" -b',
    ]
  1. Uses the same image as the db service.
  2. Mounts /server/db directory to /db.
  3. Waits for db service to be healthy
  4. Runs sqlcmd command to create the database
Database service
db:
  image: "mcr.microsoft.com/mssql/server:2019-latest"
  volumes:
    - sqlvolume:/var/opt/mssql
  ports:
    # exposed on localhost 1434
    - "1434:1433"
  environment:
    ACCEPT_EULA: Y
    SA_PASSWORD: Dogfood1!
    MSSQL_PID: Express
  healthcheck:
    test:
      [
        "CMD-SHELL",
        "/opt/mssql-tools18/bin/sqlcmd -C -S ${DOCKER_DB_SERVER_NAME:?error} -U ${DOCKER_DB_USER_NAME:?error} -P ${DOCKER_DB_PASSWORD:?error} -Q 'SELECT 1' -b",
      ]
    interval: 10s
    retries: 10
    start_period: 10s
    timeout: 3s
  1. Uses SQL Server 2019 image
  2. Mounts volume sqlvolume to /var/opt/mssql
  3. Exposed on port 1434 for external access from DBeaver.
  4. Sets environment variables for the container.
  5. Runs healthcheck
    • Checks that sqlcmd command can connect to the database
Sendgrid mock service
sendgrid:
  image: ghashange/sendgrid-mock:1.12.0
  ports:
    - "${SENDGRID_EXPOSED_PORT:-7001}:3000"
  environment:
    API_KEY: ${SENDGRID_API_KEY:?error}
  env_file:
    - path: env/docker/sendgrid
      required: false
  1. Uses ghashange/sendgrid-mock image
  2. Exposed on port SENDGRID_EXPOSED_PORT if set, defaulting to 7001
  3. Sets API_KEY environment variable
    • The API_KEY used in the API server needs to match this to authenticate with sendgrid, and it does, because they're using the same environment variable.

Configurations

There are settings that override the .env file to support the docker environment. This section explains what they do.

The existing settings are left alone so that the server and client can run outside the Docker environment as they normally would.

Note that port numbers are internal to the container network. For example, the database is exposed to the host at port 1434, but it is running on port 1433 inside the container. So the app running on the host will need to connect to port 1434 while the app running in another container will need to connect to port 1433. This is why there are *_EXPOSED_PORT settings.

Some of the overrides are the same as the ones in the team's dotenv file. They're still there for external developers to get the environment up and running without that file.
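
As a concrete example of the *_EXPOSED_PORT split, the same database has two addresses depending on where the client runs (values taken from the compose file described below):

```shell
#!/bin/sh
# One SQL Server container, two addresses:
FROM_HOST="localhost,1434"   # published port, for clients on the host (e.g. DBeaver)
FROM_CONTAINER="db,1433"     # service name + internal port, for other containers
echo "host clients connect to:      $FROM_HOST"
echo "container clients connect to: $FROM_CONTAINER"
```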

Client settings

This is to support the configuration of the client running on the host.

CLIENT_PORT=3000

# Use to simulate different environments (DEV, UAT, PROD)
VITE_ENV=DEV

# Don't open browser if running in CI or Docker
VITE_OPEN_BROWSER=true

VITE_API_URL=http://localhost:5001
  1. VITE_ENV doesn't do anything, but it's there for future use.
  2. VITE_OPEN_BROWSER opens a web browser once the server is running.
  3. VITE_API_URL points to where the API server is running.

Root .env

This is for docker compose use only. The database will need to run even in configurations that don't use it. This is to satisfy the Docker Compose dependency checks.

# Database
DOCKER_DB_SERVER_NAME=db
DOCKER_DB_SERVER_PORT=1433
DOCKER_DB_USER_NAME=sa
DOCKER_DB_PASSWORD=Dogfood1!

# Client
CLIENT_PORT=3000
CLIENT_EXPOSED_PORT=3001

# Server
PORT=5001

Ideally, whatever uses CLIENT_PORT would read it from the client .env file, and whatever uses PORT would read it from the server .env file.

Client settings overrides: env/docker/vite

CLIENT_PORT=3000
VITE_OPEN_BROWSER=false
VITE_API_URL=http://api:$PORT
  1. Don't launch a web browser, since it'll return an error and crash.
  2. Set the API server's URL to the api container.

External URLs overrides

This defines how a client or user can connect to the client and server from outside the development environment.

CLIENT_URL=http://localhost:$CLIENT_EXPOSED_PORT
SERVER_URL=http://localhost:$PORT
  1. Expose the client on port 3001.
  2. CLIENT_URL is used in the user registration email link and directs the user to the website.
  3. SERVER_URL is not being used.

Server settings overrides

PORT=5001
NODE_OPTIONS=--trace-deprecation
JWT_SECRET_KEY=TeachAPigToSing

Database settings overrides

SQL_SERVER_NAME=${DOCKER_DB_SERVER_NAME:?error}
SQL_SERVER_INSTANCE=
SQL_SERVER_PORT=${DOCKER_DB_SERVER_PORT:?error}
SQL_DATABASE_NAME=tdmdev
SQL_USER_NAME=${DOCKER_DB_USER_NAME:?error}
SQL_PASSWORD=${DOCKER_DB_PASSWORD:?error}
SQL_ENCRYPT=true
SQL_TRUST_SERVER_CERTIFICATE=true

Mock sendgrid settings overrides

SENDGRID_API_KEY=SG.secretAPIKey
SENDGRID_EXPOSED_PORT=7001
SENDGRID_HOST=sendgrid
  1. SENDGRID_API_KEY is used by sendgrid-mock to "authenticate".
  2. The API_KEY used in the API server needs to match this to authenticate with sendgrid, and it does, because they're using the same environment variable.
  3. Expose sendgrid-mock on port 7001.

Host database settings overrides

The port number needs to be hardcoded because variable substitution isn't available outside the docker compose environment.

SQL_SERVER_NAME=localhost
SQL_SERVER_PORT=1434
SQL_USER_NAME=sa
SQL_PASSWORD=Dogfood1!
SQL_TRUST_SERVER_CERTIFICATE=true

Host mock sendgrid settings overrides

The host value needs to be hardcoded because variable substitution isn't available outside the docker compose environment.

SENDGRID_API_KEY=SG.secretAPIKey
SENDGRID_EXPOSED_PORT=7001
SENDGRID_HOST=localhost

Scripts

The scripts are defined in the root package.json file and include start-env.sh and copy-env.sh.

start-env.sh calls copy-env.sh to enable the relevant override files, and then starts the development environment by calling docker compose or an npm run command, or both. Enabling an override file means copying it to a file that doesn't end in .example.
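
In simplified form, the dispatch looks something like this. It is an illustrative sketch, not the script's actual contents (the real script also selects which compose services and override files apply to each environment); it echoes the command it would run:

```shell
#!/bin/sh
# Sketch: map an environment name to the command that runs it.
run_env() {
  env_name="$1"; shift
  case "$env_name" in
    shared-db)        echo "npm start" ;;         # app on host + shared DB
    local-db|app|all) echo "docker compose $*" ;; # some or all services in docker
    *)                echo "unknown environment: $env_name" >&2; return 1 ;;
  esac
}

run_env all up -d   # prints: docker compose up -d
```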

package.json changes

  1. Added dotenvx package.

    • This is the successor to dotenv.
    • It allows us to load multiple env files.
  2. Modified scripts section in the server.

    "scripts": {
        "start": "nodemon",
        "start-local": "dotenvx run -f .env -f ../env/host/sendgrid -f ../env/host/db --overload -- nodemon"
    }
    1. The host npm will run the start script like before to connect to the shared database.
    2. The host npm will run the start-local script to connect to the local database, which will load the host override files from another directory.
    3. The Dockerized npm will run the start script, and have the .env file loaded by docker compose.
  3. Modified scripts section in the root

    "scripts": {
        "start": "concurrently --names \"BACKEND,FRONTEND\" -c \"bgBlue.bold,bgMagenta.bold\" \"npm run start:backend\" \"npm run start:frontend\" ",
        "start:local-db": "concurrently --names \"BACKEND,FRONTEND\" -c \"bgBlue.bold,bgMagenta.bold\" \"npm run start:backend-local\" \"npm run start:frontend\" ",
        "start:backend": "cd ./server && npm start",
        "start:backend-local": "cd ./server && npm run start-local",
        "env": "bash scripts/start-env.sh"
    }
    1. The start and start:backend scripts are the same as before
    2. The start:local-db runs start:frontend and the new start:backend-local command
    3. The start:backend-local runs the start-local npm command in the server directory, which connects to the local dockerized database.
    4. The env script runs the start-env.sh script, which starts one of the development environments, or it can end up calling the start or the start:local-db script here.

Helpful tools/commands

  1. LazyDocker tool to manage docker containers in the terminal

  2. Dive utility to inspect docker images

  3. Inspect docker compose config

    • It's helpful for looking at the compose file after the environment variables have been expanded.
    docker compose config
  4. Inspect docker compose logs

    docker compose logs
  5. Debug docker builds

    # Build the api image
    docker compose build api --no-cache --progress=plain
    • Use the --no-cache option to build the image without using the cache.
    • Use the --progress=plain option to avoid the progress bar.

    See this page about debugging docker builds.

  6. Start a shell in a running container

    # Start a shell in the running api container.
    docker compose exec api sh
    # The alpine image doesn't have bash, so we need to use the `sh` command.
    
    # Start a shell in using the docker image for the api container.
    docker compose run --rm api sh
    # `--rm` removes the container after the shell is closed.

Resources

Decisions made

  1. Used node lts-alpine base image for the client and api server.

    • This is the same as in the production Dockerfile.
    • The volta section in package.json has node 20, which also works.
  2. Used SQL Server 2019 base image for the database.

    • This is from the wiki instructions, although I've also seen 2017 elsewhere.
    • We might want to move to SQL Server 2022 in the future, but it depends on what version the production database server is running. It's straightforward to change.
  3. Used redgate/flyway rather than node-flywaydb or node-flyway.

    • The good:
      1. It has a docker image option for ease of use with docker compose.
      2. It's the underlying application which the node packages use.
    • The bad:
      1. redgate/flyway is not callable from node like the other options.
      2. The repo for node-flywaydb, which we currently use, was archived on 2024-01-20, so it's no longer maintained.
      3. node-flyway is maintained but doesn't yet have documentation on cli usage.
    • The bottomline:
      1. It's the best choice for docker usage.
        • The other options are either outdated or lack CLI support.
        • We really just want to call it from the cli anyway, even though we have a node app.
  4. Added sendgrid service

    • The good:
      1. This enables running sendgrid email tests locally.
        • No need to connect to the network for development
    • The bottomline:
      1. It's the last piece to have everything working locally inside docker containers.
  5. Added db-init service

    • The good:
      1. This enables the database to be created if it doesn't exist.
        • It saves a manual step.
      2. It uses the same database image as the db.
        • It saves space and contains all the db utilities.
    • The bad:
      1. This is why we need to move the .env file to the root directory.
        • The command being run makes use of the server environment variables for the database connection.
    • The bottomline:
      1. It saves a manual step.
        • We can simplify the setup documentation.
  6. Added db-migrate service

    • The good:
      1. This enables running database migrations.
        • It saves a manual step.
      2. It uses the same database image as the db.
        • It saves space and contains all the db utilities.
    • The bad:
      1. This is why we need to move the .env file to the root directory.
        • The command being run makes use of the server environment variables for the database connection.
    • The bottomline:
      1. It saves a manual step.
        • We can simplify the setup documentation.
  7. Use a docker volume for the database data.

    • The good:
      1. This makes the database persist between container restarts.
        • The database server can be upgraded without losing data, because the data is stored in a volume.
    • The bottomline:
      1. It lets us test database server upgrades more easily.
  8. External port number mappings different from the defaults.

    • The good:
      1. It lets us use/test different services from outside the containers.
      2. It allows us to potentially run the local development environment and the docker containers side-by-side.
        • These are values convenient to me, but they can be changed.
          1. client is exposed on port CLIENT_EXPOSED_PORT => 3001
            • We need this to use the client from a web browser.
          2. api is not exposed since I can't think of a use case for it
            • If it ever becomes necessary, I would suggest port 5002 since the local server port is 5001
          3. db is exposed on port 1434 (hardcoded in the compose file)
            • An external client can connect to the database server.
          4. sendgrid is exposed on port SENDGRID_EXPOSED_PORT => 7001
            • This is from the example I used and for no other reason.
    • The bottomline:
      1. It keeps the existing local development environment setup working and allows adding the docker containers running side-by-side.
  9. Use docker cache mount to reduce rebuild time.

    • The good:
      1. This makes the rebuilds faster.
        • It caches the downloaded data, so it doesn't have to be downloaded again.
    • The bottomline:
      1. It makes the rebuilds faster.
  10. Convert vite.config.js to read environment variables.

    • The good:
      1. This makes it possible to run vite in different environments.
        • The old values are converted into defaults.
    • The bottomline:
      1. This is necessary to make it run in both the local host and in docker.
  11. Added a root .env file along with several *.example env files in the env directory.

    • The good:
      1. This maintains a non-breaking change.
        • It keeps the local development environment settings as the default while having optional override settings for docker and host environments.
        • The included scripts automatically apply the correct override files for the environment the developer wants to run.
    • The bad:
      1. It adds a lot of env files.
    • Future work:
      1. Some of the override files can be combined to reduce the number of env files and some settings can be moved to different files if they make more sense there.
    • The bottomline:
      1. This adds docker as an option to TDM developers.
  12. There are 6 containers from 5 images.

    • The good:
      1. This is how a docker environment should work.
        • Each service runs in its own container.
    • The bad:
      1. The images take up a lot of space: ~3.5GB total.
        • SQL Server 2019 = 1.49GB
        • flyway = 1.09GB
        • sendgrid-mock = 214MB
        • api = 279MB (base node:lts-alpine image is ~130MB)
        • client = 461MB (base node:lts-alpine image is ~130MB)
    • Future work:
      1. Can add an adminer container for database management.
        • It moves the functionality into the docker environment, where we currently have to install DBeaver or another client separately.
        • It's low priority but an option.
    • The bottomline:
      1. The docker environment becomes a 1-step setup for external developers.
  13. Replaced dotenv with dotenvx.

    • The good:
      1. It enables us to load multiple environment files.
        • Just pass the .env file paths to dotenvx before starting the application
    • The bottomline:
      1. It doesn't affect the existing setup.
  14. Added docker compose service dependencies.

    • The good:
      1. This imposes the correct startup order
        • It makes sure that the database server is responsive, the database is created, and the migrations are run before the server starts.
        • The ordering is db => db-init => db-migrate => api => client.
        • Also sendgrid => api.
    • The bad:
      1. This is why we need to move the .env file to the root directory.
        • The db healthcheck makes use of the server environment variables for the database connection.
        • The client healthcheck command makes use of the client environment variables for the client connection.
        • The api healthcheck command makes use of the server environment variables for the api endpoint connection.
    • The bottomline:
      1. It makes containers always run in the same order.