Docker Development Environment - hackforla/tdm-calculator GitHub Wiki
- Overview
- Setting up the development environment
- Explanations
- Helpful tools/commands
- Resources
- Decisions made
For development purposes, you may want to run the web api server as a docker container with `NODE_ENV="development"`. This allows you to modify node/express code and have the container watch for changes to files and automatically restart the express server using nodemon. This gives you a development experience similar to what you would have if you ran express directly on your machine.

The main advantage of doing development with docker hosting the express server is that the environment exactly replicates the production environment with respect to the node and npm versions and the versions of all the npm packages.
This uses the `compose.yml` docker compose script, which, in turn, uses the `Dockerfile.dev` build instructions to build an image suitable for running during development. A production build uses a slimmed-down image defined in `Dockerfile` and explained in the Deployment section.
Create a blank `server/.env` file.

```sh
touch server/.env
```

Use the `dotenv` file from gDrive in place of `.env.example`. Move the `dotenv` file into the `server/` directory.

```sh
# Rename to .env
mv dotenv server/.env
```

Build the docker images.

```sh
# build the docker images: api and client
npm run env -- all build
```
Choose one of the following setups.
```mermaid
---
config:
  quadrantChart:
    chartWidth: 400
    chartHeight: 400
  themeVariables:
    quadrant1TextFill: "ff0000"
---
quadrantChart
  title Development Environment Choices
  x-axis app on host --> app in docker
  y-axis shared DB --> DB in docker
  quadrant-1 all - All in docker
  quadrant-2 local-db - Local DB
  quadrant-3 shared-db - Shared DB
  quadrant-4 app - App in docker
```
[shared-db] Running the app on host + connecting to the shared database (current setup)
- This is the current way to run the development environment.
- Be sure to run `npm install` in the `server/` directory and `npm install --legacy-peer-deps` in the `client/` directory before running this command.

```sh
# start the app locally
npm run env -- shared-db
# app is running at http://localhost:3000

# do development work

# stop the app locally
ctrl-c
```
[app] Running the app in docker + connecting to the shared database (containerized app)
- This might be a good replacement for the current setup.
```sh
# start the development environment
npm run env -- app up -d
# app is running at http://localhost:3001

# do development work

# stop the development environment
npm run env -- app down
```
[all] Running everything in docker
```sh
# start the development environment
npm run env -- all up -d
# app is running at http://localhost:3001
# sendgrid mock service is at http://localhost:7001

# do development work

# stop the development environment
npm run env -- all down
# add -v flag to remove the volumes: docker compose down -v
```
[local-db] Running the database in docker + running the app on host

```sh
# start the database and sendgrid mock service in docker
npm run env -- local-db up -d
# sendgrid mock service is at http://localhost:7001

# start the app locally, connected to the dockerized database
npm run start:local-db
# app is running at http://localhost:3000

# do development work

# stop the app locally
ctrl-c

# stop the database and sendgrid mock service
npm run env -- local-db down
```
From the host machine, we can access the services in the Docker containers via these ports.

- The frontend in the Docker container is exposed at http://localhost:3001.
- The frontend on the host is exposed at http://localhost:3000.
- The backend on the host is exposed at http://localhost:5001. We avoid port 5000 because Macs use it for AirPlay, along with port 7000.
- The backend in the Docker container is not exposed by default, but can be exposed at http://localhost:5002 by uncommenting it in the compose file.
- The mock sendgrid service is exposed at http://localhost:7001. We avoid port 7000 because Macs use it for AirPlay, along with port 5000.
  - Choose the html view to see the link being sent.
- Our shared Azure development database is configured in `server/.env`.
- The database in the Docker container is exposed at localhost:1434.
The `npm run env` command is a wrapper around the `docker compose` command. It is the main entry point for running most configurations of the development environment. Everything after the `--` is passed to the `docker compose` command.

```sh
npm run env -- <environment> <action> [--flags]
```
- all - Run all services in Docker containers.
- app - Run the app in Docker containers and connect to the shared database.
- local-db - Run the app locally and the database and sendgrid mock services in Docker containers.
- shared-db - Run the app locally and connect to the shared database. This one doesn't really use Docker, but it follows the same command structure.

See comments in `scripts/start-env.sh` for more details.
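To illustrate the shape of the wrapper, here is a hypothetical sketch of the environment-name dispatch — not the actual contents of `scripts/start-env.sh` (see that file for the real logic, which also enables the override env files first):

```shell
#!/bin/sh
# Hypothetical sketch: map the environment name to the command that runs it.
# shared-db runs the app on the host; the other names drive docker compose.
build_env_command() {
  env_name="$1"; shift
  case "$env_name" in
    shared-db)
      # no docker: run the app on the host against the shared database
      echo "npm start" ;;
    local-db)
      # database + sendgrid mock in docker; the app runs on the host
      echo "docker compose $* db sendgrid" ;;
    app|all)
      # app (and, for "all", everything else) in docker
      echo "docker compose $*" ;;
    *)
      echo "unknown environment: $env_name" >&2; return 1 ;;
  esac
}

build_env_command all up -d
```

The real script may use compose profiles or multiple compose files instead of listing service names; the sketch only shows how the first argument selects a mode while the rest pass through to `docker compose`.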
- Build - do this once to build the images. Run this command every time you make changes to the `Dockerfile` files or to the dependencies.

  ```sh
  npm run env -- all build
  # add --no-cache flag to build the images without using the cache
  # add --progress=plain flag to avoid the progress bar
  # add --pull flag to pull the images
  ```

- Up - do this to start the development environment

  ```sh
  npm run env -- all up -d
  # remove -d flag to start in foreground mode
  ```

- Down - do this to stop the `docker compose` environment

  ```sh
  npm run env -- all down
  # add -v flag to remove the volumes, meaning deleting the database data
  # add --remove-orphans flag to remove containers for services not defined in the compose file
  ```

- Config - do this to inspect the `docker compose` environment for debugging

  ```sh
  npm run env -- all config
  # add --environment flag to show environment variables
  ```

- Logs - do this to inspect the `docker compose` logs

  ```sh
  npm run env -- all logs --follow
  ```
- Six (6) services to run the software
  - client
  - api
  - db-migrate
  - db-init
  - db
  - sendgrid
- One (1) volume to persist database data

  ```yaml
  volumes:
    sqlvolume:
  ```

- `.env` files

  ```
  ├── env
  │   ├── docker
  │   │   ├── urls.example
  │   │   ├── sendgrid.example
  │   │   ├── node.example
  │   │   ├── vite.example
  │   │   └── db.example
  │   └── host
  │       ├── sendgrid.example
  │       └── db.example
  ├── client
  │   └── .env
  ├── server
  │   ├── .env
  │   └── .env.example
  └── .env.example
  ```
- The `.example` files are starting configs that will be "enabled" by being copied and removing the `.example` suffix.
- The `env/` directory contains config files that override the client and server `.env` files.
  - `docker/` contains configs used by Docker environments.
  - `host/` contains configs used by the host environment.
- The client and server contain their own config files.
- The root `.env` file contains configuration for the compose file.
```yaml
client:
  build:
    context: ./client
    dockerfile: Dockerfile.dev
  ports:
    - "$CLIENT_EXPOSED_PORT:$CLIENT_PORT"
  env_file:
    - .env
    - path: ./env/docker/vite
      required: false
  volumes:
    - ./client/:/usr/app
    - /usr/app/node_modules
  depends_on:
    api:
      condition: service_healthy
      restart: true
  healthcheck:
    test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:$CLIENT_PORT || exit 1"]
    interval: 10s
    retries: 10
    start_period: 10s
    timeout: 3s
```
- Builds `Dockerfile.dev` image from the `/client` directory
- Runs on port `CLIENT_EXPOSED_PORT`
- The container uses environment variables from the `.env` file
- Mounts the `/client` directory to `/usr/app`
- Adds a volume for node dependencies at `/usr/app/node_modules`
- Waits for the `api` service to be healthy
- Runs a healthcheck every 10 seconds
  - checks that `http://localhost:$CLIENT_PORT` is reachable
```yaml
api:
  build:
    context: ./server
    dockerfile: Dockerfile.dev
  # uncomment to debug on localhost
  # ports:
  #   - "5002:$PORT"
  env_file:
    - .env
    - path: env/docker/node
      required: false
    - path: env/docker/db
      required: false
    - path: env/docker/urls
      required: false
    - path: env/docker/sendgrid
      required: false
  volumes:
    - ./server/:/usr/app
    - /usr/app/node_modules
  depends_on:
    db-migrate:
      condition: service_completed_successfully
    db:
      condition: service_healthy
      restart: true
    sendgrid:
      condition: service_started
  healthcheck:
    test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:$PORT/api/calculations || exit 1"]
    interval: 10s
    retries: 10
    start_period: 10s
    timeout: 3s
```
- Builds `Dockerfile.dev` image from the `/server` directory
- Exposes port 5002 if uncommented.
  - There doesn't seem to be a good reason to do this.
- The container uses environment variables from the `.env` file
- Mounts the `/server` directory to `/usr/app`
- Adds a volume for node dependencies at `/usr/app/node_modules`
- Waits for the `db` service to be healthy
- Waits for the `db-migrate` service to complete successfully
- Waits for the `sendgrid` service to start
- Runs a healthcheck every 10 seconds
  - checks that `http://localhost:$PORT/api/calculations` is reachable
```yaml
db-migrate:
  image: redgate/flyway
  volumes:
    - ./server/db/migration:/flyway/sql
  depends_on:
    db-init:
      condition: service_completed_successfully
    db:
      condition: service_healthy
      restart: true
  command: [
      "-user=${DOCKER_DB_USER_NAME:?error}",
      "-password=${DOCKER_DB_PASSWORD:?error}",
      "-url=jdbc:sqlserver://${DOCKER_DB_SERVER_NAME:?error}:${DOCKER_DB_SERVER_PORT:?error};databaseName=${SQL_DATABASE_NAME:?error};trustServerCertificate=true",
      # uncomment for debug messages
      # "-X",
      "migrate",
    ]
```
- Uses the `redgate/flyway` image.
- Mounts the `/server/db/migration` directory to `/flyway/sql`, where flyway will look for migration scripts
- Waits for the `db-init` service to complete successfully
- Waits for the `db` service to be healthy
- Runs the flyway `migrate` command to apply database migrations.
```yaml
db-init:
  image: "mcr.microsoft.com/mssql/server:2019-latest"
  volumes:
    - ./server/db:/db
  depends_on:
    db:
      condition: service_healthy
      restart: true
  entrypoint:
    [
      "/bin/sh",
      "-c",
      '/opt/mssql-tools18/bin/sqlcmd -C -S ${DOCKER_DB_SERVER_NAME:?error} -U ${DOCKER_DB_USER_NAME:?error} -P ${DOCKER_DB_PASSWORD:?error} -Q "IF NOT EXISTS(SELECT * FROM sys.databases WHERE name = ''tdmdev'') CREATE DATABASE tdmdev; ELSE PRINT ''Database \"${DOCKER_DB_SERVER_NAME:?error}\" already exists. Operation successful.''" -b',
    ]
```
- Uses the same image as the `db` service.
- Mounts the `/server/db` directory to `/db`.
- Waits for the `db` service to be healthy
- Runs a sqlcmd command to create the database if it doesn't exist
```yaml
db:
  image: "mcr.microsoft.com/mssql/server:2019-latest"
  volumes:
    - sqlvolume:/var/opt/mssql
  ports:
    # exposed on localhost 1434
    - "1434:1433"
  environment:
    ACCEPT_EULA: Y
    SA_PASSWORD: Dogfood1!
    MSSQL_PID: Express
  healthcheck:
    test:
      [
        "CMD-SHELL",
        "/opt/mssql-tools18/bin/sqlcmd -C -S ${DOCKER_DB_SERVER_NAME:?error} -U ${DOCKER_DB_USER_NAME:?error} -P ${DOCKER_DB_PASSWORD:?error} -Q 'SELECT 1' -b",
      ]
    interval: 10s
    retries: 10
    start_period: 10s
    timeout: 3s
```
- Uses the SQL Server 2019 image
- Mounts the volume `sqlvolume` to `/var/opt/mssql`
- Exposed on port 1434 for external access from DBeaver.
- Sets environment variables for the container.
- Runs a healthcheck
  - Checks that a sqlcmd command can connect to the database
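For reference, an external client such as DBeaver can connect to the dockerized database with settings like the following (values taken from the compose file above and the host override env files; adjust if you've changed them):

```
Host:     localhost
Port:     1434
Database: tdmdev
Username: sa
Password: Dogfood1!
```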
```yaml
sendgrid:
  image: ghashange/sendgrid-mock:1.12.0
  ports:
    - "${SENDGRID_EXPOSED_PORT:-7001}:3000"
  environment:
    API_KEY: ${SENDGRID_API_KEY:?error}
  env_file:
    - path: env/docker/sendgrid
      required: false
```
- Uses the `ghashange/sendgrid-mock` image
- Exposed on port 7001 unless `SENDGRID_EXPOSED_PORT` is set
- Sets the `API_KEY` environment variable
- The `API_KEY` used in the API server needs to match this to authenticate with sendgrid, and it does, because they're using the same environment variable.
There are settings that override the `.env` file to support the docker environment. This section explains what they do. The existing settings are left alone so that the server and client can run outside the Docker environment as normal.

Note that port numbers are internal to the container network. For example, the database is exposed to the host at port 1434, but it is running on port 1433 inside the container. So the app running on the host needs to connect to port 1434, while an app running in another container needs to connect to port 1433. This is why there are `*_EXPOSED_PORT` settings.
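To make the distinction concrete, here is how the same database connection looks from each side (values drawn from the override files documented in this section):

```
# App on the host: connect through the port exposed on localhost
SQL_SERVER_NAME=localhost
SQL_SERVER_PORT=1434

# App in a container: connect to the service name and internal port
SQL_SERVER_NAME=db
SQL_SERVER_PORT=1433
```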
Some of the overrides are the same as the ones in the team's `dotenv` file. They're still there so that external developers can get the environment up and running without that file.
This is to support the configuration of the client running on the host.
```
CLIENT_PORT=3000
# Use to simulate different environments (DEV, UAT, PROD)
VITE_ENV=DEV
# Don't open browser if running in CI or Docker
VITE_OPEN_BROWSER=true
VITE_API_URL=http://localhost:5001
```
- `VITE_ENV` doesn't do anything yet, but it's there for future use.
- `VITE_OPEN_BROWSER` opens a web browser once the server is running.
- `VITE_API_URL` points to where the API server is running.
This is for docker compose use only. The database will need to run even in configurations that don't use it. This is to satisfy the Docker Compose dependency checks.
```
# Database
DOCKER_DB_SERVER_NAME=db
DOCKER_DB_SERVER_PORT=1433
DOCKER_DB_USER_NAME=sa
DOCKER_DB_PASSWORD=Dogfood1!

# Client
CLIENT_PORT=3000
CLIENT_EXPOSED_PORT=3001

# Server
PORT=5001
```
Whatever's using `CLIENT_PORT` should really be reading the client `.env` file for it. Whatever's using `PORT` should really be reading the server `.env` file for it.
```
CLIENT_PORT=3000
VITE_OPEN_BROWSER=false
VITE_API_URL=http://api:$PORT
```
- Don't launch a web browser, since it'll return an error and crash.
- Set the API server's URL to the `api` container.
This defines how a client or user can connect to the client and server from outside the development environment.
```
CLIENT_URL=http://localhost:$CLIENT_EXPOSED_PORT
SERVER_URL=http://localhost:$PORT
```
- Expose the client on port 3001.
- `CLIENT_URL` is used in the user registration email link and directs the user to the website.
- `SERVER_URL` is not being used.
```
PORT=5001
NODE_OPTIONS=--trace-deprecation
JWT_SECRET_KEY=TeachAPigToSing
SQL_SERVER_NAME=${DOCKER_DB_SERVER_NAME:?error}
SQL_SERVER_INSTANCE=
SQL_SERVER_PORT=${DOCKER_DB_SERVER_PORT:?error}
SQL_DATABASE_NAME=tdmdev
SQL_USER_NAME=${DOCKER_DB_USER_NAME:?error}
SQL_PASSWORD=${DOCKER_DB_PASSWORD:?error}
SQL_ENCRYPT=true
SQL_TRUST_SERVER_CERTIFICATE=true
SENDGRID_API_KEY=SG.secretAPIKey
SENDGRID_EXPOSED_PORT=7001
SENDGRID_HOST=sendgrid
```
- `SENDGRID_API_KEY` is used by sendgrid-mock to "authenticate".
- The `API_KEY` used in the API server needs to match this to authenticate with sendgrid, and it does, because they're using the same environment variable.
- Expose sendgrid-mock on port 7001.
The port number needs to be hardcoded because variable substitution isn't available outside the docker compose environment.
```
SQL_SERVER_NAME=localhost
SQL_SERVER_PORT=1434
SQL_USER_NAME=sa
SQL_PASSWORD=Dogfood1!
SQL_TRUST_SERVER_CERTIFICATE=true
```
The host value needs to be hardcoded because variable substitution isn't available outside the docker compose environment.
```
SENDGRID_API_KEY=SG.secretAPIKey
SENDGRID_EXPOSED_PORT=7001
SENDGRID_HOST=localhost
```
The scripts are defined in the root `package.json` file and include `start-env.sh` and `copy-env.sh`.

`start-env.sh` calls `copy-env.sh` to enable the relevant override files, and then starts the development environment by calling `docker compose` or an `npm run` command, or both. Enabling an override file means copying it to a file that doesn't end in `.example`.
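A minimal sketch of that "enabling" step (hypothetical — the real logic lives in `scripts/copy-env.sh` and may differ): copy each `*.example` file in a directory to the same name without the suffix, leaving any file the developer has already customized alone.

```shell
#!/bin/sh
# Hypothetical sketch of copy-env.sh: enable override files by stripping
# the .example suffix. Does not overwrite files that already exist.
enable_overrides() {
  dir="$1"  # e.g. env/docker or env/host
  for example in "$dir"/*.example; do
    [ -e "$example" ] || continue      # skip when the glob matched nothing
    target="${example%.example}"       # drop the .example suffix
    [ -e "$target" ] || cp "$example" "$target"
  done
}

enable_overrides env/docker
```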
- Added the `dotenvx` package.
  - This is the successor to `dotenv`.
  - It allows us to load multiple env files.
- Modified the `scripts` section in the server.

  ```json
  "scripts": {
    "start": "nodemon",
    "start-local": "dotenvx run -f .env -f ../env/host/sendgrid -f ../env/host/db --overload -- nodemon"
  }
  ```

  - The host npm will run the `start` script like before to connect to the shared database.
  - The host npm will run the `start-local` script to connect to the local database, which will load the host override files from another directory.
  - The Dockerized npm will run the `start` script, and have the `.env` file loaded by docker compose.
- Modified the `scripts` section in the root.

  ```json
  "scripts": {
    "start": "concurrently --names \"BACKEND,FRONTEND\" -c \"bgBlue.bold,bgMagenta.bold\" \"npm run start:backend\" \"npm run start:frontend\" ",
    "start:local-db": "concurrently --names \"BACKEND,FRONTEND\" -c \"bgBlue.bold,bgMagenta.bold\" \"npm run start:backend-local\" \"npm run start:frontend\" ",
    "start:backend": "cd ./server && npm start",
    "start:backend-local": "cd ./server && npm run start-local",
    "env": "bash scripts/start-env.sh"
  }
  ```

  - The `start` and `start:backend` scripts are the same as before.
  - The `start:local-db` script runs `start:frontend` and the new `start:backend-local` command.
  - The `start:backend-local` script runs the `start-local` npm command in the server directory, which connects to the local dockerized database.
  - The `env` script runs the `start-env.sh` script, which starts one of the development environments, or it can end up calling the `start` or the `start:local-db` script here.
- LazyDocker tool to manage docker containers in the terminal
- Dive utility to inspect docker images
- Inspect the `docker compose` config
  - It's helpful for looking at the compose file after the environment variables have been expanded.

  ```sh
  docker compose config
  ```
- Inspect the `docker compose` logs

  ```sh
  docker compose logs
  ```
- Debug docker builds

  ```sh
  # Build the api image
  docker compose build api --no-cache --progress=plain
  ```

  - Use the `--no-cache` option to build the image without using the cache.
  - Use the `--progress=plain` option to avoid the progress bar.
  - See this page about debugging docker builds.
- Start a shell in a running container

  ```sh
  # Start a shell in the running api container.
  # The alpine image doesn't have bash, so we need to use the `sh` command.
  docker compose exec api sh

  # Start a shell using the docker image for the api container.
  # `--rm` removes the container after the shell is closed.
  docker compose run --rm api sh
  ```
- flyway docker quick start
- docker compose startup order
- run SQL Server image with Docker
- flyway check with SQL Server as an example of assigning the url value for SQL Server
- docker CMD vs. ENTRYPOINT
- flyway Dockerfile
- use env vars in flyway parameters
- flyway parameters reference
- Used the `node lts-alpine` base image for the client and api server.
  - This is the same as in the production Dockerfile.
  - The `volta` section in `package.json` has node 20, which also works.
- Used the `SQL Server 2019` base image for the database.
  - This is from the wiki instructions, although I've also seen 2017 elsewhere.
  - We might want to move to `SQL Server 2022` in the future, but it depends on what version the production database server is. It's straightforward to change.
- Used `redgate/flyway` rather than `node-flywaydb` or `node-flyway`.
  - The good:
    - It has a docker image option for ease of use with `docker compose`.
    - It's the underlying application which the node packages use.
  - The bad:
    - `redgate/flyway` is not callable from node like the other options.
    - Our currently used `node-flywaydb`'s repo was archived on 2024-01-20, meaning it's considered outdated.
    - `node-flyway` is maintained but doesn't yet have documentation on cli usage.
  - The bottom line:
    - It's the best choice for docker usage.
    - The other options are either outdated or don't support cli usage.
    - We really just want to call it from the cli anyway, even though we have a node app.
- Added the `sendgrid` service
  - The good:
    - This enables running sendgrid email tests locally.
      - No need to connect to the network for development.
  - The bottom line:
    - It's the last piece to have everything working locally inside docker containers.
- Added the `db-init` service
  - The good:
    - This enables the database to be created if it doesn't exist.
      - It saves a manual step.
    - It uses the same database image as the db.
      - It saves space and contains all the db utilities.
  - The bad:
    - This is why we need to move the `.env` file to the root directory.
      - The command being run makes use of the server environment variables for the database connection.
  - The bottom line:
    - It saves a manual step.
      - We can simplify the setup documentation.
- Added the `db-migrate` service
  - The good:
    - This enables running database migrations.
      - It saves a manual step.
    - It uses the official `redgate/flyway` image, which contains the flyway utilities.
  - The bad:
    - This is why we need to move the `.env` file to the root directory.
      - The command being run makes use of the server environment variables for the database connection.
  - The bottom line:
    - It saves a manual step.
      - We can simplify the setup documentation.
- Use a docker volume for the database data.
  - The good:
    - This makes the database persist between container restarts.
      - The database server can be upgraded without losing data, because the data is stored in a volume.
  - The bottom line:
    - It lets us test database server upgrades more easily.
- External port number mappings different from the defaults.
  - The good:
    - It lets us use/test different services from outside the containers.
    - It allows us to potentially run the local development environment and the docker containers side-by-side.
    - These are values convenient to me, but they can be changed.
      - `client` is exposed on port `CLIENT_EXPOSED_PORT` => 3001
        - We need this to use the client from a web browser.
      - `api` is not exposed since I can't think of a use case for it
        - If it ever becomes necessary, I would suggest port 5002 since the local server port is 5001
      - `db` is exposed on port 1434 (hardcoded in the compose file)
        - An external client can connect to the database server.
      - `sendgrid` is exposed on port `SENDGRID_EXPOSED_PORT` => 7001
        - This is from the example I used and for no other reason.
  - The bottom line:
    - It keeps the existing local development environment setup working and allows adding the docker containers running side-by-side.
- Use a docker cache mount to reduce rebuild time.
  - The good:
    - This makes the rebuilds faster.
      - It caches the downloaded data, so it doesn't have to be downloaded again.
  - The bottom line:
    - It makes the rebuilds faster.
- Convert `vite.config.js` to read environment variables.
  - The good:
    - This makes it possible to run vite in different environments.
      - The old values are converted into defaults.
  - The bottom line:
    - This is necessary to make it run both on the local host and in docker.
- Added a root `.env` file along with several `*.example` env files in the `env` directory.
  - The good:
    - This maintains a non-breaking change.
      - It keeps the local development environment settings as the default while having optional override settings for docker and host environments.
      - The included scripts automatically apply the correct override files for the environment the developer wants to run.
  - The bad:
    - It adds a lot of env files.
  - Future work:
    - Some of the override files can be combined to reduce the number of env files, and some settings can be moved to different files if they make more sense there.
  - The bottom line:
    - This adds docker as an option for TDM developers.
- There are 6 containers from 5 images.
  - The good:
    - Each service runs in its own container, which is how a docker environment should work.
  - The bad:
    - The images take up a lot of space: ~3.5GB total.
      - SQL Server 2019 = 1.49GB
      - flyway = 1.09GB
      - sendgrid-mock = 214MB
      - api = 279MB (base `node:lts-alpine` image is ~130MB)
      - client = 461MB (base `node:lts-alpine` image is ~130MB)
  - Future work:
    - Can add an `adminer` container for database management.
      - It moves the functionality into the docker environment, where we currently have to install `DBeaver` or another client separately.
      - It's low priority but an option.
  - The bottom line:
    - The docker environment becomes a 1-step setup for external developers.
- Replaced `dotenv` with `dotenvx`.
  - The good:
    - It enables us to load multiple environment files.
      - Just pass the `.env` file paths to `dotenvx` before starting the application.
  - The bottom line:
    - It doesn't affect the existing setup.
- Added docker compose service dependencies.
  - The good:
    - This imposes the correct startup order.
      - It makes sure that the database server is responsive, the database is created, and the migrations are run before the server starts.
      - The ordering is `db` => `db-init` => `db-migrate` => `api` => `client`. Also `sendgrid` => `api`.
  - The bad:
    - This is why we need to move the `.env` file to the root directory.
      - The `db` healthcheck makes use of the server environment variables for the database connection.
      - The `client` healthcheck command makes use of the client environment variables for the client connection.
      - The `api` healthcheck command makes use of the server environment variables for the api endpoint connection.
  - The bottom line:
    - It makes containers always run in the same order.
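The startup ordering implied by the `depends_on` conditions can be sketched in the same mermaid style used earlier on this page:

```mermaid
flowchart LR
  db --> dbinit["db-init"] --> dbmigrate["db-migrate"] --> api --> client
  sendgrid --> api
```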