Docker Development Environment - hackforla/tdm-calculator GitHub Wiki

# Docker Development Environment
For development purposes, you may want to run the web api server as a docker container with `NODE_ENV="development"`. This allows you to modify node/express code and have the container watch for file changes and automatically restart the express server using `nodemon`. This gives you a development experience similar to what you would have if you ran express directly on your native machine.

The main advantage of doing development with docker hosting the express server is that the environment exactly replicates the production environment with respect to node and npm versions, as well as the versions of all the npm packages.
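The watch-and-restart workflow relies on `nodemon`. A typical wiring looks like the fragment below; this is illustrative only — check `server/package.json` for the project's actual script names and entry point:

```json
{
  "scripts": {
    "dev": "nodemon server.js"
  }
}
```

With a script like this, the container's dev command runs `nodemon`, which restarts the express process whenever a mounted source file changes.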
This uses the `docker-compose.yml` docker compose script, which, in turn, uses the `Dockerfile.dev` build instructions to build an image that is suitable for running during development. A production build uses a slimmed-down image defined in `Dockerfile` and explained in the Deployment section.
## Setting up the development environment for external developers
For external developers, this is the best way to get everything running.
### Building a dev version of the web api server
- Copy the `.env.example` file

  ```bash
  cp .env.example .env
  ```
- Build and run the dev version of the web api server, database, and mock sendgrid email service

  ```bash
  docker-compose up -d
  # add --build flag to rebuild the image: docker-compose up --build
  # remove -d flag to start in foreground mode: docker-compose up
  ```

  The frontend is exposed at http://localhost:3001
- Sendgrid mock service
  - User registration will send an email to the mock sendgrid service, which can be found at http://localhost:7000 by default.
  - Choose the HTML view to see the link being sent.
- Stop the development environment

  ```bash
  docker-compose down
  # add -v flag to remove the volumes: docker-compose down -v
  ```
## Setting up the development environment for TDM developers
There are two ways to set up the Docker development environment for TDM developers:

- Use Docker to run the web client and api server against the shared development database on Azure.
- Use Docker to run the entire stack locally.
### `.env` file

For TDM team members, use the `dotenv` file from gDrive in place of `.env.example`.

- Copy the `dotenv` file from gDrive into the project root

  ```bash
  # Rename it as .env in the project root
  mv dotenv .env
  ```
### Configure Docker settings
#### Using Docker for the client and api server only
- Open `.env` and uncomment the `Database`, `Client`, and `External URLs` sections under `Docker settings overrides`
  - Note that all the containers will still run to satisfy the dependency checks, but they will not be used.
#### Using Docker for the entire stack
- Open `.env` and uncomment everything under `Docker settings overrides`
### Run the development environment
- Run `docker-compose up`

  ```bash
  docker-compose up -d
  # add --build flag to rebuild the image: docker-compose up --build
  # remove -d flag to start in foreground mode: docker-compose up
  ```

  The frontend is exposed at http://localhost:3001
- Sendgrid mock service
  - User registration will send an email to the mock sendgrid service, which can be found at http://localhost:7000 by default.
  - Choose the HTML view to see the link being sent.
### Stop the development environment
- Run `docker-compose down`

  ```bash
  docker-compose down
  # add -v flag to remove the volumes: docker-compose down -v
  ```
## `docker-compose.yml` description
- Contains 6 services to run the software
  - client
  - api
  - db-migrate
  - db-init
  - db
  - sendgrid
- One volume to persist database data

  ```yaml
  volumes:
    sqlvolume:
  ```
- Reads the `.env` file from the same directory
  - All variables used in `docker-compose.yml` are defined in `.env`.
  - The `env_file` option in the services specifies the `.env` file for each service.
- Client service

  ```yaml
  client:
    build:
      context: ./client
      dockerfile: Dockerfile.dev
    ports:
      - "$CLIENT_EXPOSED_PORT:$CLIENT_PORT"
    env_file:
      - .env
    volumes:
      - ./client/:/usr/app
    depends_on:
      api:
        condition: service_healthy
        restart: true
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:$CLIENT_PORT || exit 1"]
      interval: 10s
      retries: 10
      start_period: 10s
      timeout: 3s
  ```

  - Builds the `Dockerfile.dev` image from the `/client` directory
  - Runs on port `CLIENT_EXPOSED_PORT`
  - The container uses environment variables from the `.env` file
  - Mounts the `/client` directory to `/usr/app`
  - Depends on the `api` service being healthy
  - Runs a healthcheck every 10 seconds
    - Checks that `http://localhost:$CLIENT_PORT` is reachable
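The healthcheck semantics — try the command, wait `interval`, and give up after `retries` failures — can be approximated in plain shell. This is an illustrative sketch, not part of the project:

```shell
# retry_until_healthy CMD... : rerun CMD up to $RETRIES times,
# sleeping $INTERVAL seconds between attempts, like a compose healthcheck
RETRIES=10
INTERVAL=1

retry_until_healthy() {
  attempt=1
  while [ "$attempt" -le "$RETRIES" ]; do
    if "$@"; then
      echo "healthy after $attempt attempt(s)"
      return 0
    fi
    attempt=$((attempt + 1))
    sleep "$INTERVAL"
  done
  echo "unhealthy after $RETRIES attempts"
  return 1
}

# stand-in for the real check, e.g.:
#   wget --no-verbose --tries=1 --spider http://localhost:$CLIENT_PORT
retry_until_healthy true
```

Compose runs the same loop inside the container; `start_period` just delays when failures start counting.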
- API service (server)

  ```yaml
  api:
    build:
      context: ./server
      dockerfile: Dockerfile.dev
    # uncomment to debug on localhost
    # ports:
    #   - "5002:$PORT"
    env_file:
      - .env
    volumes:
      - ./server/:/usr/app
    depends_on:
      db-migrate:
        condition: service_completed_successfully
      db:
        condition: service_healthy
        restart: true
      sendgrid:
        condition: service_started
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:$PORT/api/calculations || exit 1"]
      interval: 10s
      retries: 10
      start_period: 10s
      timeout: 3s
  ```

  - Builds the `Dockerfile.dev` image from the `/server` directory
  - Runs on port 5002 if the `ports` mapping is uncommented
    - There doesn't seem to be a good reason to do this.
  - The container uses environment variables from the `.env` file
  - Mounts the `/server` directory to `/usr/app`
  - Depends on the `db` service being healthy
  - Depends on the `db-migrate` service completing successfully
  - Depends on the `sendgrid` service being started
  - Runs a healthcheck every 10 seconds
    - Checks that `http://localhost:$PORT/api/calculations` is reachable
- Database migration service

  ```yaml
  db-migrate:
    image: redgate/flyway
    volumes:
      - ./server/db/migration:/flyway/sql
    depends_on:
      db-init:
        condition: service_completed_successfully
      db:
        condition: service_healthy
        restart: true
    command:
      [
        "-user=${SQL_USER_NAME:?error}",
        "-password=${SQL_PASSWORD:?error}",
        "-url=jdbc:sqlserver://${SQL_SERVER_NAME:?error}:${SQL_SERVER_PORT:?error};databaseName=${SQL_DATABASE_NAME:?error};trustServerCertificate=true",
        # uncomment for debug messages
        # "-X",
        "migrate",
      ]
  ```

  - Uses the `redgate/flyway` image
  - Mounts the `/server/db/migration` directory to `/flyway/sql`, where flyway will look for migration scripts
  - Depends on the `db-init` service completing successfully
  - Depends on the `db` service being healthy
  - Runs the flyway `migrate` command to apply database migrations
- Database initialization service

  ```yaml
  db-init:
    image: "mcr.microsoft.com/mssql/server:2019-latest"
    volumes:
      - ./server/db:/db
    depends_on:
      db:
        condition: service_healthy
        restart: true
    entrypoint:
      [
        "/bin/sh",
        "-c",
        '/opt/mssql-tools18/bin/sqlcmd -C -S ${SQL_SERVER_NAME:?error} -U ${SQL_USER_NAME:?error} -P ${SQL_PASSWORD:?error} -Q "IF NOT EXISTS(SELECT * FROM sys.databases WHERE name = ''tdmdev'') CREATE DATABASE tdmdev; ELSE PRINT ''Database \"${SQL_DATABASE_NAME:?error}\" already exists. Operation successful.''" -b',
      ]
  ```

  - Uses the same image as the `db` service
  - Mounts the `/server/db` directory to `/db`
  - Depends on the `db` service being healthy
  - Runs a `sqlcmd` command to create the database if it doesn't exist
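With the shell quoting and variable expansion stripped away (and assuming `SQL_DATABASE_NAME=tdmdev`), the T-SQL that the entrypoint runs is essentially:

```sql
-- idempotent database creation: safe to run on every startup
IF NOT EXISTS (SELECT * FROM sys.databases WHERE name = 'tdmdev')
    CREATE DATABASE tdmdev;
ELSE
    PRINT 'Database "tdmdev" already exists. Operation successful.';
```

The `-b` flag makes `sqlcmd` exit with a non-zero status on error, which is what lets `depends_on: condition: service_completed_successfully` detect failure.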
- Database

  ```yaml
  db:
    image: "mcr.microsoft.com/mssql/server:2019-latest"
    volumes:
      - sqlvolume:/var/opt/mssql
    ports:
      # exposed on localhost 1434
      - "1434:1433"
    environment:
      ACCEPT_EULA: Y
      SA_PASSWORD: Dogfood1!
      MSSQL_PID: Express
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "/opt/mssql-tools18/bin/sqlcmd -C -S ${DOCKER_DB_SERVER_NAME:?error} -U ${DOCKER_DB_USER_NAME:?error} -P ${DOCKER_DB_PASSWORD:?error} -Q 'SELECT 1' -b",
        ]
      interval: 10s
      retries: 10
      start_period: 10s
      timeout: 3s
  ```

  - Uses the SQL Server 2019 image
  - Mounts the volume `sqlvolume` to `/var/opt/mssql`
  - Exposed on port 1434 for external access from DBeaver
  - Sets environment variables for the container
  - Runs a healthcheck
    - Checks that a `sqlcmd` command can connect to the database
- Sendgrid

  ```yaml
  sendgrid:
    image: ghashange/sendgrid-mock:1.12.0
    ports:
      - "${SENDGRID_EXPOSED_PORT:-7000}:3000"
    environment:
      API_KEY: ${SENDGRID_API_KEY:?error}
  ```

  - Uses the `ghashange/sendgrid-mock` image
  - Exposed on port 7000 unless `SENDGRID_EXPOSED_PORT` is set
  - Sets the `API_KEY` environment variable
    - The `API_KEY` used in the API server needs to match this to authenticate with sendgrid, and it does, because they're using the same environment variable.
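The `${SENDGRID_EXPOSED_PORT:-7000}` and `${SENDGRID_API_KEY:?error}` forms used throughout the compose file are standard parameter expansions; a quick illustration in plain shell:

```shell
# ${VAR:-default} substitutes a default when VAR is unset or empty
unset SENDGRID_EXPOSED_PORT
echo "${SENDGRID_EXPOSED_PORT:-7000}"   # -> 7000

SENDGRID_EXPOSED_PORT=8025
echo "${SENDGRID_EXPOSED_PORT:-7000}"   # -> 8025

# ${VAR:?message} aborts when VAR is unset or empty, which is how the
# compose file makes a missing .env value fail fast instead of silently
(unset SENDGRID_API_KEY; echo "${SENDGRID_API_KEY:?error}") 2>/dev/null \
  || echo "missing API_KEY aborts the command"
```

docker-compose applies the same rules when interpolating `.env` values into `docker-compose.yml`.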
## Helpful tools/commands
- LazyDocker tool to manage docker containers in the terminal
- Dive utility to inspect docker images
- Inspect the `docker-compose` config
  - It's helpful for looking at the docker-compose file after the environment variables have been expanded.

  ```bash
  docker-compose config
  ```
- Inspect `docker-compose` logs

  ```bash
  docker-compose logs
  ```
- Debug docker builds

  ```bash
  # Build the api image
  docker-compose build api --no-cache --progress=plain
  ```

  - Use the `--no-cache` option to build the image without using the cache.
  - Use the `--progress=plain` option to avoid the progress bar.

  See this page about debugging docker builds.
- Start a shell in a running container

  ```bash
  # Start a shell in the running api container.
  # The alpine image doesn't have bash, so we need to use the `sh` command.
  docker-compose exec api sh

  # Start a shell using the docker image for the api container.
  # `--rm` removes the container after the shell is closed.
  docker-compose run --rm api sh
  ```
## Resources
- flyway docker quick start
- docker-compose startup order
- run SQL Server image with Docker
- flyway check with SQL Server as an example of assigning the url value for SQL Server
- docker CMD vs. ENTRYPOINT
- flyway Dockerfile
- use env vars in flyway parameters
- flyway parameters reference
## Decisions made
- Used the `node:lts-alpine` base image for the client and api server.
  - This is the same as in the production Dockerfile.
  - The `volta` section in `package.json` has node 20, which also works.
- Used the `SQL Server 2019` base image for the database.
  - This is from the wiki instructions, although I've also seen 2017 elsewhere.
  - We might want to move to `SQL Server 2022` in the future, but it depends on what version the production database server runs. It's straightforward to change.
- Used `redgate/flyway` rather than `node-flywaydb` or `node-flyway`.
  - The good:
    - It has a docker image option for ease of use with docker-compose.
    - It's the underlying application which the node packages use.
  - The bad:
    - `redgate/flyway` is not callable from node like the other options.
    - Our currently used `node-flywaydb`'s repo was archived on 2024-01-20, meaning it's considered outdated.
    - `node-flyway` is maintained but doesn't yet have documentation on cli usage.
  - The bottom line:
    - It's the best choice for docker usage.
    - The other options are either outdated or don't support the cli.
    - We really just want to call it from the cli anyway, even though we have a node app.
- Added the `sendgrid` service
  - The good:
    - This enables running sendgrid email tests locally.
      - No need to connect to the network for development.
  - The bottom line:
    - Everything works locally inside docker containers.
- Added the `db-init` service
  - The good:
    - This enables the database to be created if it doesn't exist.
      - It saves a manual step.
    - It uses the same database image as the db.
      - It saves space and contains all the db utilities.
  - The bad:
    - This is why we need to move the `.env` file to the root directory.
      - The command being run makes use of the server environment variables for the database connection.
  - The bottom line:
    - It saves a manual step.
    - We can simplify the setup documentation.
- Added the `db-migrate` service
  - The good:
    - This enables running database migrations.
      - It saves a manual step.
    - It uses the official `redgate/flyway` image, which contains the flyway cli.
  - The bad:
    - This is why we need to move the `.env` file to the root directory.
      - The command being run makes use of the server environment variables for the database connection.
  - The bottom line:
    - It saves a manual step.
    - We can simplify the setup documentation.
- Use a docker volume for the database data.
  - The good:
    - This makes the database persist between container restarts.
      - The database server can be upgraded without losing data, because the data is stored in a volume.
  - The bottom line:
    - It lets us test database server upgrades more easily.
- External port number mappings different from the defaults.
  - The good:
    - It lets us use/test different services from outside the containers.
    - It allows us to potentially run the local development environment and the docker containers side-by-side.
    - These are values convenient to me, but they can be changed.
      - `client` is exposed on port `CLIENT_EXPOSED_PORT` => 3001
        - We need this to use the client from a web browser.
      - `api` is not exposed since I can't think of a use case for it
        - If it ever becomes necessary, I would suggest port 5002 since the local server port is 5001
      - `db` is exposed on port `DB_EXPOSED_PORT` => 1434
        - An external client can connect to the database server.
      - `sendgrid` is exposed on port `SENDGRID_EXPOSED_PORT` => 7000
        - This is from the example I used and for no other reason.
  - The bottom line:
    - It keeps the existing local development environment setup working and allows running the docker containers side-by-side.
- Use docker cache mount to reduce rebuild time.
  - The good:
    - This makes the rebuilds faster.
      - It caches the downloaded data, so it doesn't have to be downloaded again.
  - The bottom line:
    - It makes the rebuilds faster.
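A typical BuildKit cache mount for npm looks like the fragment below; this is illustrative only — the exact target path and install command in `Dockerfile.dev` may differ:

```dockerfile
# persist npm's download cache across builds so unchanged
# dependencies aren't re-downloaded on every rebuild
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```

The cache mount lives outside the image layers, so it speeds up rebuilds without bloating the final image.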
- Convert `vite.config.js` to read environment variables.
  - The good:
    - This makes it possible to run vite in different environments.
      - The old values are converted into defaults.
  - The bottom line:
    - This is necessary to make it run both on the local host and in docker.
- Make `.env.example` default to the docker environment.
  - The file is moved from `server/.env.example` to `./.env.example`.
  - The good:
    - This makes it easier to run in docker.
      - It doesn't have the values needed to run locally anyway.
  - The bottom line:
    - The docker environment becomes a 1-step setup for external developers.
- Add a commented-out "Docker" section to the gDrive `dotenv` file
  - The good:
    - This maintains a non-breaking change.
      - It keeps the local development environment settings as the default while having the docker settings as an option.
      - Developers can uncomment the section to run just the app or the whole stack in docker.
  - Future work:
    - The docker option should be the default if it works well.
  - The bottom line:
    - This adds docker as an option for TDM developers.
- There are 6 containers from 5 images.
  - The good:
    - This is how a docker environment should work.
      - Each service runs in its own container.
  - The bad:
    - The images take up a lot of space: ~3.5GB total.
      - SQL Server 2019 = 1.49GB
      - flyway = 1.09GB
      - sendgrid-mock = 214MB
      - api = 279MB (base `node:lts-alpine` image is ~130MB)
      - client = 461MB (base `node:lts-alpine` image is ~130MB)
  - Future work:
    - Could add an `adminer` container for database management.
      - It moves the functionality into the docker environment, where we currently have to install `DBeaver` or another client separately.
      - It's low priority but an option.
  - The bottom line:
    - The docker environment becomes a 1-step setup for external developers.
- Replaced `dotenv` with `env-cmd`.
  - The good:
    - It moves the environment variable loading logic from the application to the command line.
      - Just pass the `.env` file path to `env-cmd` before starting the application.
    - It allows us to move the `.env` file from the client and server directories into the project root directory.
      - This lets `docker-compose` use the same `.env` file as the containers do. It eliminates repeating the same variables and values in multiple places. For example, `PORT` is used by both `docker-compose` and the `api` container.
  - Future work:
    - `dotenvx` is another possibility that is similar to `env-cmd` but made by the `dotenv` author and has more features. The documentation is very good.
  - The bottom line:
    - It doesn't affect the production build.
      - The production environment variables are loaded separately.
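What `env-cmd` does can be sketched in plain shell for simple `KEY=VALUE` files. This is illustrative only — the real tool also handles comments, quoting, and multiple file formats:

```shell
# write a throwaway .env with the kind of values this project uses
cat > /tmp/demo.env <<'EOF'
PORT=5001
SQL_SERVER_NAME=db
EOF

# export every assignment made while sourcing the file
set -a
. /tmp/demo.env
set +a

# stand-in for: env-cmd -f .env node server.js
sh -c 'echo "server would listen on port $PORT"'
```

The key point is that the variables are injected into the child process's environment before the application starts, so the application itself no longer needs `dotenv` logic.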
- Moved `(root)/server/.env.example` to `(root)/.env.example`
  - The good:
    - This simplifies the `docker-compose` call from the command line.
      - `docker-compose up` vs. `docker-compose -e .env -e client/.env -e server/.env up`
      - `docker-compose` by default looks for `.env` in the current directory. If it used values in the client and server `.env` files, it would have to load those files using the `-e` option.
    - This reduces redundant env variables at different levels.
      - The client and server both use the same `PORT` env variable for the server port.
      - docker-compose and the server both use the same `SQL_SERVER_NAME` env variable.
  - The bad:
    - It makes both projects use the same `.env` file.
      - There's env info leakage, but that's probably okay for local development. Production doesn't use `.env` files.
    - It makes the client and server projects depend on being under the parent directory.
      - This is easy to change back later if necessary, by taking away the `../` from `../.env` in the client and server `package.json` files.
  - The bottom line:
    - It doesn't affect the production build.
      - The production environment variables are loaded separately.
- Added docker-compose service dependencies.
  - The good:
    - This imposes the correct startup order.
      - It makes sure that the database server is responsive, the database is created, and the migrations are run before the server starts.
      - The ordering is `db` => `db-init` => `db-migrate` => `api` => `client`.
      - Also `sendgrid` => `api`.
  - The bad:
    - This is why we need to move the `.env` file to the root directory.
      - The `db` `healthcheck` makes use of the server environment variables for the database connection.
      - The `client` `healthcheck` command makes use of the client environment variables for the client connection.
      - The `api` `healthcheck` command makes use of the server environment variables for the api endpoint connection.
  - The bottom line:
    - It makes containers always run in the same order.