Docker Compose - ghdrako/doc_snipets GitHub Wiki

kompose can translate Docker Compose files into Kubernetes configurations.


Install

On Mac and Windows systems, docker-compose is already a part of Docker Desktop. On Linux systems, you need to install the docker-compose CLI tool after installing Docker Engine.

  1. Download the binary to /usr/local/bin with the following command in your Terminal:
$ sudo curl -L "https://github.com/docker/compose/releases/\
download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o \
/usr/local/bin/docker-compose

_If you have Python and pip installed, you can also use pip to install docker-compose with the following command:_

pip install docker-compose
  2. Make the downloaded binary executable with the following command:
$ sudo chmod +x /usr/local/bin/docker-compose
  3. Test the CLI and installation with the following command in the Terminal on all operating systems:
$ docker-compose version

Alias

$ alias dc='docker-compose'
PS C:\> New-Alias dc docker-compose.exe

Command

Main actions available for docker-compose:

  • config: This action checks the Compose YAML file and shows a parsed review of it. It can be used in combination with the --services or --volumes arguments to retrieve only those objects. --profile can be used to retrieve information about a specific set or group of objects.
  • images: This shows the images defined in our Compose YAML file. This is useful if you are wondering whether images need to be built or are already present in your environment.
  • build: Images created using docker-compose will include the project’s name; hence, they will be identified as <PROJECT_NAME>-<SERVICE_NAME>. A Dockerfile should be included in all component directories, although we can override the building of certain images by specifying an image repository directly. We can modify the build context and the Dockerfile filename using the context and dockerfile keys, respectively. If our Dockerfile contains various targets, we can define which one will be used for building the service’s image by using the target key. Arguments can also be passed to the build process to modify the environment by using the args key with a list of the key-value pairs that should be included.
  • pull/push: The defined images can be downloaded all at once, and the built images can be pushed to remote registries once they are created.
  • up: This action is equivalent to executing docker run for each component/service defined in our Compose YAML file. By default, docker compose up will start all the containers at once and our terminal will attach to all containers’ outputs, which may be interesting for testing but not for production (our terminal will be stuck attached to the processes, and we must use Ctrl + P + Q to detach from them). To avoid this situation, we should use the -d or --detach argument to launch our containers in the background. docker-compose also supports the run action, but this is generally used for running specific services one at a time.
  • down: This action, as expected, does the opposite of up; it will stop and remove all the containers that are running. It is important to understand that new containers will be created if they were previously removed by using this action. Any persistent data must be stored outside of the container’s life cycle. To completely remove your application, remember to always remove the associated volumes. We can add the --volumes argument to force the removal of any associated volume.
  • create/run/start/stop/rm: Equivalent to the corresponding docker commands.
  • ps: As we are running multiple containers for a project, this action lists all associated containers. Containers’ performance can be reviewed using docker-compose top.
  • exec: This option allows us to execute a command attached to one of the containers (in this case, a project’s service).
  • logs: We can use docker-compose logs to retrieve all the project’s container logs. This is very useful for retrieving all application logs by using a single point of view and just one command. Using --follow continuously follows all of them. We can retrieve just one service log by adding the service name as an argument.

Important note: Although you will usually execute docker-compose actions against all containers, it is possible to specify one service at a time by adding the specific service name: docker-compose <ACTION> <SERVICE>. This option works with almost all commands and is very useful for debugging when things go wrong with some containers.

docker compose ps # to list service name
docker compose up -d --no-deps --build <service_name>  # rebuild single service

Build

docker-compose --project-name test build --progress quiet --quiet
docker compose up --build 2>&1 | tee debug.log

The --build argument makes Docker Compose build each of our images before instantiating containers from them. The first time you invoke the up command, it builds your images anyway. On subsequent runs (without the --build argument), the up command just starts containers from the previously built images (this can be a quick way to restart if you don’t want to rebuild). This means that if we change some code in our microservice and invoke the up command again, our changes won’t be included unless we use the --build argument.

Push

docker-compose --project-name test push --include-deps app

It is useful to use the --include-deps argument to also push the images of the services defined under the depends_on key. This ensures that your service will not miss any required images when it is executed.

docker-compose --help
docker-compose ls      # List running compose projects
docker-compose up
docker-compose up -d   # up in detached/daemon mode
docker-compose ps
docker-compose down # stop all containers and remove the containers and networks that were created by docker-compose up (add --volumes to also remove volumes)
docker-compose -f docker-compose-example.yml build publisher  # build only the publisher service's image without starting any containers
docker-compose -f docker-compose.yml -f docker-compose-simple.yml up # using inheritance; settings from docker-compose-simple.yml override docker-compose.yml
docker-compose logs <options> <service>
docker-compose logs
docker-compose logs --tail=n
docker-compose logs -f      # continue print log - follow mode
docker-compose logs -f service-name   # continue print log specific service
docker-compose stop
docker-compose start # resume the stopped containers
docker-compose exec <service> <command> # lets you run ad hoc commands on any of the containers.
docker-compose config # validate the Compose file - return parsed version
docker compose rm     # Removes stopped service containers.
docker compose rm -s  # --stop  Stop container before removing if needed
docker compose rm -v  # --volume Remove any anonymous volumes attached to containers 
docker compose down   # by default remove 1 Containers for services defined in the Compose file; 
                      #                   2 Networks defined in the networks section of the Compose file
                      #                   3 The default network, if one is used
docker compose down --rmi all # Remove images used by services ("all" or "local")
docker compose down -v   # --volumes Remove named volumes declared in the volumes section of the Compose file and anonymous volumes attached to containers.
docker compose down --remove-orphans # Remove containers for services not defined in the Compose file.

Docker Compose works with the docker-compose.yaml and docker-compose.yml file extensions by default. Multiple Compose files can be used at the same time, and the order in which they appear will define the final file specification to use. Values will be overridden by the latest ordered file. We can also use variables that can be expanded in runtime by setting our environment variables. This will help us use a general file with variables for multiple environments.

You may have more than one YAML file for defining your application’s components, but the docker-compose command will merge them into a single definition. We will simply use docker-compose up to launch our complete application, although we can manage each component separately by simply adding its service’s name. docker-compose will refresh the components’ status and will only recreate those that have stopped or are non-existent.

Containers will always be created if they don’t exist when we execute docker-compose up. But in some cases, you will need to execute a fresh start of all containers (maybe some non-resolved dependencies in your code may need to force some order or reconnection). In such situations, we can use the --force-recreate argument to enforce the recreation of your services’ containers.

The basic schema of a Compose YAML file will be presented as follows:

services:
  service_name1:
    <SERVICE_SPECS>
  ...
  service_nameN:
    <SERVICE_SPECS>
volumes:
  volume_name1:
    <VOLUME_SPECS>
  ...
  volume_nameN:
    <VOLUME_SPECS>
networks:
  network_name1:
    <NETWORK_SPECS>
  ...
  network_nameN:
    <NETWORK_SPECS>

Each service will need at least a container image definition or a directory where its Dockerfile is located.

Section

  • version - the latest syntax version is 3. Currently, the version key is only informative, kept for backward compatibility. If some keys are not allowed in the current Compose release, we will be warned, and those keys will be ignored. At the time of writing, Compose YAML files **do not require this version key**.
  • services - This section describes the Docker containers that will be built if needed and will be started by docker-compose.
  • networks - This section describes the networks that will be used by the services.
  • volumes - This section describes the data volumes that will be mounted to the containers in services.

Services

For the services section, there are two essential options to create containers.

  • The first option is to build the container from a Dockerfile:
$ tree
|
--- docker-compose.yml
|
--- server
    |
    ---- Dockerfile-server

Use Dockerfile

version: "3"
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile-server     # only necessary when using a custom Dockerfile name ("Dockerfile-server" in this example); by default Compose looks for "Dockerfile"
  • Use an image from the Docker registry
version: "3"
services:
  server:
    image: nginx

Containers associated with a service will be named after the service name, including the project as a prefix. Each container is considered an instance for the service and will be numbered; hence, the final container will be named <PROJECT_NAME>-<SERVICE_NAME>-<INSTANCE_NUMBER>.

We can have more than one instance per service. This means that multiple containers may run for a defined service. We will use --scale SERVICE_NAME=<NUMBER_OF_REPLICAS> to define the number of replicas that should be running for a specific service.

Dynamic names will be used for the service containers, but we can use the container_name key to define a specific name. This may be interesting for accessing a container name from other containers, but this service wouldn’t be able to scale because container names are unique for each container runtime; thus, we cannot manage replicas in this situation.
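As a sketch (the service and container names here are hypothetical), a service pinned to a fixed container name looks like this; note that such a service can no longer be scaled:

```yaml
version: "3"
services:
  cache:
    image: redis
    container_name: my-app-cache   # fixed name: other containers can reach it as "my-app-cache",
                                   # but "docker-compose up --scale cache=2" will now fail,
                                   # because container names must be unique
```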

Network

Docker Compose creates a single network by default, and each container connects to this network. In addition, containers can connect to other containers using hostnames.

docker-compose.yaml file in the webapp folder:

version: "3"
services:
  server:
    image: nginx
  db:
    image: postgres
    ports:
      - "8032:5432"

docker-compose creates the server and db containers and joins them to the webapp_default network under the hostnames server and db.

The server container can connect to the database using its container port and hostname as follows: postgres://db:5432. Similarly, the database is reachable from the host machine on host port 8032 as follows: postgres://localhost:8032.

An internal DNS is available for all communications on the internal network, and services are published under their defined names. This is key to understanding how your application’s components will find each other. You shouldn’t use the instance name (<PROJECT_NAME>_<SERVICE_NAME>_<INSTANCE>); instead, use the service’s name to locate a defined service. For example, we will use db in our app component’s connection string. Take care: the instance name will also resolve, but it shouldn’t be used. Relying on it will break the portability and dynamism of your applications if you move them to clustered environments where instance names may not be usable, or if you run more than one replica of a service.
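For example, a minimal sketch (the app image and variable name are illustrative) where the app component reaches the database through the service name rather than an instance name:

```yaml
version: "3"
services:
  db:
    image: postgres
  app:
    image: my-app                  # hypothetical application image
    environment:
      # use the service name "db", never an instance name like "project-db-1"
      - DATABASE_URL=postgres://db:5432
```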

Volumes

version: "3"
services:
  database:
    image: my-db-service
    volumes:
      - data:/database
  backup:
    image: my-backup-service
    volumes:
      - data:/backup
volumes:
  data:

docker-compose will create a volume named data using the default volume plugin in Docker Engine. This volume will be mounted to the /database path of the database container and the /backup path of the backup container.

version: "3"
services:
  redis-backup:
    image: bash
    entrypoint: ["/snapshot-backup.sh"]
    depends_on:
      - redis
    environment:
      - BACKUP_PERIOD=10
    volumes:
      - ./snapshot-backup.sh:/snapshot-backup.sh
      - redis-data:/data:ro
      - backup:/backup
volumes:
  redis-data:
  backup:

We have two volumes and one file bind mount. The redis-data volume is mounted read-only.

It is possible to use bind mounts as well. Instead of referring to the named volume, all you have to do is provide the path.

services:
  database:
    image: mysql
    environment:
       MYSQL_ROOT_PASSWORD: dontusethisinprod
    volumes:
      - ./dbdir:/var/lib/mysql

The volume key has a value of ./dbdir:/var/lib/mysql, which means Docker will mount dbdir in the current directory to the /var/lib/mysql directory of the container. Relative paths are considered in relation to the directory of the Compose file.


Environment Variables

The method of configuring services in Docker Compose is to set environment variables for the containers.

There are three methods of defining environment variables in Docker Compose, with the following priority:

  1. Using the Compose file
  2. Using shell environment variables
  3. Using the environment file

If the environment variables do not change very often but are required by the containers, it is better to store them in docker-compose.yaml files. If there are sensitive environment variables, such as passwords, it is recommended to pass them via shell environment variables before calling the docker-compose CLI. However, if the number of the variables is high and varies between the testing, staging, or production systems, it is easier to collect them in .env files and pass them into docker-compose.yaml files.

Set variable in docker-compose.yaml

server:
  environment:
    - LOG_LEVEL=DEBUG
    - METRICS_PORT=8444

Get variable from shell environment

server:
  environment:
    - HOSTNAME

From env file

database.env:
DATABASE_ADDRESS=mysql://mysql:3535
DATABASE_NAME=db


server:
 env_file:
    - database.env

services:
  app:
    image: mysql
    env_file:
      - common.env
      - app.env
      - secrets.env
...
services:
  app:
    ...
    configs:
      - source: appconfig
        target: /app/config
        uid: '103'
        gid: '103'
        mode: 0440
volumes:
  ...
networks:
  ...
configs:
  appconfig:
    file: ./appconfig.txt

In this example, we replaced all the app component’s environment variables with a config object, which will be mounted inside the container.

By default, config object files will be mounted in / if no target key is used. Although there is a short version for mounting config object files inside service containers, it is recommended to use the presented long format, as it allows us to specify the complete paths for both source and target, as well as the file’s permissions and ownership.
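For comparison, the short syntax only names the config object, so the file is mounted under / using the source name with default ownership and permissions. A sketch reusing the same hypothetical appconfig object:

```yaml
services:
  app:
    image: my-app        # hypothetical image
    configs:
      - appconfig        # short syntax: mounted as /appconfig with default ownership
configs:
  appconfig:
    file: ./appconfig.txt
```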

Secrets

Secret objects are only available in swarm mode. This means that even if you are just using a single node, you must execute docker swarm init to initialize a single-node Swarm cluster. This will allow us to create secrets, which are stored as cluster objects by the Docker container engine. Compose can manage these objects and present them in our service’s containers. By default, secrets will be mounted inside containers in the /run/secrets/<SECRET_NAME> path, but this can be changed, as we will see in the following example. First, we create a secret with the database password, used in the db service, by using docker secret create:

$ printf "mysecretdbpassword" | docker secret create postgres_pass -
dzr8bbh5jqgwhfidpnrq7m5qs

Then, we can change our Compose YAML file to include this new secret:

…
  db:
    build: simplestdb
    image: myregistry/simplest-lab:simplestdb
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_pass
    secrets:
      - postgres_pass
…
secrets:
  postgres_pass:
    external: true

In this example, we created the secret using the standard output and we used external: true to declare that the secret is already set and the container runtime must use its key store to find it. We could have used a file instead as the source. It is also common to integrate some files as secrets inside containers by adding them in the following format:

secrets:
  my_secret_name:
    file: <FULL_PATH_TO_SECRET_FILE>

The main difference here is that you may be using a plain text file as a secret that will be encrypted by the Docker container runtime and mounted inside your containers. Anyone with access to this plain text file will read your secrets. Using the standard output increases security because only the container runtime will have access to the secret object. In fact, the Docker Swarm store can also be encrypted, adding a new layer of security.

From default env file

You can set default values for environment variables using a .env file, which Compose automatically looks for in the project directory (the parent folder of your Compose file). Values set in the shell environment override those set in the .env file.

Note: the .env file feature only works with the docker-compose up command; it does not work with docker stack deploy.
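Compose also supports shell-style default values during variable substitution, which pairs well with an optional .env file. A sketch (the variable name is illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      # uses WEB_PORT from the shell environment or the .env file,
      # falling back to 8080 if it is unset
      - "${WEB_PORT:-8080}:80"
```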


Service Dependency

The depends_on key is only available in Compose YAML files. When you deploy your applications in Docker Swarm or Kubernetes, dependencies can’t be managed in the same way.

Compose manages dependencies for you, but this feature **does not exist in orchestrated environments and your applications should be prepared for that**. You may, for example, verify the connectivity with your database component before executing certain tasks, or you might handle the exceptions in your code related to the loss of this connection. Your application component may need several other components, and you should decide what your application has to do if one of them is down. Key application components should stop your code in case full application functionality breaks.

Caution: With the depends_on key, Docker Compose will only bring up the services in the defined order; it will not wait for each service to be ready before bringing up the next. In the first example below, Docker Compose starts the containers in the order init, pre, and main.

version: "3"
services:
  init:
    image: busybox
  pre:
    image: busybox
    depends_on:
        - "init"
  main:
    image: busybox
    depends_on:
        - "pre"

version: "3"
services:
  clean:
    image: busybox
    command: "rm -rf /static/index.html"
    volumes:
      - static:/static
  init:
    image: busybox
    command: "sh -c 'echo This is from init container >> /static/index.html'"
    volumes:
      - static:/static
    depends_on:
    - "clean"
  pre:
    image: busybox
    command: "sh -c 'echo This is from pre container >> /static/index.html'"
    volumes:
      - static:/static
    depends_on:
    - "init"
  server:
    image: nginx
    volumes:
      - static:/usr/share/nginx/html
    ports:
      - "8080:80"
    depends_on:
    - "pre"
volumes:
  static:
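Note that plain depends_on, as used above, only orders startup. In Compose versions that support the long depends_on syntax, it can be combined with a healthcheck so that a service waits until its dependency reports healthy, not just started. A sketch assuming a postgres dependency (the app image is hypothetical):

```yaml
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
  app:
    image: my-app
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start
```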

Inheritance using multiple configuration files

Docker Compose allows us to apply multiple YAML files in a sequence where the next configuration overrides the last. That way, we can have separate override files for various environments and manage multiple environments using a set of these files.

docker-compose -f docker-compose.yml -f docker-compose-simple.yml up -d
docker-compose -f docker-compose.yml -f docker-compose-simple.yml -f docker-compose-simple-dev.yml up -d

We can use three or more configuration files as well. Each additional file specified on the command line further extends the containers and options specified within.
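As a sketch of how an override file works (filenames and values are illustrative), a later file only needs to contain the keys it changes:

```yaml
# docker-compose.yml (base)
services:
  web:
    image: nginx
    ports:
      - "8080:80"
```

```yaml
# docker-compose-dev.yml (override: only the changed keys)
services:
  web:
    image: nginx:alpine   # single-value keys such as image are replaced by the later file;
                          # multi-value keys such as ports are merged instead
```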

Do a Clean Restart docker

Procedure:

  • Stop the container(s) using the following command:
docker-compose down
  • Delete all containers using the following command:
docker rm -f $(docker ps -a -q)
  • Delete all volumes using the following command:
docker volume rm $(docker volume ls -q)
  • Restart the containers using the following command:
docker-compose up -d

Example

version: "3"
services:
  postgres:                                   # service name used by containers to talk to each other instead of ephemeral IP addresses, so the connection string in other containers will be postgresql://postgres:5432
    container_name: class_postgres_4          # human-readable container name
    image: postgres:11.1
    restart: unless-stopped                   # restart policy applied when the container exits; default "no", other option "always";
    networks:                                 # "unless-stopped" - restart the container except when stopped with docker container stop
      - my-net                                # network to attach the container to
    ports:
      - "15432:5432"                          # <host port>:<container port>
    environment:                              # section defining environment variables set in the container
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "${MY_PG_PASS}"      # the value of MY_PG_PASS is looked up in the default .env file - defined below
      POSTGRES_DB: "gogs"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres", "-d", "gogs"]  # pg_isready is a built-in command
      interval: 10s                                                # how frequently to run the command above
      timeout: 5s                                                  # how long to wait for a response
      retries: 5                                                   # how many times to retry on failure
  gogs:
    container_name: class_gogs_4
    image: gogs/gogs:0.11.66
    restart: unless-stopped
    networks:
      - my-net
    depends_on:                                                    
      # https://docs.docker.com/compose/compose-file/compose-file-v2/#depends_on
      # https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
      - postgres                                                 # gogs depends on postgres - this is why we added a health check to postgres, so gogs can be sure it is running; without the health check, the gogs container would start immediately after postgres
    ports:
      - "10022:22"
      - "10090:3000"
    volumes:
     - "../../../gogs/logs:/app/gogs/log"
  registry:
    container_name: class_registry_4
    image: registry:2.6.2
    restart: unless-stopped
    networks:
      - my-net
    ports:
      - "5000:5000"
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: "/certs/domain.crt"
      REGISTRY_HTTP_TLS_KEY: "/certs/domain.key"
      REGISTRY_HTTP_SECRET: "iuehfio73bt8dobq"
    volumes:
     - "../../../registry/config.yaml:/etc/docker/registry/config.yaml"
     - "../../../registry/htpasswd:/htpasswd"
     - "../../../registry/data:/var/lib/registry"
     - "../../../registry/certs:/certs"
  jenkins:
    container_name: class_jenkins_4
    build:                                                # build section
      context: ../../jenkins/docker                       # context points to the directory containing the Dockerfile used to build the image
    # If you can't build, use this image: `spkane/dc-201-jenkins:latest`
    #image: spkane/dc-201-jenkins:latest
    restart: unless-stopped
    networks:
      - my-net
    ports:
      - "10091:8080"
      - "50000:50000"
    volumes:
     - "../../jenkins/data:/var/jenkins_home"
     - "/var/run/docker.sock:/var/run/docker.sock"        # socket file used to connect to the Docker server
networks:
  my-net:
    driver: bridge               # create a custom bridge network, separate from the outside

echo "MY_PG_PASS=passw0rd" > ./.env  # linux
Add-Content ./.env "MY_PG_PASS=passw0rd"  # windows (PowerShell)

Dependencies in production

When you change a particular container and want to redeploy it, docker-compose also redeploys any dependencies. In production, this might not be something that you wish to do, so you can override this behavior by using the following command:

$ docker-compose up --no-deps -d <container_service_name>

Profiles

By using profiles, we will be able to define which objects should be created and managed. This can be very useful under certain circumstances – for example, we can define one profile for production and another for debugging. In this situation, we will execute docker-compose --profile prod up --detach to launch our application in production, while using --profile debug will run some additional components/services for debugging. We will use the profiles key in our Compose YAML file to group services, and a service can belong to multiple profiles. We will use a string to define these profiles and use it later in the docker-compose command line. If no profile is specified, docker-compose will execute the actions without using any profile (only objects with no profile will be used).

version: "3.9"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secretpassword  
    profiles:
      - dev   
  mysqldb:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: testrootpassword
      MYSQL_DATABASE: test
      MYSQL_USER: test
      MYSQL_PASSWORD: testpassword
    profiles:
      - prod
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
$ docker-compose --profile=prod up

In the above case, mysqldb and web services will start running. The mysqldb service will start due to the prod profile in the configuration. In addition, the web service will also run as services without profiles are always enabled.

Health check

We can put the health check command directly into the Compose file instead of the Dockerfile, as in some cases we won't have access to the Dockerfile's health check.

  • wget
healthcheck:
  test: wget --no-verbose --tries=1 --spider http://localhost:80 || exit 1
  interval: 60s
  retries: 5
  start_period: 20s
  timeout: 10s
  • curl
healthcheck:
  test: curl --fail http://localhost:80 || exit 1
  interval: 60s
  retries: 5
  start_period: 20s
  timeout: 10s

Replicas

We can manage the number of container replicas for a service by using the replicas key. These replicas will run in isolation, and you will not need a load balancer service to redirect the service requests to each instance. Consider the following example:

services:
  app:
…
    deploy:
      replicas: 2
…

In such a situation, two containers of our app service will be launched. Docker Swarm and Kubernetes will provide TCP load balancer capabilities. If you need to apply your own balancer rules (such as specific weights), you need to add your own load balancer service.

Managing multiple environments with Docker Compose

We can define a .env file with all the variables we are going to use in a Compose YAML file defined as a template. Docker Compose will search for this file in our project’s root folder by default, but we can use --env-file <FULL_FILE_PATH> or the env_file key in our Compose YAML file. In this case, the key must be set for each service using the environment file:

env_file:
- ./debug.env

The environment file will overwrite the values defined in our images. Multiple environment files can be included; thus, the order is critical. The lower ones in your list will overwrite the previous values, but this also happens when we use more than one Compose YAML file. The order of the arguments passed will modify the final behavior of the execution.

Example

services:
  lb:
    build:
      context: ./simplestlb
      dockerfile: Dockerfile.${environment}
      args:
        alpineversion: "1.14"
      labels:
        org.codegazers.description: "Test image"
    image: ${dockerhubid}/simplest-lab:simplestlb
    environment:
      - APPLICATION_ALIAS=simplestapp
      - APPLICATION_PORT=${backend_port}
    networks:
      simplestlab:
        aliases:
          - simplestlb
    ports:
      - "${loadbalancer_port}:80"

.env file:

environment=dev
dockerhubid=frjaraur
loadbalancer_port=8080
backend_port=3000

Show configuration

docker-compose --project-name test --file myapp-docker-compose.yaml config