Using S3 for configuration and Docker volumes
Amazon ECS uses one or more Amazon EC2 instances (servers) to run the Docker containers of our applications.
Many of the Docker containers require configuration information to be provided via Volumes, which we maintain in our repo configs.tooltwist.com.
Docker running on these EC2 instances also needs to know the credentials to access Dockerhub, so our Docker images can be pulled down from our private ToolTwist Dockerhub repo.
So that all the EC2 instances in our cluster have the same config files for volumes, and the same credentials for Dockerhub, we use an S3 bucket to contain the config files, and mount the S3 bucket on each of the instances at the time it boots.
We can also update the S3 bucket from a local machine using a command named s3cmd, and have those changes appear on all the instances in the cluster.
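For context, this is roughly the arrangement each container ends up with: a config directory on the EC2 instance is mounted into the container as a volume. The image name and host path below are illustrative only, since in practice the containers are launched by ECS task definitions rather than docker run.
# Illustrative only: mount a synced config directory into a container
# as a read-only volume. Image name and host path are hypothetical.
docker run -d \
    -v /home/ec2-user/Configs-prod/myproject/prod:/config:ro \
    tooltwist/myproject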
Check that you have the production configs for all the applications that will be run on the cluster, pulled from http://configs.tooltwist.com into ~/Configs/<projectname> and ~/Configs-prod/<projectname>.
Create an S3 bucket named ecs-<clustername>-<mode>, where mode is one of dev, sit, stage, or prod.
The bucket can be created from the AWS console:
https://ap-southeast-1.console.aws.amazon.com/console/home?region=ap-southeast-1#
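If you prefer the command line, the bucket can also be created with the AWS CLI. A minimal sketch, assuming the CLI is installed and configured; the bucket name here is an example following the convention above.
# Create the config bucket in the same region as the console link above
aws s3 mb s3://ecs-mycluster-prod --region ap-southeast-1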
We use a utility named s3cmd to sync a local folder with the remote S3 bucket.
If it is not already installed on your local machine, install s3cmd using the instructions linked in the references at the bottom of this page.
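A minimal install-and-configure sketch, assuming macOS with Homebrew (pip install s3cmd also works on other platforms):
# Install s3cmd, then run its interactive setup, which prompts for
# your AWS access key and secret key
brew install s3cmd
s3cmd --configure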
mkdir -p ~/Configs/Clusters/<clustername>
cd ~/Configs/Clusters/<clustername>
Note: Use ~/Configs-prod for a staging or production cluster.
Create a configuration file for ECS named ecs.config. This file will be used to pass our Docker credentials to the ECS agent, so it is able to download our private Dockerhub images.
ECS_CLUSTER=<clustername>
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"zq212MzEXAMPLE7o6T25Dk0i","email":"[email protected]"}}
You can get the strings above by running docker login to set your local credentials, then running cat ~/.docker/config.json to see a string similar to the one above. More details can be found in the ECS agent configuration documentation linked at the bottom of this page, but don't use the format that puts the username and password in the file.
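A sketch of those two steps (the auth value in your file will differ from the example above):
# Log in to Dockerhub; this saves your credentials locally
docker login

# Inspect the saved credentials. Depending on your Docker version the
# file may be ~/.dockercfg rather than ~/.docker/config.json.
cat ~/.docker/config.json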
Below we'll configure the EC2 instances to install this file from the S3 bucket at the time the server starts. At this point, however, we are just creating the file so we can sync it onto the S3 bucket.
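For reference, the AWS documentation linked at the bottom of this page has the instance fetch ecs.config from S3 in its EC2 user data script at boot. A sketch along those lines, assuming the instance's IAM role allows reading the bucket (the bucket name is an example):
#!/bin/bash
# EC2 user data: copy the ECS agent config down from our S3 bucket at boot
yum install -y aws-cli
aws s3 cp s3://ecs-mycluster-prod/ecs.config /etc/ecs/ecs.config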
Create a repo for this cluster at http://configs.tooltwist.com with the name ecs-clusters/<CLUSTER-NAME> and upload this new directory.
git init
git add .
git commit -m "Initial config"
git remote add origin http://[email protected]/r/ecs-clusters/<CLUSTER-NAME>.git
git push -u origin master
Create a script named sync.sh to do the sync using s3cmd. It should sync the ecs.config file, and also each of the production config directories for the projects that will be run in the cluster (but not other projects).
#!/bin/bash
#
# Synchronise configs to S3 bucket
#
BUCKET=s3://ecs-devel-b-configs
# ECS Config
s3cmd sync --delete-removed ecs.config ${BUCKET}/ecs.config
# Project configs
s3cmd sync --delete-removed ~/Configs-prod/drinkcircle/prod ${BUCKET}/Configs-prod/drinkcircle/prod/
Now run the script:
$ chmod +x sync.sh
$ ./sync.sh
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
'ecs.config' -> 's3://ecs-devel-b-configs/ecs.config' [1 of 1]
161 of 161 100% in 0s 457.46 B/s done
Done. Uploaded 161 bytes in 1.0 seconds, 161.00 B/s. Copied 0 files saving 0 bytes transfer.
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
...
Later, this script can be run every time any of the production configs are changed.
References:
- ECS agent configuration: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html
- Storing container instance configuration in Amazon S3: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html#ecs-config-s3
- Launching an Amazon ECS container instance: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
- Installing s3cmd and syncing files to S3 on Mac OS X: https://www.jacksoncage.se/posts/2013/04/08/Sync-files-to-S3-on-Mac-OS-X/