How it works: Part 2

Current setup

For any setup, there will be a server that runs DeployBot; it needs to be connected to the internet in order to talk to Slack and GitHub. In our case, this is a baadal VM named mainvm. It is on this machine that the cloning, building, and pushing of images to the local registry take place. We will refer to this as the master server.

Apart from this server, there will be servers on which the deployed services actually run. Here at DevClub we currently have two such servers (again baadal VMs), named vm1-internal and vm2-internal. These are the servers you actually see in the server options whenever you deploy from Slack. They will be referred to as child servers.

Once the built images are pushed into the internal docker registry (think of this as a private Docker Hub), the master server runs docker-compose up on the child server, which pulls all required images from the internal registry and finally brings the project up and running. To run docker commands remotely on another machine, we use docker-machine. This allows the master to send docker commands to the docker daemon running on the child server. Thus, as part of setting up DeployBot, you’ll have to run a docker-machine create on the master for each child, which creates the necessary certificates authorizing the master to run docker commands on that child server.
Note that since all images are pulled from the local registry, child servers do not need internet access.
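
A minimal sketch of that one-time registration, assuming the generic driver and that docker is already installed on the child (the IP address, SSH user, and key path below are placeholders, not the exact values used in our setup):

```sh
# Register a child server with docker-machine (run once, on the master).
docker-machine create \
  --driver generic \
  --generic-ip-address 192.168.5.11 \
  --generic-ssh-user ubuntu \
  --generic-ssh-key ~/.ssh/id_rsa \
  vm1-internal

# Afterwards, the docker CLI on the master can be pointed at the child's daemon:
eval "$(docker-machine env vm1-internal)"
docker ps    # lists containers running on vm1-internal, not on the master
```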


We will now go into each step of the deployment in slightly more detail:

  • Cloning & decrypting env:
    All docker-compose commands are run inside a separate container, to ensure a project does not have access to any of the host volumes (refer to scripts/custom-docker-compose). It is in this container that the project is cloned, after which the .env file is decrypted. To ensure that we are among the permitted users for git-secret, DeployBot also runs a listener for repository creation on devclub-iitd, which initializes git-secret and adds ‘devclubiitd’ to the list of permitted users every time a repo is created (refer to scripts/hooks.sh). A sketch of this step is shown after this list.
  • Building images and pushing them to registries:
    To understand this process better, let us first understand what the different arguments in a docker-compose.yml mean. We will take the docker-compose of SenData (https://github.com/devclub-iitd/SenData) for reference. For each service, we have an optional build argument and an image argument. The build argument takes the path of a Dockerfile and builds the image, tagging it with the value of the image argument (read up on docker tag formats here).
    For services that can be used directly (from pre-built images on Docker Hub, like turnserver in our example), no build argument is needed. However, for the main service that will be running your code (like the node app in our case), you will have to write a Dockerfile which, given the cloned repo, will run your service.
    The deploy.sh script first runs a docker-compose pull, after which it runs docker-compose build. The former pulls images for services like turnserver which have no build argument, whereas the latter builds images for services like filesend which have a Dockerfile provided. Both of these commands are needed to make sure that all images needed by a project exist. The images are then retagged as {LOCAL_REGISTRY_URL}/{IMAGE_NAME}, after which a docker-compose push is run to push these images to the internal registry. The newly built images (those not directly pulled from Docker Hub) are also pushed to the devclubiitd repository on Docker Hub. A condensed sketch of this step is shown after this list.
  • Deploying images on the child servers:
    Now that all required images are on the internal registry, a simple docker-compose up on the docker-machine corresponding to the child server is enough to get the services up and running. For this to work, however, the image argument of each service must point to the internal registry. This is why the image name of each service in the docker-compose.yml is prefixed with ${REGISTRY_NAME}, which allows us to specify the URL of the internal registry at run time by setting the REGISTRY_NAME environment variable. A sketch of this step is shown after this list.
  • Making the appropriate Nginx entries:
    We need to make nginx entries in two places: on the master server, where all requests to *.devclub.in / *.devclub.iitd.ac.in first arrive, and on the child servers, where the requests need to be mapped to the port on which the deployed service is listening.
    The latter is handled by a tool called nginx-proxy, which runs as a service of its own on each of the child servers and listens to all docker start/stop events on that machine. Each container must be started with an environment variable VIRTUAL_HOST, the sub-domain the service needs to be mapped to. Nginx-proxy makes the appropriate entry in the child server’s nginx according to this variable, as well as the ports your docker-compose has exposed to the host.
    The former, which is simply a reverse proxy entry on the master server, is currently done by populating the fields of a template (refer to scripts/nginx_template) with the subdomain and the child server name. Both pieces are sketched after this list.
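
To make the steps above more concrete, here are a few illustrative sketches. First, the clone-and-decrypt step: a minimal outline of what happens inside the docker-compose container, assuming the private GPG key for the ‘devclubiitd’ identity is already available (the repo URL and key path are placeholders).

```sh
# Minimal sketch of the clone + decrypt step (repo URL and key path are assumptions).
git clone https://github.com/devclub-iitd/SenData.git
cd SenData

# git-secret can only decrypt if the private key of a permitted user is imported.
gpg --import /path/to/devclubiitd-private-key.asc   # hypothetical key location
git secret reveal -f   # writes the decrypted .env next to its encrypted counterpart
```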
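Next, a condensed sketch of the build-and-push step performed by deploy.sh on the master. The registry URL and image names are assumptions, and the real script does more (tag manipulation, error handling):

```sh
# Condensed sketch of deploy.sh's build-and-push step (registry URL is an assumption).
export REGISTRY_NAME=registry.internal:5000/

docker-compose pull    # fetch images for services with no build argument (e.g. turnserver)
docker-compose build   # build images for services that provide a Dockerfile (e.g. filesend)
docker-compose push    # push all images to the internal registry, since image names in
                       # docker-compose.yml are prefixed with ${REGISTRY_NAME}

# Newly built images are additionally pushed to Docker Hub (names are illustrative).
docker tag "${REGISTRY_NAME}filesend" devclubiitd/filesend
docker push devclubiitd/filesend
```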
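The deployment step then amounts to pointing the docker CLI at the child via docker-machine and running compose there; the machine and registry names below are the same assumptions as above:

```sh
# Sketch of bringing a project up on a child server.
eval "$(docker-machine env vm1-internal)"      # target the child's docker daemon
export REGISTRY_NAME=registry.internal:5000/   # so image: ${REGISTRY_NAME}<name> resolves
docker-compose pull                            # the child pulls only from the internal registry
docker-compose up -d
```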
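Finally, two illustrative fragments for the routing step: starting a container with the VIRTUAL_HOST variable that nginx-proxy picks up on the child, and filling a reverse-proxy template on the master. The template field names, file paths, and hostnames here are assumptions; see scripts/nginx_template for the real template:

```sh
# 1. On a child server: nginx-proxy maps this container to the given sub-domain.
docker run -d -e VIRTUAL_HOST=myapp.devclub.in devclubiitd/myapp   # names are illustrative

# 2. On the master: fill the reverse-proxy template with the subdomain and child name.
sed -e "s/{SUBDOMAIN}/myapp.devclub.in/g" -e "s/{CHILD}/vm1-internal/g" \
    scripts/nginx_template > /etc/nginx/conf.d/myapp.devclub.in.conf
nginx -s reload
```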