Deployment - mlalic/TumCampusAppBackend GitHub Wiki

Dependencies

Python Dependencies

The Python-level dependencies are all captured in the requirements.txt and requirements-dev.txt files. The former gives a list of packages required in order to run the project in a production setting, whereas the latter represents an additional list of packages which are useful in a development environment.

The Python dependencies can be installed using pip.

pip install -r tca/requirements.txt

Additionally, it may be useful to install the requirements into a dedicated virtualenv.

These are all regular Python project packaging practices.
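
For example, a fresh virtualenv could be set up as follows (the environment name tca-env is arbitrary)::

```shell
$ virtualenv tca-env
$ source tca-env/bin/activate
(tca-env)$ pip install -r tca/requirements.txt
```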

However, the dependencies could also be installed at the system level (e.g. through corresponding Debian packages).

System-level Dependencies

In order for the project to successfully run with a full feature set, a number of system-level dependencies need to be installed and set up. These are:

  • Relational Database Management System
  • Mail Transfer Agent
  • Web Server/HTTP Reverse Proxy
  • Message Queue

Relational Database Management System

Thanks to Django's ORM, any of the popular open-source database engines can be used with the project, including both MySQL and PostgreSQL. SQLite3 is also supported, but its use should be limited to the development environment.
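
For illustration, the database section of the local settings might look something like this for PostgreSQL (the password here is a placeholder, and the exact excerpt is a sketch, not the project's verbatim settings):

```python
# Hypothetical excerpt from local_settings.py -- adjust to your setup.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # PostgreSQL backend
        'NAME': 'tca',          # the database created below
        'USER': 'tca',          # the database user created below
        'PASSWORD': 'change-me',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```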

If using MySQL, the mysql-python Python package should be installed into the virtualenv. In order to compile it, the libmysqlclient-dev system package needs to be present. In other words, use::

sudo apt-get install libmysqlclient-dev
pip install mysql-python

If using PostgreSQL, the corresponding dependencies are psycopg2 (the Python connector) and libpq-dev (system package to compile new Postgres bindings)::

sudo apt-get install libpq-dev
pip install psycopg2

Once your RDBMS of choice is installed you will need to:

  1. Create a user tca
  2. Create a database tca
  3. Grant the user tca all privileges on the database tca

With PostgreSQL you could do::

$ sudo su postgres
$ createuser -P tca
# Enter the password for the TCA Backend Application user
$ createdb tca
$ psql
postgres=# GRANT ALL ON DATABASE tca TO tca;
postgres=# \q
$ logout

With MySQL you could do::

$ mysql --user=root mysql -p
mysql> CREATE USER 'tca'@'localhost' IDENTIFIED BY 'USER PASSWORD HERE';
mysql> CREATE DATABASE tca;
mysql> GRANT ALL PRIVILEGES ON tca.* TO 'tca'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit

Mail Transfer Agent

Since the project needs to send out emails (in order to get confirmation of a newly uploaded Public Key), a local MTA needs to be installed, set up, and configured in a way that allows it to send out emails to public domains.

If you use a regular Postfix setup, the default project settings will probably work with no tweaks required.
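
As a sketch, the relevant settings for a local Postfix instance might look like this (the sender address is a placeholder, not a value from the project):

```python
# Hypothetical excerpt from local_settings.py for a local Postfix MTA.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'   # Postfix listens on the local SMTP port
EMAIL_PORT = 25
DEFAULT_FROM_EMAIL = 'tca@example.com'  # placeholder sender address
```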

Web Server

The project is a Web app, so a Web Server is naturally required to accept incoming HTTP connections and send back the responses. The Web Server needs to be able to delegate requests to the Python application; any Web Server capable of communicating with a WSGI application server is suitable for this purpose. This means that both nginx and the Apache Web Server (with mod_wsgi or mod_proxy) can be used.

If using Apache with mod_wsgi, there is no need for a separate WSGI container, such as gunicorn, since mod_wsgi also takes care of running the WSGI application itself. In all other cases, the application needs to be run in a container. The latter approach (a dedicated WSGI container behind a reverse proxy) is recommended.

Message Queue

The project uses celery in order to defer the execution of some tasks until after the response to the HTTP request has been generated. Celery requires access to a message queue which keeps track of queued tasks; its workers consume those tasks off the queue.

Celery can work with a number of different message brokers, such as RabbitMQ, Redis, or even a plain database (the latter of which is strongly discouraged).

Using RabbitMQ is recommended, as upon installing it, there is no extra configuration necessary to the TCA Backend project -- all defaults will work straight out of the box.
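
If the settings ever do need to point at the broker explicitly, celery's standard broker URL format can be used; the value below is celery's own default for a local RabbitMQ instance:

```python
# Celery's default broker URL: the RabbitMQ guest user on localhost.
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
```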

Configuration

Project Settings

Once the dependencies are installed, there are a number of configuration parameters which should be set before the TCA Backend Application can be successfully started.

The project contains a template for production settings. First, copy the template to a file named local_settings.py in the same directory. Then, fill in the required values in the template, by referring to each value's description found above it.

For the database settings, make sure to use the same password you set when creating the tca user.

Initial Database Migration

In order to perform an initial database migration (which creates the database schema based on the models), run the following command from the directory containing the manage.py script.

$ ./manage.py syncdb --noinput

Static Files Setup

In order to serve static files (such as stylesheets, JavaScript files, etc.), those files need to be put into a single well-defined directory. A management command then collects all existing static files into that directory. Perform the following steps from the directory containing the manage.py script::

$ mkdir /path/to/static/files/directory
# Make sure that the same path is set in the ``local_settings.py`` file under the STATIC_ROOT setting!
$ ./manage.py collectstatic --noinput
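
The STATIC_ROOT setting mentioned above would then be set along these lines (a sketch; the path is the same placeholder used in the commands):

```python
# Hypothetical excerpt from local_settings.py.
STATIC_ROOT = '/path/to/static/files/directory'  # where collectstatic copies files to
STATIC_URL = '/static/'                          # URL prefix the web server maps to it
```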

Running

WSGI Container

If you are using a reverse proxy setup (regardless which reverse proxy server you're using), you should start up a gunicorn instance. You can do that by issuing a command such as::

$ gunicorn tca.wsgi:application -b 127.0.0.1:8000 -w 4 --log-level=debug --log-file=`pwd`/log --daemon

Or, in order to bind the application to a Unix-domain socket, not a loopback TCP socket::

$ gunicorn tca.wsgi:application -b unix:/tmp/tca.socket -w 4 --log-level=debug --log-file=`pwd`/log --daemon

Naturally, the service can be managed by init scripts, systemd, supervisord, or any other daemon manager.
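
As one possible sketch, a supervisord program entry for the gunicorn instance could look like this (the file location, program name, and paths are all assumptions, not values from the project)::

```ini
; Hypothetical /etc/supervisor/conf.d/tca.conf
[program:tca-gunicorn]
command=/path/to/virtualenv/bin/gunicorn tca.wsgi:application -b unix:/tmp/tca.socket -w 4
directory=/path/to/project/tca
autostart=true
autorestart=true
```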

Celery Workers

In order for Celery to be able to process tasks queued by the TCA Backend Application, at least one Celery worker process needs to be spawned and kept running, for example under the same daemon manager used for the gunicorn server.
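
The exact invocation depends on how Celery is integrated with the project; assuming a django-celery style setup (an assumption -- check the project's requirements files), something like the following could be issued from the directory containing the manage.py script::

```shell
$ ./manage.py celery worker --loglevel=info
```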

Reverse Proxy

Nginx

When using nginx, you can use a config such as::

upstream tca {
    # Assuming the gunicorn server is bound to the Unix-domain socket
    server unix:/tmp/tca.socket;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;

    server_name server-fully-qualified-domain-name;

    location /static {
        alias   /path/to/static/files/directory;
    }

    location / {
            proxy_pass http://tca/;
            proxy_set_header Host $http_host;
            proxy_redirect off;
    }
}

Apache

If using the Apache Web Server, you can use a config such as::

<VirtualHost *:80>
    ServerName server-fully-qualified-domain-name

    ServerAdmin webmaster@localhost

    ProxyPass /static !
    Alias /static/ /path/to/static/files/directory/
    ProxyPass / http://localhost:8000/
    ProxyPassReverse / http://localhost:8000/
    ProxyPreserveHost On

    <Directory /path/to/static/files/directory>
            Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

</VirtualHost>


If you use Apache as a reverse proxy, you need to bind the gunicorn server to a loopback TCP socket instead, since it does not support proxying to Unix-domain sockets.

Automatic Server Deployment

The project includes a convenience script which can automatically provision a clean Ubuntu/Debian machine with all the project dependencies and then deploy the project. The script is based on the Fabric library for Python.

In order to make use of the script, first install the development requirements (requirements-dev.txt) on your local machine. Then, check-out the project repository and edit the fabfile.py env.conf dictionary to include configuration values pertaining to the server on which the TCA Backend App should be deployed. In particular, this means setting the correct domain name and GCM API key.

Finally, issue the following commands from the root directory of the repo -- still on your local host. (Make sure that any possibly used virtualenvs are active.)

$ cd scripts/
$ fab provision -H [email protected]
$ fab deploy -H [email protected]

The deployment is based on the nginx/gunicorn/postgres combination. It also automatically sets up a supervisord configuration for both the celery worker and the gunicorn server, meaning that they can be conveniently managed (started/restarted/stopped) through supervisorctl. They are also automatically restarted if the system itself reboots.
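
With that in place, managing the services on the server could look like the following (the program names are whatever the fabfile's supervisord configuration defines -- the ones below are assumptions)::

```shell
$ sudo supervisorctl status
$ sudo supervisorctl restart gunicorn
$ sudo supervisorctl restart celery
```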
