# How to deploy Codabench on your server (codalab/codabench GitHub Wiki)
This document explains how to deploy Codabench to the local machine or server you are on. It covers the following topics:
- Pull codabench develop branch code locally
- Installing docker and common docker commands
- Modify .env file configuration
- Open access permissions for the following port numbers
- Modify django-related configuration
- Start service
- Set public bucket policy to read/write
- Check the logs of the specified container
- Stop service
- Disabling docker containers on production (disabling local MinIO, local compute worker, etc.)
- Link compute workers to default queue
- Personalize the main banner
- Frequently asked questions (FAQs)
## Pull codabench develop branch code locally

You need to complete the following steps:
- Generate SSH keys on your machine.
- Add the SSH key to your GitHub account.
- Clone the repository locally:
  - `git clone https://github.com/codalab/codabench.git`
## Installing docker and common docker commands

There are two ways to install docker on Ubuntu:
- `sudo snap install docker`, or
- Install the official version of docker: https://docs.docker.com/engine/install/ubuntu/ (recommended; see the FAQ below about issues caused by snap-installed docker)
## Modify .env file configuration

- Go to the folder where codabench is located
- `cp .env_sample .env`: generates a `.env` file to set the environment variables required by the service
- Get the IP address of the local machine. The following commands will get you the private IP address of your interfaces:
  - `ifconfig -a`
  - `ip addr` (or `ip a`)
  - `hostname -I | awk '{print $1}'`
  - `nmcli -p device show`
- Replace the IP address in the following environment variables according to your infrastructure configuration:
  - `SUBMISSIONS_API_URL` can be https://codabench-test.yourdomain.com/api, localhost, or simply http://django:8000/api
  - `DOMAIN_NAME` can be codabench-test.yourdomain.com or simply localhost:80
  - `AWS_S3_ENDPOINT_URL` can be https://minio-test.yourdomain.com or simply http://minio:9000
    - Notice: if you are deploying on an Azure machine, `AWS_S3_ENDPOINT_URL` needs to be set to an IP address that is reachable from the external network
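For illustration, a minimal `.env` fragment for a server reachable at 203.0.113.10 (a documentation placeholder address; substitute your own IP or domain) could look like:

```
SUBMISSIONS_API_URL=http://203.0.113.10:8000/api
DOMAIN_NAME=203.0.113.10:80
AWS_S3_ENDPOINT_URL=http://203.0.113.10:9000
```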
## Open access permissions for the following port numbers

If you are deploying on a Linux server, which usually has a firewall, you need to open access to the following ports:
- `5672`: RabbitMQ port
- `8000`: django port
- `9000`: MinIO port
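If your server's firewall is managed with `ufw` (common on Ubuntu; this is an assumption, adapt the commands to whatever firewall you actually use), opening the ports could look like:

```
sudo ufw allow 5672/tcp   # RabbitMQ
sudo ufw allow 8000/tcp   # django
sudo ufw allow 9000/tcp   # MinIO
sudo ufw status           # verify the rules
```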
## Modify django-related configuration

- Go to the folder where codabench is located
- Go to the settings directory and modify the `base.py` file:

```
cd src/settings/
vi base.py
```

- Change the value of `DEBUG` to `True`:

```
DEBUG = os.environ.get("DEBUG", True)
```

  - Notice: if `DEBUG` is not set to `True`, the static resource files will not load
- Comment out the following code
## Start service

- Execute command:

```
sudo docker compose up -d
```

- Check if the services started properly:

```
sudo docker compose ps
codabench_compute_worker_1   "bash -c 'watchmedo …"    running
codabench_caddy_1            "/bin/parent caddy -…"    running      0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 2015/tcp
codabench_site_worker_1      "bash -c 'watchmedo …"    running
codabench_django_1           "bash -c 'cd /app/sr…"    running      0.0.0.0:8000->8000/tcp, :::8000->8000/tcp
codabench_flower_1           "flower"                  restarting
codabench_rabbit_1           "docker-entrypoint.s…"    running      4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp
codabench_minio_1            "/usr/bin/docker-ent…"    running      0.0.0.0:9000->9000/tcp, :::9000->9000/tcp
codabench_db_1               "docker-entrypoint.s…"    running      0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
codabench_builder_1          "docker-entrypoint.s…"    running
codabench_redis_1            "docker-entrypoint.s…"    running      0.0.0.0:6379->6379/tcp, :::6379->6379/tcp
```
- Create the required tables in the database:

```
sudo docker compose exec django ./manage.py migrate
```

- Optionally, generate mock data:

```
sudo docker compose exec django ./manage.py generate_data
```

- Generate the required static resource files:

```
sudo docker compose exec django ./manage.py collectstatic --noinput
```
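You will likely also want an admin account; `createsuperuser` is a standard Django management command and can be run the same way as the commands above:

```
sudo docker compose exec django ./manage.py createsuperuser
```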
## Set public bucket policy to read/write

Check the bucket policy of the public MinIO bucket: read/write access should be allowed. This can easily be done via the MinIO web console (local URL: minio:9000).
## Check the logs of the specified container

The following commands can help you debug:
- `sudo docker logs -f codabench_django_1`: check the django container logs in the docker-compose service
- `sudo docker logs -f codabench_site_worker_1`: check the site-worker container logs in the docker-compose service
- `sudo docker logs -f codabench_compute_worker_1`: check the compute-worker container logs in the docker-compose service
- `sudo docker logs -f codabench_minio_1`: check the minio container logs in the docker-compose service
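Alternatively, `docker compose logs` follows a service by its compose name, without the full container name, and can tail several services at once:

```
sudo docker compose logs -f --tail=100 django   # one service
sudo docker compose logs -f                     # all services at once
```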
## Stop service

- Execute command:

```
sudo docker compose down --volumes
```

  - Note: the `--volumes` flag also removes the named volumes declared in the compose file, which deletes local data (database, MinIO storage); omit it if you want to keep your data.
## Disabling docker containers on production (disabling local MinIO, local compute worker, etc.)

To override settings on your production server, create a `docker-compose.override.yml` file in the codabench root directory.

If your production server uses remote MinIO or another cloud storage provider, you don't need the minio container. If you already have buckets available in your S3 storage, you don't need the createbuckets container. Therefore, you should disable the minio and createbuckets containers. You may also want to disable the compute worker that runs on the main server, to keep only remote compute workers.

Add this to your `docker-compose.override.yml`:

```yaml
version: '3.4'
services:
  compute_worker:
    command: "/bin/true"
  minio:
    restart: "no"
    command: "/bin/true"
  createbuckets:
    entrypoint: "/bin/true"
    restart: "no"
    depends_on:
      minio:
        condition: service_started
```

This forces each disabled container to exit instantly instead of running on the web host.
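docker compose automatically merges `docker-compose.override.yml` with `docker-compose.yml`. To inspect the merged result and apply it:

```
sudo docker compose config    # print the merged configuration
sudo docker compose up -d     # recreate the affected containers
```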
## Link compute workers to default queue

The default queue of the platform runs all jobs, except when a custom queue is specified by the competition or benchmark. By default, the compute worker of the default queue is a docker container run by the main VM. If your server is used by many users and receives several submissions per day, it is recommended to use separate compute workers and to link them to the default queue.

To set up a compute worker, follow this guide: https://github.com/codalab/codabench/wiki/Compute-Worker-Management---Setup

In the `.env` file of the compute worker, the `BROKER_URL` should reflect the settings of the `.env` file of the platform:

```
BROKER_URL=pyamqp://<RABBITMQ_DEFAULT_USER>:<RABBITMQ_DEFAULT_PASS>@<DOMAIN_NAME>:<RABBITMQ_PORT>/
HOST_DIRECTORY=/codabench
BROKER_USE_SSL=True
```
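For example, with hypothetical credentials `codabench` / `s3cret` (placeholders, not real values) and the domain and RabbitMQ port used earlier in this guide, the compute worker's `.env` could contain:

```
BROKER_URL=pyamqp://codabench:s3cret@codabench-test.yourdomain.com:5672/
HOST_DIRECTORY=/codabench
BROKER_USE_SSL=True
```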
## Personalize the main banner

The main banner on the Codabench home page shows 3 organization logos:
- LISN (https://www.lisn.upsaclay.fr/)
- Université Paris-Saclay (https://www.universite-paris-saclay.fr/)
- CNRS (https://www.cnrs.fr/)

You can update these by:
- Replacing the logos in the `src/static/img/` folder
- Updating the code in `src/templates/pages/home.html` to point to your organizations' websites
## Frequently asked questions (FAQs)

### "Invalid HTTP request received" warnings in the django logs

Exception detail (obtained with `sudo docker logs -f codabench_django_1`):

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 165, in data_received
    self.parser.feed_data(data)
  File "httptools/parser/parser.pyx", line 193, in httptools.parser.parser.HttpParser.feed_data
httptools.parser.errors.HttpParserInvalidMethodError: invalid HTTP method
[2021-02-09 06:58:58 +0000] [14] [WARNING] Invalid HTTP request received.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 165, in data_received
    self.parser.feed_data(data)
  File "httptools/parser/parser.pyx", line 193, in httptools.parser.parser.HttpParser.feed_data
httptools.parser.errors.HttpParserInvalidMethodError: invalid HTTP method
```

Solution:
- First, modify the `.env` file and set `DJANGO_SETTINGS_MODULE=settings.develop`
- Then, restart the services with the following docker compose commands:

```
sudo docker compose down --volumes
sudo docker compose up -d
```
### Static resource files fail to load

Solution: change the value of the `DEBUG` parameter to `True`:

```
vi competitions-v2/src/settings/base.py
DEBUG = os.environ.get("DEBUG", True)
```

- Also comment out the following code in `base.py`
### botocore EndpointConnectionError when uploading files

Exception detail (visible in the browser developer tools):

```
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://docker.for.mac.localhost:9000/private/dataset/2021-02-18-1613624215/24533cfc523e/competition.zip"
```

Solution: set `AWS_S3_ENDPOINT_URL` to an address that is reachable from the external network:

```
vi codabench/.env
```

Make sure the IP address and port number are reachable from the external network. You can check this with:

```
telnet {ip-address from AWS_S3_ENDPOINT_URL} {port from AWS_S3_ENDPOINT_URL}
```

- Make sure the firewall allows access on port 9000

PS: this problem may also be caused by a bug in MinIO; in that case, do the following:
- Upgrade the minio docker image to the latest version
- Delete the previous minio directory in your codabench code, under the /var/minio directory
- Stop the current minio container
- Delete the current minio container and the corresponding image
- Re-execute `docker compose up -d`
### Public bucket policy

Check the bucket policy of the public MinIO bucket: read/write access should be allowed. This can easily be done via the MinIO web console (local URL: minio:9000).
### Docker containers fail to start in a compute worker

This issue may be encountered when starting a docker container in a compute worker; it is caused by installing docker via snap (if you are using Ubuntu).

Solution:
- Uninstall the snap docker
- Install the official version of docker
### Securing the platform with HTTPS

Codabench uses Caddy to manage HTTPS and secure Codabench. What you need is a valid DNS record pointing to the IP address of your instance.

To secure MinIO, you should install a reverse proxy, e.g. Nginx, and have a valid SSL certificate. Here is a tutorial sample: Secure MinIO with Certbot and Letsencrypt.

Don't forget to update your `AWS_S3_ENDPOINT_URL` parameter: set it to `AWS_S3_ENDPOINT_URL=https://<your minio>`.
Summary:
- Use the same SSL certs from Let's Encrypt (certbot), but rename fullchain.pem -> public.crt and privkey.pem -> private.key. I copied them from ./certs/caddy (used by django/caddy) to ./certs/minio/certs.
- You need to change the minio command to "server /export --certs-dir /root/.minio/certs" instead of just "server /export"
- Mount in the certs:
  - Add "- ./certs/minio:/root/.minio" under the minio service's "volumes" section
  - Certs must be in ${HOME}/.minio, which inside the docker container ends up being /root/.minio
- Edit the .env with the minio cert location:

```
MINIO_CERT_FILE=/root/.minio/certs/public.crt
MINIO_KEY_FILE=/root/.minio/certs/private.key
# MINIO_CERTS_DIR=/certs/caddy # was told .pem files could work, but for now keeping them separate
MINIO_CERTS_DIR=/root/.minio/certs # either this or the CERT/KEY variables above is redundant, but it works for now
# NOTE! if you change this port, change it in AWS_S3_ENDPOINT_URL as well
MINIO_PORT=9000
```
- Here is an example docker-compose.yml change for this:

```yaml
  #-----------------------------------------------
  #   Minio local storage helper
  #-----------------------------------------------
  minio:
    image: minio/minio:RELEASE.2020-10-03T02-19-42Z
    command: server /export --certs-dir /root/.minio/certs
    volumes:
      - ./var/minio:/export
      - ./certs/minio:/root/.minio
    restart: unless-stopped
    ports:
      - $MINIO_PORT:9000
    env_file: .env
    healthcheck:
      test: ["CMD", "nc", "-z", "minio", "9000"]
      interval: 5s
      retries: 5

  createbuckets:
    image: minio/mc
    depends_on:
      minio:
        condition: service_healthy
    env_file: .env
    # volumes:
    #   This volume is shared with `minio`, so `z` to share it
    #   - ./var/minio:/export
    entrypoint: >
      /bin/sh -c "
      set -x;
      if [ -n \"$MINIO_ACCESS_KEY\" ] && [ -n \"$MINIO_SECRET_KEY\" ] && [ -n \"$MINIO_PORT\" ]; then
        until /usr/bin/mc config host add minio_docker https://minio:$MINIO_PORT $MINIO_ACCESS_KEY $MINIO_SECRET_KEY && break; do
          echo '...waiting...' && sleep 5;
        done;
        /usr/bin/mc mb minio_docker/$AWS_STORAGE_BUCKET_NAME || echo 'Bucket $AWS_STORAGE_BUCKET_NAME already exists.';
        /usr/bin/mc mb minio_docker/$AWS_STORAGE_PRIVATE_BUCKET_NAME || echo 'Bucket $AWS_STORAGE_PRIVATE_BUCKET_NAME already exists.';
        /usr/bin/mc anonymous set download minio_docker/$AWS_STORAGE_BUCKET_NAME;
      else
        echo 'MINIO_ACCESS_KEY, MINIO_SECRET_KEY, or MINIO_PORT are not defined. Skipping buckets creation.';
      fi;
      exit 0;
      "
```

Note: don't forget to change the entrypoint to use https, not http (the `mc config host add minio_docker https://minio:$MINIO_PORT …` line above already does).
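To check that MinIO is serving HTTPS correctly, you can query its liveness endpoint (`/minio/health/live` is MinIO's standard health-check path; replace the host and port with your own):

```
curl -v https://minio-test.yourdomain.com:9000/minio/health/live
```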