Deploying a service node - SeaDataCloud/Documentation GitHub Wiki

All services should run at each site. They can be deployed either on one large server or on several smaller servers. In the latter case, the servers need fast access to a shared file system resource, e.g. an NFS server or a parallel filesystem.

Prerequisites

One large server:

  • Hardware specs: A lot :) [TODO specify]
  • Running Linux (note: CentOS 7 might be a good choice, because we currently need to install unison 2.40, and on Debian the version installed by apt-get is 2.48)

Several smaller servers:

In both cases:

  • Install: docker, docker-compose

  • Ports that need to be accessible to the world (later: towards the proxy):

    • Port 8000 for incoming TCP connections (https), for DIVA
    • Port 8002, same, for ERDDAP
    • Port 8003, same, for visualisation
    • Port 8191, same, for ODV
    • ... [TODO Add ports for other services]

  • SSL certificates, not self-signed
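As a quick sanity check, the prerequisites above can be verified with a small shell sketch (the port list mirrors the bullets above; extend it as more services are added):

```shell
# Sketch: check prerequisites on a service node. Port numbers come from the
# list above; extend PORTS as further services are added.
PORTS="8000 8002 8003 8191"

# docker and docker-compose must be installed
for tool in docker docker-compose; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING - please install it"
    fi
done

# Remind yourself which ports must be opened in the firewall
for port in $PORTS; do
    echo "port $port (TCP) must be reachable from outside (later: from the proxy)"
done
```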

Need for disk space

  • Space for user directories. B2DROP allows 2 GB per user, so this is a first number for orientation. During the test phase, we can go with less.
  • ODV Static Data: 3.8 GB and 2.9 GB
  • Probably static data from other services, but this is not clear yet.
  • Space for docker images, running containers, ...
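To get a feeling for the total, here is a rough budget sketch; only the 2 GB per-user quota and the ODV static data sizes come from the list above, the user count and docker overhead are placeholders to adjust for your site:

```shell
# Rough disk budget sketch - USERS and DOCKER_OVERHEAD_GB are placeholders
USERS=50
USER_QUOTA_GB=2        # B2DROP-style quota per user (see above)
ODV_STATIC_GB=7        # ~3.8 GB + ~2.9 GB ODV static data, rounded up
DOCKER_OVERHEAD_GB=20  # images, containers, logs (rough guess)

TOTAL_GB=$((USERS * USER_QUOTA_GB + ODV_STATIC_GB + DOCKER_OVERHEAD_GB))
echo "plan for at least ${TOTAL_GB} GB of disk space"
```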

Steps (to be done on every service node)

General

These steps are needed for any service node, whether you run just one service or all of them. (Not all services use synchronization yet, but eventually they might all need it, so it may be worth installing and testing it now.)

  • Notify the operator of the master node (running the NextCloud) that this host should be in the trusted_domains, so that the services can log in using WebDAV [TODO TEST: Is the port also needed, or is the domain enough?] [FUTURE TODO: Check if this is still necessary once the login no longer uses NextCloud but the dashboard endpoint].
  • Log in to the SeaDataCloud docker registry (see the instructions for the Master Node on how to do that; for the credentials, ask Themis/Merret/Sebastian)
docker login registry-sdc.argo.grnet.gr
  • Create the docker network that all services will share: [TODO: Verify whether they all need to run in the same network, or is that requirement outdated?]
docker network create vre
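To confirm the network was created, you can run a small check (a sketch with fallbacks, so it is safe to run on any machine):

```shell
# Confirm the shared "vre" network exists. Falls back to a message if
# docker is missing or the daemon is not reachable.
if command -v docker >/dev/null 2>&1; then
    if docker network ls --format '{{.Name}}' 2>/dev/null | grep -qx vre; then
        STATUS="vre network exists"
    else
        STATUS="vre network missing - run: docker network create vre"
    fi
else
    STATUS="docker not found on this machine"
fi
echo "$STATUS"
```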

Synchronization

See: https://github.com/SeaDataCloud/Documentation/wiki/syncer/

Steps (per service)

There may be separate machines for ODV, DIVA, etc.; in that case, only follow the steps for the service(s) running on that machine.

DIVA service (GHER-ULiège)

See https://github.com/SeaDataCloud/Documentation/wiki/Service:-DIVA

Fileselector service

See https://github.com/SeaDataCloud/Documentation/wiki/Service:-Fileselector

ODV New 2019-12-12

Started by Sebastian and Merret, 12.12.2019

  • Make directory for ODV and a subdirectory for mysql
  • Download the docker-compose.yml, the nginx config, and the data
  • Make some changes to docker-compose.yml
mkdir ODV
cd ODV

curl -O https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/services/odv/docker-compose.yml
curl -O https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/services/odv/nginx_proxy.conf
curl -O https://webodv.awi.de/download/ODV.tgz
curl -O https://webodv.awi.de/download/odv_5.1.7beta_linux-amd64_ubuntu-18.04.tgz
curl -O https://webodv.awi.de/download/settings_webodv.tgz

tar xfz ODV.tgz
tar xfz odv_5.1.7beta_linux-amd64_ubuntu-18.04.tgz
tar xfz settings_webodv.tgz

# Make empty dir for mysql data:
mkdir mysql_data_persistent

Changes in docker-compose.yml:

  • Change the path /path/to/your/sync/target to the location where your synchronizer writes the Nextcloud data (the sync target defined during synchronizer setup; we suggested /srv/seadata/vre/sync, so that's likely your candidate).
  • Change the setting FORCE_ROOT_URL to your machine's FQDN
  • Change the MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD (ask Sebastian or Merret for values)
  • Set the same values in the database's healthcheck line (test: "/usr/bin/mysql --user=XXXXX --password=XXXXX --silent --execute \"SELECT 1;\"", use the same values as MYSQL_USER and MYSQL_PASSWORD)
  • Change the DASHBOARD_URL to your dashboard's FQDN (and port)
  • Change the SSL cert paths so that they point to your SSL certs (and make sure that your private key has permissions 400)
  • Change the nginx healthcheck path so that it points to your nginx healthcheck file.
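The changes above can be sketched as follows. All values are placeholders, not real credentials; the service names, the container-side mount path and the exact key layout may differ in the real file, so take the actual structure from the downloaded docker-compose.yml:

```yaml
# Sketch only - placeholders, adapt to the downloaded compose file
services:
  webodv:
    environment:
      FORCE_ROOT_URL: "your-host.example.org"      # your machine's FQDN
      DASHBOARD_URL: "your-dashboard.example.org"  # dashboard FQDN (and port)
    volumes:
      # change only the left-hand (host) side to your sync target;
      # keep the container-side path from the downloaded file
      - /srv/seadata/vre/sync:/path/inside/container
  db_odv:
    environment:
      MYSQL_ROOT_PASSWORD: "CHANGE_ME"
      MYSQL_DATABASE: "CHANGE_ME"
      MYSQL_USER: "CHANGE_ME"
      MYSQL_PASSWORD: "CHANGE_ME"
    healthcheck:
      test: '/usr/bin/mysql --user=CHANGE_ME --password=CHANGE_ME --silent --execute "SELECT 1;"'
```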

Note: On sdc-test.argo.grnet.gr, we have called the db_odv container db, for testing purposes, and because on that machine there is no other container of that name in the vre docker network. On orca, and on all machines that run the dashboard and ODV together, it must be called odv_dc to avoid name conflicts.

ODV service (AWI)

Static data

  • Choose a directory that will contain the static data needed by ODV (say, /path/to/static_data/data_for_ODV/). This may be inside the ODV directory or anywhere on the filesystem.

  • Download the static data into it. There are two batches:

    • Extractor data (will be bind-mounted to /var/www/html/ODV_Data, read-permission is enough): Ask Sebastian for the data.
    • Basemaps data / bathymetry (will be bind-mounted to /root/ODV with read+write permissions): Ask Sebastian for the data. (1.6 GB file)
  • Unzip the ODV.tgz (it is then 2.9 GB) and chown the files to root:root

  • tbd: Extractor data! [TODO] Unzip (then it is 3.8 GB), and probably chown...

mkdir -p /path/to/static_data/data_for_ODV/

# basemaps:
mkdir /path/to/static_data/data_for_ODV/basemaps
cd /path/to/static_data/data_for_ODV/basemaps
wget <tbd>
tar xvzf ODV.tgz
chown -R root:root ./ODV
cd ..

# extractor data
mkdir /path/to/static_data/data_for_ODV/extractor
cd /path/to/static_data/data_for_ODV/extractor
# wget <tbd>
# Probably the same: Untar and chown...
# ...
# ...

Actual ODV service

Changes to the docker-compose.yml

  • Change MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD (ask Sebastian or Merret for creds).

  • Add the same values in the line test: "/usr/bin/mysql --user=XXXXX --password=XXXXX --silent --execute \"SELECT 1;\"".

  • Mount the mysql init script: It has to be (unzipped and) available inside the container as /docker-entrypoint-initdb.d. If the unzipped script is named docker-entrypoint-initdb.d and sits in the subdirectory mysql_resources of the service's home directory, there is no need to change anything. Otherwise, adapt the line - ./mysql_resources/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d so that the left path points to the unzipped script.

  • Change FORCE_ROOT_URL to your server's FQDN (without https or http, as that will be added by the FORCE_SCHEME setting).

  • Bind-Mount the static data into the container:

    • Replace /path/to/static_data/data_for_ODV/basemaps/ODV by your path
    • Replace /path/to/static_data/data_for_ODV/extractor/ODV_Data by your path
  • Bind-Mount the SSL certs into the container... TODO

  • Bind-mount the local sync target into the container

Did it work?

Try <myhostname>:8191 (from any machine) with curl or a browser and check whether HTML comes back.
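For example (the hostname is a placeholder; -k skips certificate verification, which is acceptable for a first smoke test):

```shell
# Smoke test: does the ODV endpoint return HTML at all?
HOST="myhostname.example.org"   # placeholder - your service node's FQDN
if curl -sk --max-time 10 "https://${HOST}:8191" | grep -qi "<html\|<div"; then
    RESULT="got HTML back - looks good"
else
    RESULT="no HTML from ${HOST}:8191 - continue debugging"
fi
echo "$RESULT"
```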

If not, here are a few things you can try to narrow down where the problem lies.

Health

Check the health of the containers: docker ps -a. Are they all healthy? (vreodv_webodv_1, vreodv_db_odv_1, vreodv_nginx_1?)
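A sketch for listing them (the name filter is an assumption; adjust it if your containers are named differently):

```shell
# Show the ODV-related containers with their status/health. The fallbacks
# make this safe to run even where docker is missing.
if command -v docker >/dev/null 2>&1; then
    docker ps -a --filter "name=odv" --format "table {{.Names}}\t{{.Status}}" \
        || echo "could not talk to the docker daemon"
else
    echo "docker not found on this machine"
fi
```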

ODV or proxy problem?

Check if ODV is serving its content:

# on the host:
docker exec -it vreodv_webodv_1 /bin/bash
# You are now inside the container's shell; run:
curl localhost:80
# This should return some HTML, e.g.:
# <div class="col-sm-4 jumbotron"><h1>Virtual Research Environment</h1></div> ...

If HTML comes through, then the problem is probably in the proxy.

Check for SSL problems

  • In docker-compose.yml: change 8191:443 to 8191:80
  • In nginx_proxy.conf: Change listen 443 ssl; to listen 80;
  • Restart the containers (docker-compose stop && docker-compose start should be enough; otherwise try docker-compose down && docker-compose up). Can you now get HTML from http://<myhostname>:8191?

If you can see HTML content now, something is wrong with SSL. Check the permissions of your public and private key. Check inside the container (docker exec -it vreodv_nginx_1 /bin/bash) whether they are at the expected locations (/etc/ssl/certs/myhost.crt and /etc/ssl/private/myhost.key).
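A quick host-side check of the key permissions (the path is a placeholder; run it against wherever your private key actually lives before it is bind-mounted):

```shell
# The private key should be readable only by its owner (mode 400) - a common
# cause of SSL trouble. KEY is a placeholder path.
KEY="/etc/ssl/private/myhost.key"
if [ -f "$KEY" ]; then
    PERMS=$(stat -c '%a' "$KEY")
    if [ "$PERMS" = "400" ]; then
        MSG="key permissions ok"
    else
        MSG="fix with: chmod 400 $KEY"
    fi
else
    MSG="no key at $KEY - adjust the path"
fi
echo "$MSG"
```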

Check if nginx serves content internally

Let's check if you can get the content on localhost, inside the nginx container:

docker exec -it vreodv_nginx_1 /bin/bash
# You are now inside the container's shell; run:
apt-get update && apt-get install -y curl
curl localhost:80 # if you switched to plain http
# Does this return HTML content?
exit # to exit container's shell

If you can see the HTML content there, but not outside the container, it might be a networking / DNS / iptables problem. These are often not related to the VRE services but to your environment, and they are very hard to debug from the outside, so please try to find someone knowledgeable at your organisation. If you solve your problem, feel free to document the solution in this wiki, so other deployers may profit!
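One quick host-side check that helps narrow this down (it distinguishes "container not running / port not published" from firewall or DNS trouble):

```shell
# Is anything listening on the ODV port at all?
if ss -tln 2>/dev/null | grep -q ':8191'; then
    MSG="something is listening on 8191 - suspect firewall/DNS"
else
    MSG="nothing is listening on 8191 - is the container running?"
fi
echo "$MSG"
```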

How to check if it was ok

Without logging in/without the NextCloud connected, these services have no access to the user directory (to store data in), so they are not really functional. [TODO: True?]

[TODO: Do I need to be logged in for that?]

ERDDAP subsetting service (IFREMER)

See: https://github.com/SeaDataCloud/Documentation/wiki/Service:-ERDDAP-Subsetting-Service

BioQC service (VLIZ)

See: https://github.com/SeaDataCloud/Documentation/wiki/Service:-VLIZ-BioQC-Service

Visualisation service (Deltares)

See: https://github.com/SeaDataCloud/Documentation/wiki/Service:-Deltares-Visualisation-Service
