Service: Dashboard and Workspace

Dashboard and Nextcloud

General

Home directory

Dashboard and NextCloud run from a common docker-compose.yml, which must be adapted to your local settings. The modifications are listed in the steps below. A few more scripts and resources must also be installed and configured.

mkdir /opt/dashboard # in Oxford
cd /opt/dashboard    # in Oxford
#mkdir /root/vrehome # in Athens
#cd /root/vrehome    # in Athens

wget https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/dashboard/docker-compose.yml

Health checks

Make one directory somewhere for all healthcheck scripts that may be used by more than one service. Into it, add the nginx healthcheck (and any other healthcheck files we come across during deployment). The scripts may need to be executable, see below.

mkdir /opt/healthchecks/
cd /opt/healthchecks/
wget https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/healthchecks_for_containers/healthcheck_nginx.sh
wget https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/healthchecks_for_containers/healthcheck_jupyterhub.sh
wget https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/healthchecks_for_containers/healthcheck_filebeat.sh
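Depending on how the containers invoke them, the scripts may need to be executable; making them so is harmless either way:

chmod +x /opt/healthchecks/*.sh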

Dashboard

User database (mysql)

The dashboard uses mysql for the user database. mysql is included as a service in the docker-compose file, so you do not need to install it on the host.

You can choose the location where mysql will store its data. You can store it in /opt/dashboard/mysql_data, but in some cases this partition does not have much disk space, so you may choose a different location in the filesystem. The same goes for the backup directory.

  • Create a directory that will be used by mysql to store the data, called mysql_data.
  • TEST: Maybe this needs to be chowned to 27:27, but I am not sure!
  • Create a backup directory called mysql_backups. (This is where the mysql-dumper service, which is also in the docker-compose file, will store its SQL dumps).
  • Make sure this backup directory is properly backed up!!! --> See wiki page by Kostas: https://github.com/SeaDataCloud/Documentation/wiki/Backup
# create dir for data:
mkdir /opt/dashboard/mysql_data   # or wherever you have space
#mkdir /root/vrehome/mysql_data   # in Athens

# + change permissions ???

# create dir for dumps:
mkdir /opt/dashboard/mysql_backups                   # or wherever you have space
#mkdir /root/vrehome/mysql_backups/dashboard_mysql   # in Athens

# + add backup mechanism for the sql dumps directory
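For the backup mechanism itself, one possible sketch is a root cron job that copies the dumps off the host. Here, backuphost and the target path are placeholders (not part of the actual setup), and passwordless SSH to the backup host is assumed:

# hypothetical off-host copy of the SQL dumps, nightly at 02:30
# (replace backuphost and the target path with your real backup destination)
cat > /etc/cron.d/dashboard-mysql-backup <<'EOF'
30 2 * * * root rsync -a /opt/dashboard/mysql_backups/ backuphost:/backups/dashboard_mysql/
EOF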

The docker-compose file and some other resources:

  • Create a directory in dashboard called mysql_resources.
  • Download and unpack the mysql init script (by AWI) for the dashboard user database: download on request, ask Sebastian or Merret (note to self: .../vredash/mysql_resources/docker-entrypoint-initdb.d_dashboard.tgz). An unpacking sketch follows after the commands below.
mkdir /opt/dashboard/mysql_resources # at STFC
cd  /opt/dashboard/mysql_resources   # at STFC
#mkdir /root/vrehome/mysql_resources     # at GRNET
#cd  /root/vrehome/mysql_resources       # at GRNET

wget <docker-entrypoint-initdb.d>

# permissions?
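Assuming the downloaded archive is the docker-entrypoint-initdb.d_dashboard.tgz mentioned above, unpacking could look like this:

cd /opt/dashboard/mysql_resources
tar xzf docker-entrypoint-initdb.d_dashboard.tgz
ls -l   # expect the init scripts (docker-entrypoint-initdb.d contents) to appear here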

Changes in the "mysql" and "dumper" sections of docker-compose.yml:

  • Bind-mount the mysql_data directory to /var/lib/mysql/ (in the mysql db service) (as read-write)
  • Bind-mount the unpacked init scripts from mysql_resources to /docker-entrypoint-initdb.d/ (in the mysql db service)
  • Bind-mount your backup directory mysql_backups to /srv/mysql/backup (in dumper service) (as read-write)
  • Change mysql variables (MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD) in both services (ask Sebastian or Merret for values)
  • You could adapt the backup frequency (by adding a cronjob schedule under CRON_SCHEDULE, and then adapting the maximum number of seconds that may pass between two backups via MAX_SECONDS_SINCE_LAST), but you can leave it at the default value, which is hourly backups. They are kept for a week, so they can occupy quite a bit of space.
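After editing these sections, docker-compose config renders the file with variables resolved, which is a quick way to double-check the mounts and credentials (a sketch; run it in the directory containing docker-compose.yml):

cd /opt/dashboard   # or /root/vrehome in Athens
docker-compose config | grep -B 2 -A 6 'volumes:'                  # check the bind-mounts
docker-compose config | grep -E 'MYSQL_(DATABASE|USER|PASSWORD)'   # check the variables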

Dashboard itself

Changes in the "dashboard" section of docker-compose.yml:

  • Bind-mount the ssh keys into the dashboard container in docker-compose.yml
  • (Any permission change needed for the SSL certs???)
  • Adapt the env variable UNITY_REDIRECT=manatee.org/vre_login/callback (this server's FQDN + /vre_login/callback)
  • Adapt the env variable FORCE_ROOT_URL=manatee.org (this server's FQDN)
  • Adapt the env variable syncer=http://syncer:5000/sync/bidirectional/manatee/service_host/ [TODO]
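A quick sanity check that all three variables were adapted (a sketch; none of the matches should still contain manatee.org):

cd /opt/dashboard
grep -nE 'UNITY_REDIRECT|FORCE_ROOT_URL|syncer' docker-compose.yml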

Authentication

  • Register the client as an OAuth client: Sebastian does this, please ask him!

NextCloud

NextCloud and NFS

You can choose the location where NextCloud will store the users' data. You can store it in /opt/dashboard/nextcloud_data, but in some cases this partition does not have much disk space, so you may choose a different location in the filesystem.

The directory has to be exposed via NFS, or in some similar way that lets other VMs in this data centre access the data.

  • Make a directory nextcloud_data (or something similar denoting 'DATA') where NextCloud stores all its persistent user data. It should be owned by 33:33 and will be mounted to the container.
  • This directory should be exposed to services running in the same data centre but on different VMs, e.g. via NFS or some shared filesystem (in Athens, it is an NFS server! See the export sketch after the commands below.)
  • Make sure this directory is properly backed up!!! --> See wiki page by Kostas: https://github.com/SeaDataCloud/Documentation/wiki/Backup
  • Also, create a directory for NextCloud's other files (settings etc.), at /opt/dashboard/nextcloud.
# for user data
mkdir /mnt/sdc-data                       # in Oxford
#mkdir /nfs-export                        # in Athens
chown 33:33 /mnt/sdc-data

# for settings:
#mkdir /opt/dashboard/nextcloud # in Oxford
#mkdir /root/vrehome/nextcloud               # in Athens
# permissions?
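If you run the NFS server yourself (as in Athens), a minimal export sketch could look like this; the path follows the Oxford layout and the client network range is a placeholder, both must be adjusted to your data centre:

# append an export for the NextCloud data directory (placeholder client range!)
echo '/mnt/sdc-data 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra             # activate the new export
showmount -e localhost   # verify it is listed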

Changes in the "workspace" section of docker-compose.yml:

  • Add this server's FQDN to trusted_domains - as well as the FQDNs of all the processing servers, which will use NextCloud for WebDAV authentication (until the dashboard offers that endpoint); see the occ check after this list
  • Set overwritehost to this server's FQDN
  • Bind-mount the data directory (/mnt/sdc-data) to /var/www/html/data:rw
  • Bind-mount the other directory (./nextcloud) to /var/www/html:rw
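Once the workspace container is up, NextCloud's occ tool can confirm that the trusted_domains setting took effect. This sketch assumes the service is called workspace (as in this compose file) and the standard NextCloud image layout:

cd /opt/dashboard
docker-compose exec -u www-data workspace php occ config:system:get trusted_domains
# every FQDN added above should appear in this list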

Proxy

Future: There will be just one proxy

The future is now!

Create a directory for the reverse proxy and add the docker compose, the conf, and a favicon.

mkdir /opt/revproxy
cd /opt/revproxy
#mkdir /root/revproxy # in Athens
#cd /root/revproxy    # in Athens

wget https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/revproxy/docker-compose.yml
wget https://github.com/SeaDataCloud/vre-config/raw/master/revproxy/favicon-32x32.png
wget https://raw.githubusercontent.com/SeaDataCloud/vre-config/master/revproxy/nginx.conf

In the docker-compose, make sure it points to the right locations for the SSL certificates and the nginx healthcheck.

In the config, adapt the line server_name sdc-test.argo.grnet.gr; to point to your own FQDN, and adapt all other occurrences of that URL as well. There are quite a few of them.
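Instead of editing by hand, a one-liner can replace every occurrence at once (YOUR_FQDN is a placeholder for your server's name):

sed -i 's/sdc-test\.argo\.grnet\.gr/YOUR_FQDN/g' /opt/revproxy/nginx.conf
grep -c 'YOUR_FQDN' /opt/revproxy/nginx.conf   # count of replaced occurrences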

Now is the time to run it!!!

docker-compose down && docker-compose up -d && docker-compose logs --tail=100 -f
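After a minute or two, all containers should be up, and those with healthchecks should report healthy; a quick way to check:

docker-compose ps                              # all services should be "Up"
docker ps --format '{{.Names}}: {{.Status}}'   # healthchecked containers show (healthy)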

How to test the setup

From a user perspective:

  • Your server should show the VRE login page on port 443 (and redirect from 80 to 443) - see the curl sketch after this list
  • Logging in via Marine-ID should be possible
  • Opening the Private Workspace should be possible; viewing the pre-installed data should work, as should uploading and downloading data from there.
  • The sync button in the private workspace should work and not throw errors - but it is currently not possible to check whether the sync worked unless the services are deployed.
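The first check, port 443 plus the redirect from 80, can be scripted with curl (YOUR_FQDN is a placeholder):

curl -sI http://YOUR_FQDN/  | head -n 3   # expect a redirect (301/302) towards https
curl -sI https://YOUR_FQDN/ | head -n 3   # expect HTTP 200 (or a redirect to the login page)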

If you have access to the server:

  • Of course, the docker-compose logs should tell you whether anything fatal has happened.
  • Check whether the local sync works by adapting/uploading a file in NextCloud and syncing it; then check whether it was properly synced to the local sync target folder. You can also artificially create conflicts between NextCloud and the sync target and test the conflict resolution.
  • Make sure to check the backup cronjob for the NextCloud data - very important not to lose our users' data! (See the spot checks after this list.)
  • Make sure you also check the mysql backup - if the user database is corrupt, we're screwed...
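For the two backup checks, simple spot checks on the host go a long way (paths as used above; adjust them if you chose other locations):

ls -lht /opt/dashboard/mysql_backups | head   # newest SQL dump should be under an hour old
ls /mnt/sdc-data | head                       # NextCloud user data should be present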