Add new linked site - inab/openVRE GitHub Wiki
This guide explains how to register a new computational site in the openVRE MongoDB configuration.
Each site describes a computing infrastructure (local or remote) that can execute jobs through the Launcher component of the platform.
A computational site defines:
- the infrastructure metadata (name, ID, type, etc.)
- the launcher configuration, which describes how the VRE interacts with that resource (e.g. SSH, Docker, or Slurm)
Each entry is stored as a JSON object in the MongoDB collection (commonly `infrastructures` or `computing_sites`).
Below is the generic structure of a computational site entry:
```json
{
  "_id": "<unique_id>",
  "name": "<Descriptive Name>",
  "type": 2,
  "launcher": {
    "accessible_via": "<SSH | API | null>",
    "job_manager": "<Slurm | SGE | docker_SGE | ...>",
    "access_credentials": {
      "username": "<optional>",
      "private_key": "<optional>",
      "public_key": "<optional>",
      "server": "<hostname or IP>"
    },
    "partition": "<optional: cluster partition name>",
    "cpu_count": <integer>,
    "cpu_percent": <integer>,
    "gpu_count": <integer>,
    "memory": {
      "virtual": <GB>,
      "available": <GB>,
      "total": <GB>
    },
    "outbound_connectivity": <true | false>,
    "container": "<optional: Docker | Singularity>"
  },
  "sigla": "<short code>"
}
```

The following entry represents a local site that runs jobs in Docker containers managed by SGE (or a similar scheduler). It is present in the MongoDB collection by default:
```json
{
  "_id": "local",
  "name": "Local Docker Infrastructure",
  "type": 2,
  "launcher": {
    "accessible_via": null,
    "job_manager": "docker_SGE",
    "container": "Docker"
  },
  "sigla": "LOC"
}
```

The following entry represents a remote Slurm-based HPC cluster accessible via SSH. It is also present in the MongoDB collection by default:
```json
{
  "_id": "marenostrum",
  "name": "MareNostrum HPC Infrastructure",
  "type": 2,
  "launcher": {
    "accessible_via": "SSH",
    "job_manager": "Slurm",
    "access_credentials": {
      "username": "your_username",
      "private_key": "/path/to/private/key",
      "public_key": "/path/to/public/key",
      "server": "marenostrum.bsc.es"
    },
    "partition": "general",
    "cpu_count": 4,
    "cpu_percent": 100,
    "gpu_count": 0,
    "memory": {
      "virtual": 16,
      "available": 16,
      "total": 16
    },
    "outbound_connectivity": true
  },
  "sigla": "MN"
}
```
Tip: Ensure that SSH key-based access is configured for the specified user. With Slurm as the job manager, the launcher integrates with `sbatch`, `squeue`, and similar commands.
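Setting up that key-based access can be sketched as follows; the key path is only an example, and the cluster hostname and username must be substituted with your own:

```shell
# Generate a dedicated, passphrase-less key pair for the launcher
# (it connects non-interactively). The key path here is only an example.
ssh-keygen -t ed25519 -f ./vre_hpc_key -N "" -q
ls -l vre_hpc_key vre_hpc_key.pub
```

Install the public key on the cluster with `ssh-copy-id -i vre_hpc_key.pub <user>@<server>` and confirm non-interactive access with `ssh -o BatchMode=yes <user>@<server> sinfo` before pointing `access_credentials` at the key files.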
To add your own infrastructure:

1. Adapt one of the above examples to match your system parameters.

2. Update the JSON file with your new configuration (e.g. `/openVRE-core-dev/mongodb/init_documents/sites.json`).

3. Insert it into MongoDB, for example using the Mongo shell or `mongosh`:

   ```javascript
   use openvre
   db.sites.insertOne({ /* paste your site document here */ })
   ```

   or, if using `mongosh` and the file is stored locally (note the double quotes, so the shell expands `$(cat …)`):

   ```shell
   mongosh "mongodb://localhost:27017/vre_db" --eval "db.sites.insertOne($(cat sites.json))"
   ```

4. Restart the VRE backend so it can load the new configuration.
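Before inserting the file into MongoDB, it can help to syntax-check it, since a malformed document makes `insertOne` fail. A minimal sketch using Python's built-in `json.tool`; the inline heredoc stands in for your real `sites.json`:

```shell
# Write a stand-in site document, then validate it. With your own file,
# skip the heredoc and point json.tool at the file directly.
cat > sites.json <<'EOF'
{ "_id": "new_hpc", "type": 2, "sigla": "new_HPC" }
EOF
python3 -m json.tool sites.json > /dev/null && echo "sites.json is valid JSON"
```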
Here’s a ready-to-customize template:
```json
{
  "_id": "new_hpc",
  "name": "New HPC Cluster",
  "type": 2,
  "launcher": {
    "accessible_via": "SSH",
    "job_manager": "Slurm",
    "access_credentials": {
      "username": "vre_user",
      "private_key": "/path/to/id_rsa",
      "public_key": "/path/to/id_rsa.pub",
      "server": "hpc.example.org"
    },
    "partition": "compute",
    "cpu_count": 16,
    "cpu_percent": 100,
    "gpu_count": 2,
    "memory": {
      "virtual": 64,
      "available": 64,
      "total": 64
    },
    "outbound_connectivity": true
  },
  "sigla": "new_HPC"
}
```
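Once customized, the template can also be checked for the top-level fields described in the generic structure above. A sketch using only the Python standard library; the trimmed heredoc stands in for your edited file:

```shell
# Validate that a site entry carries the expected top-level fields.
# The heredoc is a trimmed stand-in for your customized template.
cat > new_site.json <<'EOF'
{
  "_id": "new_hpc",
  "name": "New HPC Cluster",
  "type": 2,
  "launcher": { "accessible_via": "SSH", "job_manager": "Slurm" },
  "sigla": "new_HPC"
}
EOF
python3 - <<'EOF'
import json

with open("new_site.json") as fh:
    site = json.load(fh)

# Field list taken from the generic structure documented above.
for key in ("_id", "name", "type", "launcher", "sigla"):
    assert key in site, f"missing top-level field: {key}"
assert "job_manager" in site["launcher"], "launcher needs a job_manager"
print("new_site.json looks structurally valid")
EOF
```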