Running the code in a development environment

Developing, building, and running locally

To develop the code and make use of IDE tools such as debuggers, it is recommended to first follow the instructions in the README for setting up your .env files and for learning how to run the application using Docker.

We recommend PyCharm as an IDE; other IDEs can be used, but we will not be able to give as much support for them.

Requirements:

  • Docker
  • Anaconda or Miniconda
  • Node.js/NPM
  • PyCharm IDE

Run visualisation web server locally:

Instructions to build and run the visualisation locally can be found at visualisation/README.md. For full functionality, all services other than www must be running; they can run either in Docker or on the host machine. The easiest way to start them is with the command docker compose up -d celery_worker backend.
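
For example, a minimal sketch using those service names (the docker compose ps check is an optional way to confirm the containers are up):

    # Start the supporting services in the background
    docker compose up -d celery_worker backend
    # Optionally, confirm the containers are running before starting the visualisation dev server
    docker compose ps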

Currently, the visualisation is tightly coupled to specific versions of the visualisation library geo-visualisation-components; work to resolve this is ongoing.

Run Backend and Celery Workers locally:

  1. Start by installing the conda environment for Python dependencies:
    #!/usr/bin/env bash
    # Mamba is much faster than conda for solving complex dependency requirements
    # (mamba is distributed via the conda-forge channel)
    conda install -c conda-forge mamba
    mamba env create -f environment.yml
    # conda activate works for mamba-created environments without extra shell setup
    conda activate digitaltwin
    
  2. Download and unzip the BG Flood Windows executable.
    1. If you are using Linux, you will need to build the binary from source; follow the instructions at the BG Flood repo.
  3. Set the FLOOD_MODEL_DIR variable in your .env file to point to the unzipped BG Flood directory.
  4. Set the DATA_DIR variable in your .env file to point to a new directory in which you want to store all the Digital Twin data files (an example .env snippet is shown after this list).
  5. Start necessary supplementary services in developer mode (allowing bind mounting) with docker compose -f docker-compose.yml -f docker-compose-dev.yml up -d message_broker geoserver www db_postgres.
  6. You can run the backend Flask service locally with the following command, using your conda environment:
    flask --app src.app run
    
  7. You can start the Celery workers with the following command, using your conda environment (optional development variants of this and the previous command are sketched after this list):
    celery -A src.tasks worker -P threads --loglevel=INFO
    
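As an illustration of steps 3 and 4, a .env snippet might look like the following; the paths are hypothetical examples only, so substitute your own locations:

    # Example .env entries (illustrative paths only)
    FLOOD_MODEL_DIR=C:/bg_flood
    DATA_DIR=C:/digital-twin-data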

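For steps 6 and 7, some optional development variants; these assume Flask 2.2 or newer (for the --debug flag) and standard Celery CLI options:

    # Flask with auto-reload and the interactive debugger enabled (Flask 2.2+)
    flask --app src.app run --debug
    # Celery with an explicit worker thread count (tune to your machine)
    celery -A src.tasks worker -P threads --concurrency=4 --loglevel=INFO
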
Running processing as a script / debugging:

For quicker development and access to debuggers, it may be necessary to run the code directly in your IDE.

If you use Windows, the data volume for the database must be separate from the one you use when running processing in Docker, since the filesystems have different structures.

Follow these steps:

  1. Follow steps 1-5 in the previous section.
  2. Add the digitaltwin conda environment as the interpreter in your IDE.
  3. The command to run all processing on a predefined polygon is python -m src.run_all. In PyCharm, this can be set up by creating a run configuration for src/run_all.py with the working directory set to the repository root, Digital-Twins/.
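
Equivalently, a minimal terminal run (from the repository root, so that the src package resolves):

    # From the repository root (Digital-Twins/), with the conda environment active
    conda activate digitaltwin
    python -m src.run_all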