Basic Usage instructions

GeoFabrics is designed to be used in one of three ways, either by:

  • Installing the package and using the command line interface (CLI) commands geofabrics_from_file or geofabrics_from_dict
  • Using the main entry point defined in the __main__ module by calling geofabrics(instructions="path_to_your_instructions_file") in Python
  • Importing the package and using the processor or runner module classes directly

If you haven't yet installed GeoFabrics, check out Package Install Instructions.

If you have cloned the repository, you can also run GeoFabrics using the entry point defined in the __main__ module by calling the following from the repository's src folder: python -m geofabrics --instructions path_to_your_instructions_file

Notes on CLI & entry points

The GeoFabrics CLI / entry points are designed to run all the relevant DEM generation pipeline steps for a given instruction file to produce a DEM. The instruction file is checked for the top-level keywords rivers, waterways, dem and roughness, in that order, and the corresponding processor classes are called in turn:

  • If the rivers keyword is present the RiverBathymetryGenerator processor class is called.
  • If the waterways keyword is present the DrainBathymetryGenerator processor class is called.
  • If the dem keyword is present the RawLidarDemGenerator and HydrologicDemGenerator processor classes are called.
  • If the roughness keyword is present the RoughnessGenerator processor class is called.

More information about the behaviour of each processing task can be found under Package structure.
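
As a rough illustration of that structure, the skeleton below shows the same four top-level keywords arranged as a Python dictionary (the JSON instruction file has the same shape). The empty sub-dictionaries are placeholders, not real settings; the keywords accepted inside each section are documented under Instruction file contents.

# Skeleton of an instruction set. The empty sub-dictionaries are placeholders
# only; see "Instruction file contents" for the settings each section accepts.
instructions = {
    "rivers": {},     # river bathymetry estimation (RiverBathymetryGenerator)
    "waterways": {},  # waterway bathymetry estimation (DrainBathymetryGenerator)
    "dem": {},        # raw LiDAR and hydrologically conditioned DEM generation
    "roughness": {},  # roughness estimation (RoughnessGenerator)
}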

Command Line Interface

If you have installed GeoFabrics in a virtual environment, two CLI commands are available from that environment's command line (e.g. a conda prompt):

geofabrics_from_file --instructions full\path\to\instruction.json

or

geofabrics_from_dict --instructions your_instructions_dictionary
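
If the instructions are assembled programmatically, one simple option is to write the dictionary out as a JSON file and pass that file to geofabrics_from_file. A minimal sketch, with placeholder contents:

import json

# Placeholder instruction dictionary - see "Instruction file contents" for the
# real keywords each section accepts.
instructions = {"dem": {}, "roughness": {}}

# Write the dictionary to a JSON instruction file for use with the CLI.
with open("instruction.json", "w") as file_pointer:
    json.dump(instructions, file_pointer, indent=2)

The resulting instruction.json can then be passed to geofabrics_from_file with --instructions instruction.json, exactly as above.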

Package entry point

The GeoFabrics CLI entry points are contained in the main module. The main one can be accessed from a virtual environment with GeoFabrics installed by:

python -m geofabrics --instructions full\path\to\instruction.json

The same command can also be used in the root folder of a locally cloned repository.

Importing GeoFabrics

Once installed, the GeoFabrics processor or runner modules can be directly imported and used. A basic code stub in a Python interpreter looks like:

from geofabrics import processor
import json

# Load the JSON instruction file into a dictionary
with open(r'path\to\file.json', 'r') as file_pointer:
    instructions = json.load(file_pointer)

# Create the processor for the raw LiDAR DEM stage and run it
runner = processor.RawLidarDemGenerator(instructions)
runner.run()
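
To mirror what the CLI does for a full instruction file, the stages can also be chained manually. The sketch below is illustrative only: it assumes each processor class follows the same constructor and run() pattern as the stub above, so check the processor module for the exact signatures.

from geofabrics import processor
import json

# Load the instruction file once and hand it to each stage in CLI order.
with open(r'path\to\file.json', 'r') as file_pointer:
    instructions = json.load(file_pointer)

if "rivers" in instructions:
    processor.RiverBathymetryGenerator(instructions).run()
if "waterways" in instructions:
    processor.DrainBathymetryGenerator(instructions).run()
if "dem" in instructions:
    processor.RawLidarDemGenerator(instructions).run()
    processor.HydrologicDemGenerator(instructions).run()
if "roughness" in instructions:
    processor.RoughnessGenerator(instructions).run()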

Information about accepted instruction file keywords can be found in the Wiki under Instruction file contents.

Entry point scripts

Alternatively, two scripts, main.py and benchmarking.py, are provided in the src folder to facilitate using the GeoFabrics package as a standalone tool. These can be run in the conda environment defined in root\environment_linux.yml or root\environment_window.yml with the following (substitute benchmarking.py for main.py as needed):

python src\main.py --instructions full\path\to\instruction.json
  • main.py - this runs the relevant DEM generation pipeline class(es) given an instruction file to produce a DEM. The instruction file is checked for the top-level keywords rivers, drains, dem and roughness, in that order, and the corresponding processor classes are called in turn:
    • If the rivers keyword is present the RiverBathymetryGenerator processor class is called.
    • If the drains keyword is present the DrainBathymetryGenerator processor class is called.
    • If the dem keyword is present the RawLidarDemGenerator and HydrologicDemGenerator processor classes are called.
    • If the roughness keyword is present the RoughnessGenerator processor class is called.

More information about the behaviour of each processing task can be found under Package structure.

  • benchmarking.py - this runs the RawLidarDemGenerator class over a dataset (ideally a small subset of the catchment) for a range of chunk_sizes and numbers_of_cores at a given resolution, so that the best-performing combination can be selected for the full-sized catchment. A plot showing the execution time of each combination is produced. More details can be found on the Performance and Benchmarking wiki page. An illustrative timing loop is sketched below.
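
As a rough picture of what benchmarking.py measures, the loop below times a small RawLidarDemGenerator run for each combination of chunk size and core count. The file name and the instruction keys used to set chunk_size and number_of_cores here are hypothetical; the real options live in benchmarking.py and on the Performance and Benchmarking wiki page.

import itertools
import json
import time
from geofabrics import processor

# Use an instruction file covering a small subset of the catchment.
with open(r'path\to\subset_instruction.json', 'r') as file_pointer:
    instructions = json.load(file_pointer)

timings = {}
for chunk_size, number_of_cores in itertools.product([100, 300, 500], [1, 2, 4]):
    # Hypothetical keys - benchmarking.py manages these settings itself.
    instructions["dem"]["chunk_size"] = chunk_size
    instructions["dem"]["number_of_cores"] = number_of_cores
    start = time.perf_counter()
    processor.RawLidarDemGenerator(instructions).run()
    timings[(chunk_size, number_of_cores)] = time.perf_counter() - start

# Print combinations from fastest to slowest.
for combination, seconds in sorted(timings.items(), key=lambda item: item[1]):
    print(combination, f"{seconds:.1f} s")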