
Getting Started

Download

Clone the repository:

git clone --recursive https://github.com/noma/dm-heom

Faster alternative for Git >=2.8:

git clone --recursive -j8 https://github.com/noma/dm-heom

Dependencies

Required:

  • a C++11 compiler, e.g. GCC >= 4.9.0
    • 4.8.x has incomplete regex support!
    • Clang >= 3.3.0 should work too (3.9.0 is tested)
    • Intel >= 16.0.0 should work too (17.0.1 is tested)
    • Clang and Intel use the GCC standard library on most systems and thus still need a GCC >= 4.9.0
  • CMake >= 2.8.12
  • Boost >= 1.60 is tested; older versions probably work too
    • Filesystem, Format, Functional, Lexical Cast, Program Options, Math, UBLAS
    • Boost is available through the package system of most distributions, make sure to install the developer packages too.
    • Boost needs to be built with the same compiler used for DM-HEOM to ensure ABI compatibility.
  • OpenCL >= 1.2 (We currently only use what's specified within 1.2 for portability reasons.)

For distributed runs:

  • an MPI >= 3.0 implementation
    • MPI_Neighbor_alltoallw() is the performance critical part

For production runs:

  • METIS for creating graph partitionings
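
To quickly check which compiler and CMake versions your environment provides, plain shell checks suffice (nothing DM-HEOM specific; the Boost header path below is a common default and may differ on your system):

g++ --version
cmake --version
# Boost version as seen through the installed headers
grep BOOST_LIB_VERSION /usr/include/boost/version.hpp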

In older environments, a newer C++ compiler, CMake, or Boost can be built and installed inside the user's home directory.
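
As a rough sketch of such a user-level install, Boost can be built with the chosen compiler and installed under the home directory. The archive name and install prefix below are placeholders; BOOST_ROOT is the standard CMake FindBoost hint, adapt everything to your setup:

# unpack a Boost release and build it with the compiler from your environment
tar xf boost_1_60_0.tar.gz && cd boost_1_60_0
./bootstrap.sh --prefix=$HOME/opt/boost
./b2 install
# later, point CMake at this Boost installation:
# cmake -DBOOST_ROOT=$HOME/opt/boost ../dm-heom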

Make sure an OpenCL SDK (Intel, Nvidia, AMD, PoCL, ...) is installed, e.g. check whether /etc/OpenCL/vendors exists and is non-empty. It is also possible to install an SDK in your home directory, even though you cannot change the system's /etc/OpenCL/vendors path as a regular user. Doing so requires either an OpenCL ICD loader that supports the OPENCL_VENDOR_PATH environment variable, or linking directly against the provided OpenCL driver; see the OpenCL section in Troubleshooting.
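
For example, a minimal check of the system-wide ICD registration, and the environment-variable route for an SDK installed in the home directory (the SDK path below is a placeholder and assumes an ICD loader that honours OPENCL_VENDOR_PATH):

# list the registered OpenCL implementations (ICD files)
ls /etc/OpenCL/vendors
# for a user-level SDK, point the ICD loader at its vendors directory
export OPENCL_VENDOR_PATH=$HOME/opt/opencl-sdk/etc/OpenCL/vendors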

Building

DM-HEOM uses CMake. It allows multiple builds to reside in different sub-directories that can have arbitrary names and locations, e.g. build, build.debug, or build.release_clang. The default would be to just create build within the repository's cpp folder. The path at the end of the cmake command, e.g. .., refers to the directory containing the CMakeLists.txt to use. Read to the end before pasting anything into the terminal: the actual CMake command line depends on the system environment, so we start by describing the options.

A simple release build (Release is set as the default build-type):

mkdir build
cd build
cmake ../dm-heom

For a development build, set the build type to Debug. This will enable plenty of output during application runtime and is not recommended for production runs.

cmake -DCMAKE_BUILD_TYPE=Debug ../dm-heom

To include the distributed MPI executables, add -DHEOM_ENABLE_MPI=TRUE to the cmake command line, e.g.

cmake -DHEOM_ENABLE_MPI=TRUE ../dm-heom

The MPI executables are prefixed with app_mpi, e.g.

app_mpi_population_dynamics

Here are some additional ways to call CMake; they can be combined with each other and with the options above.

To use a custom compiler with CMake, e.g. Clang, or to make sure a non-system compiler found via the PATH environment variable is actually used:

CXX=`which g++` CC=`which gcc` cmake ../dm-heom
CXX=`which clang++` CC=`which clang` cmake ../dm-heom
CXX=`which icpc` CC=`which icc` cmake ../dm-heom
CXX=`which CC` CC=`which cc` cmake ../dm-heom

A typical CMake command line with activated MPI support using a custom GCC from the environment could look like this:

CXX=`which g++` CC=`which gcc` cmake -DHEOM_ENABLE_MPI=TRUE ../dm-heom

Depending on the installed OpenCL implementation, additional options might be required, see Troubleshooting.

If CMake ran without errors, it has created a Makefile project within the build directory. Now we can simply use make to build everything (-j performs a parallel build; -j 4 would limit it to 4 parallel build processes):

make -j

To perform a selective build, just name one or more executables:

make -j app_mpi_population_dynamics
make -j app_population_dynamics
make -j app_linear_absorption

To get more output including the generated compiler and linker commands, add VERBOSE=1 to the make call:

make VERBOSE=1
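
Putting the pieces above together, a complete out-of-source build of the distributed population dynamics app with a custom GCC from the environment might look like this (directory names are examples, adapt them to your layout):

mkdir build.release_gcc
cd build.release_gcc
CXX=`which g++` CC=`which gcc` cmake -DHEOM_ENABLE_MPI=TRUE ../dm-heom
make -j app_mpi_population_dynamics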

Running

First, you need to decide which OpenCL platform and device on the target machine should be used. The ocl_info tool within the build folder provides an enumerated list of OpenCL platforms and their devices (it is created automatically when you run make -j in the build folder):

build/ocl_info

You can run it by calling

./ocl_info

from the build directory.

Everything related to how DM-HEOM computes the results using OpenCL is configured in a config file. This configuration is system-dependent, so it is kept as a separate file. A simple default configuration is provided in data/ocl_config.cfg. If your platform and device index are both 0, you can use it directly. Otherwise, make a copy and set the device and platform index according to your needs.
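
For example, to keep a system-specific copy next to the build (the file name is arbitrary; the relevant platform and device keys can be seen in data/ocl_config.cfg):

cp ../data/ocl_config.cfg my_ocl_config.cfg
# edit my_ocl_config.cfg and set the platform and device index reported by ocl_info

This copy is then passed as the first argument to the applications, as described below.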

For OpenCL kernel development, the OpenCL configuration file can optionally contain paths to kernel source files (see data/ocl_config_devel.cfg), which are then loaded at runtime. This overrides the embedded kernel sources that CMake generates from the *.cl files inside the cl directory, and allows quick development cycles without having to recompile the application. If this option is used, relative paths are interpreted relative to the working directory of the application, i.e. the folder it was started in. If no files are specified, the source code embedded in the executable is used.

Every application takes at least two configuration files as arguments. The first one is always the OpenCL configuration file, the second one is the input configuration file specifying the physical experiment simulated by the app, as well as numerical parameters and application options. The MPI applications take an optional graph partitioning as a third argument, which can be generated by external tools like METIS. If not specified, a rather simple internal heuristic is used to partition the graph. For now, METIS is the recommended way of generating partitionings.
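
As a sketch of the METIS route: assuming a METIS graph file named fmo_population_dynamics_77K.cfg.graph is available (the file name follows the third-argument example below; how the graph file is produced is not covered here), gpmetis from the METIS tools can generate a partitioning for 4 MPI ranks:

gpmetis fmo_population_dynamics_77K.cfg.graph 4
# writes fmo_population_dynamics_77K.cfg.graph.part.4, which is passed as the third argument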

Here are some examples of how to run:

cd build

# simple population dynamics
./app_population_dynamics ../data/ocl_config.cfg ../data/example_fmo/fmo_population_dynamics_77K.cfg

# distributed population dynamics using MPI and internal partitioning
mpirun -n 4 ./app_mpi_population_dynamics ../data/ocl_config.cfg ../data/example_fmo/fmo_population_dynamics_77K.cfg

# distributed population dynamics using MPI with a METIS generated partitioning
mpirun -n 4 ./app_mpi_population_dynamics ../data/ocl_config.cfg ../data/example_fmo/fmo_population_dynamics_77K.cfg fmo_population_dynamics_77K.cfg.graph.part.4

See apps and scripts for a list and description of all available executables.