Advanced compilation with MPI

Additional packages

PETSc installation for parallel linear solver

To compile PETSc and use it as the parallel linear solver in the MPM code, the following step-by-step commands are suggested. Note that PETSc is only needed for the semi-implicit and fully implicit solvers; the explicit solvers can be run without installing PETSc.

Removing some previously installed libraries

Since the same version of OpenMPI must be used to compile both PETSc and MPM, first uninstall some pre-installed libraries.

sudo apt remove libboost-all-dev
sudo apt remove libopenmpi-dev

If any OpenMPI files remain after removal, delete them manually.
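If it is unclear which files are left over, the following sketch (assuming a Debian/Ubuntu system with default install locations) shows one way to find and purge them:

# List any OpenMPI packages that are still installed
dpkg -l | grep -i openmpi

# Locate leftover MPI binaries from a previous installation
which mpicc mpicxx mpirun

# Purge remaining packages together with their configuration files
sudo apt purge openmpi-bin openmpi-common libopenmpi-dev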

Reinstall OpenMPI

Then reinstall OpenMPI as follows:

  1. Download "openmpi-4.1.5.tar.gz" from the official site (https://www.open-mpi.org/software/ompi/v4.1/).
  2. Extract the archive.
tar -zxvf openmpi-4.1.5.tar.gz
  3. Build and install OpenMPI (installed to /usr/local/openmpi-4.1.5).
cd openmpi-4.1.5
./configure --prefix=/usr/local/openmpi-4.1.5 CC=gcc CXX=g++ FC=gfortran
make all
sudo make install
  4. Add the following lines to ~/.bashrc.
MPIROOT=/usr/local/openmpi-4.1.5
PATH=$MPIROOT/bin:$PATH
LD_LIBRARY_PATH=$MPIROOT/lib:$LD_LIBRARY_PATH
MANPATH=$MPIROOT/share/man:$MANPATH
export MPIROOT PATH LD_LIBRARY_PATH MANPATH
  5. Reload bash.
source ~/.bashrc
  6. Check the installation (a quick sanity check follows this list).
mpicc -v
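For the sanity check mentioned above, you can compile and run a minimal MPI program with the freshly installed toolchain (not part of the original steps; the file name and process count below are arbitrary):

# Write a minimal MPI hello-world program
cat > mpi_hello.c << 'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Hello from rank %d of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}
EOF

# Compile and run on 4 processes with the new OpenMPI
mpicc mpi_hello.c -o mpi_hello
mpirun -np 4 ./mpi_hello

Each rank should print its own line, confirming that mpicc and mpirun resolve to the installation in /usr/local/openmpi-4.1.5.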

Install PETSc

  1. Download PETSc using git.
git clone https://gitlab.com/petsc/petsc.git petsc
  2. Configure the PETSc installation.
cd petsc
./configure --with-mpi-dir=/usr/local/openmpi-4.1.5 --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-fblaslapack=1
  3. Build and check PETSc, replacing PETSC_DIR with the path to your PETSc clone (see the environment note after this list).
make PETSC_DIR=/home/user/workspace/petsc PETSC_ARCH=arch-linux-c-opt all
make PETSC_DIR=/home/user/workspace/petsc PETSC_ARCH=arch-linux-c-opt check
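The MPM build in the last section needs to find this PETSc installation through PETSC_DIR and PETSC_ARCH. One convenient option (an assumption, not required by the steps above) is to export them in ~/.bashrc, adjusting PETSC_DIR to wherever PETSc was cloned:

# Add to ~/.bashrc (adjust PETSC_DIR to your PETSc clone)
export PETSC_DIR=$HOME/workspace/petsc
export PETSC_ARCH=arch-linux-c-opt

source ~/.bashrc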

Reinstall boost

sudo apt install libboost-all-dev
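Optionally, confirm that the Boost development files are back in place; the version reported will depend on your distribution:

# Check the installed Boost version
dpkg -s libboost-dev | grep Version
grep BOOST_LIB_VERSION /usr/include/boost/version.hpp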

KaHIP installation for domain decomposition

git clone https://github.com/KaHIP/KaHIP && cd KaHIP
sh ./compile_withcmake.sh
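The MPM CMake configuration below points -DKAHIP_ROOT at the KaHIP directory. The sketch below assumes KaHIP was cloned into ~/workspace and that the build script writes its output into a deploy/ folder; adjust the paths to match your setup:

# Confirm that the KaHIP libraries were built (the script typically writes them to deploy/)
ls ~/workspace/KaHIP/deploy/

# Record the KaHIP location used by the CMake flags in the next section
export KAHIP_ROOT=~/workspace/KaHIP/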

Compile with MPI

The Geomechanics MPM code can be compiled with MPI to distribute the workload across compute nodes in a cluster.

Additional steps to load OpenMPI on Fedora:

source /etc/profile.d/modules.sh
export MODULEPATH=$MODULEPATH:/usr/share/modulefiles
module load mpi/openmpi-x86_64
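A quick check (not part of the original steps) that the module was loaded and the MPI wrappers are on the PATH:

module list
which mpicxx
mpirun --version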

Compile with OpenMPI (with halo exchange):

mkdir build && cd build 
export CXX_COMPILER=mpicxx
cmake -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ -DHALO_EXCHANGE=On ..
make -jN   # replace N with the number of parallel build jobs
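A sketch of running the resulting MPI build, assuming the usual ./mpm -f <working-dir> -i <input.json> invocation and a placeholder benchmark path; adjust both to your executable's actual options and input files:

# Run the MPI-enabled MPM on 8 tasks (paths and options are placeholders)
mpirun -np 8 ./mpm -f /path/to/benchmark/ -i mpm.json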

Compile with OpenMPI (with halo exchange and PETSc):

mkdir build && cd build 
export PETSC_ARCH=arch-linux-c-opt
export PETSC_DIR=/workspace/petsc   # adjust to the directory where PETSc was cloned
export CXX_COMPILER=mpicxx
cmake -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ -DHALO_EXCHANGE=On -DUSE_PETSC=On ..
make -jN   # replace N with the number of parallel build jobs
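To confirm that the resulting binary was actually linked against PETSc (a simple check, not part of the original steps; if PETSc was built statically, nothing will be printed):

# The PETSc shared library should appear in the dependency list
ldd ./mpm | grep -i petsc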

To enable halo exchange, set -DHALO_EXCHANGE=On in CMake. Halo exchange is a more efficient MPI communication scheme, but use it only with a larger number of MPI tasks (e.g., more than 4).
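For smaller runs, a sketch of the corresponding configuration without halo exchange; omitting the flag is assumed to leave it disabled:

# Configure without halo exchange for small MPI task counts (e.g. 4 or fewer)
cmake -DCMAKE_BUILD_TYPE=Release -DKAHIP_ROOT=~/workspace/KaHIP/ ..
make -jN   # replace N with the number of parallel build jobs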