Compiling FDS with GNU Fortran on x86 macOS

This tutorial is intended for advanced users who want to compile the FDS source code using the latest GNU Fortran distribution from Homebrew, Open MPI built from source, and linking against the Intel Performance Libraries on macOS. The following instructions were developed for an x86 hardware system. If you are looking for compilation guidance for Apple silicon, please go here.

Installing Homebrew

Navigate to the Homebrew website https://brew.sh and follow the installation instructions.

Installing GNU Fortran using Homebrew

In your command window type:

$ brew update
$ brew install gcc
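
To verify the installation, check that gfortran is found on your PATH (on x86, Homebrew links it into /usr/local/bin):

$ which gfortran
/usr/local/bin/gfortran
$ gfortran --version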

Installing Open MPI from source

If you have Open MPI installed through Homebrew, you will need to uninstall it first (brew uninstall open-mpi), because it is incompatible with the MKL-MPI library wrapper compilation described here. Download the compressed Open MPI source (in this example, the stable release openmpi-4.0.2.tar.gz) from https://www.open-mpi.org/software/ompi/v4.0, place it in a compilation directory of your choice (for example, a Software directory in ~/Documents), and type:

$ tar -xvf openmpi-4.0.2.tar.gz
$ cd openmpi-4.0.2

In the openmpi directory configure the installation:

$ ./configure FC=gfortran CC=gcc --enable-mpi1-compatibility --prefix=/usr/local/openmpi4.0gcc --enable-mpirun-prefix-by-default --enable-mpi-fortran --enable-static --disable-shared --with-hwloc=internal --with-libevent=internal

Then make and install:

$ make -j 2
$ sudo make install

Your mpifort and mpirun executables will be located in /usr/local/openmpi4.0gcc/bin, and the libraries FDS links against will be located in /usr/local/openmpi4.0gcc/lib. Make these directories available to your environment by adding the following to your ~/.zprofile (if using zsh) or ~/.bash_profile (if using bash) startup file:

# MPI Library:
export MPIDIST=/usr/local/openmpi4.0gcc
export PATH=$MPIDIST/bin:$PATH
export LD_LIBRARY_PATH=$MPIDIST/lib:$LD_LIBRARY_PATH
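
You can apply these changes to the current shell immediately by sourcing the startup file (zsh shown; use ~/.bash_profile for bash):

$ source ~/.zprofile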

Alternatively, either log out and log back in, or open a new terminal, to apply your environment changes. Then check that you are locating these executables:

$ which mpirun
/usr/local/openmpi4.0gcc/bin/mpirun

This should show where your Open MPI executables (the mpifort and mpicc compiler wrappers, mpirun, etc.) are located. If which mpirun does not return /usr/local/openmpi4.0gcc/bin/mpirun as installed from source, check that your startup file is not already pointing to a different MPI distribution, possibly from an FDS bundle installation.
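
One way to check for a competing distribution is to inspect the MPIDIST variable and search your startup files for it (adjust the file list to the startup files you actually use):

$ echo $MPIDIST
$ grep -n MPIDIST ~/.zprofile ~/.bash_profile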

Installing Intel Math Kernel Library

Follow the instructions provided on this wiki for macOS users.
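
The FDS build picks up MKL through the MKLROOT environment variable. A typical way to set it is to source the vendor script from your startup file; the path below matches the 2018-era MKL install that appears later in this guide and is an assumption, so adjust it to your own installation:

# Intel MKL (example path; adjust to your install):
source /opt/intel18/compilers_and_libraries_2018.2.164/mac/mkl/bin/mklvars.sh intel64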

Compiling Sundials (optional)

To run finite-rate chemistry using the CVODE solver, you need to install Sundials from Lawrence Livermore National Laboratory.

Step 1: Brew install LLVM.

$ brew install llvm
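
Homebrew's llvm formula is keg-only, meaning it is not linked into /usr/local/bin; on x86 it lives under /usr/local/opt/llvm. Verify that the compilers are present:

$ /usr/local/opt/llvm/bin/clang --version
$ /usr/local/opt/llvm/bin/clang++ --version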

Step 2: Download the sundials-6.7.0.tar.gz file (currently the only version supported by FDS) and extract it in a local directory in your user account, referred to below as <path-to-sundials>.
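
A typical way to fetch and extract the tarball from the command line (the URL follows the LLNL GitHub release naming pattern and is an assumption; verify it against the Sundials download page):

$ cd <path-to-sundials>
$ curl -LO https://github.com/LLNL/sundials/releases/download/v6.7.0/sundials-6.7.0.tar.gz
$ tar -xvf sundials-6.7.0.tar.gz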

Step 3: Run CMake on Sundials.

$ cd <path-to-sundials>/sundials-6.7.0
$ mkdir BUILDDIR
$ mkdir INSTDIR
$ cd BUILDDIR

Next, copy the following into a text file and replace <path-to-sundials> with the full path of the Sundials parent directory. Note that on x86 macOS, Homebrew installs under /usr/local, so the LLVM compiler paths below point to /usr/local/opt/llvm (on Apple silicon they would live under /opt/homebrew instead). Then paste the result back into your terminal and hit enter.

$ cmake ../ \
-DCMAKE_INSTALL_PREFIX=<path-to-sundials>/sundials-6.7.0/INSTDIR \
-DEXAMPLES_INSTALL_PATH=<path-to-sundials>/sundials-6.7.0/INSTDIR/examples \
-DCMAKE_C_COMPILER=/usr/local/opt/llvm/bin/clang \
-DCMAKE_CXX_COMPILER=/usr/local/opt/llvm/bin/clang++ \
-DBUILD_FORTRAN_MODULE_INTERFACE=ON \
-DEXAMPLES_ENABLE_CXX=ON \
-DEXAMPLES_ENABLE_CUDA=OFF \
-DEXAMPLES_ENABLE_F2003=ON \
-DENABLE_OPENMP=ON

Finally, make and make install.

$ make
$ make install
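
To confirm the installation, list the library directory in INSTDIR; you should see the static Sundials libraries (exact file names vary with the configuration options chosen above):

$ ls <path-to-sundials>/sundials-6.7.0/INSTDIR/lib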

Step 4: Set the Sundials environment variables in your ~/.zprofile or ~/.bash_profile startup file:

# Sundials:
export SUNDIALS_HOME=<path-to-sundials>/sundials-6.7.0/INSTDIR
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path-to-sundials>/sundials-6.7.0/INSTDIR/lib
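
As before, open a new terminal or source the startup file, then verify that the variable is set:

$ echo $SUNDIALS_HOME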

Compiling

It is advisable to open a new terminal to ensure your $PATH is clean.

Make sure that your mpirun command is the one installed previously in /usr/local/openmpi4.0gcc/bin/mpirun.

$ which mpirun
/usr/local/openmpi4.0gcc/bin/mpirun

Also, make sure that mpifort points to the gfortran installed via Homebrew:

$ mpifort -v
Using built-in specs.
COLLECT_GCC=/usr/local/bin/gfortran
COLLECT_LTO_WRAPPER=/usr/local/Cellar/gcc/9.2.0_1/libexec/gcc/x86_64-apple-darwin18/9.2.0/lto-wrapper
Target: x86_64-apple-darwin18
Configured with: ../configure --build=x86_64-apple-darwin18 --prefix=/usr/local/Cellar/gcc/9.2.0_1 --libdir=/usr/local/Cellar/gcc/9.2.0_1/lib/gcc/9 --disable-nls --enable-checking=release --enable-languages=c,c++,objc,obj-c++,fortran --program-suffix=-9 --with-gmp=/usr/local/opt/gmp --with-mpfr=/usr/local/opt/mpfr --with-mpc=/usr/local/opt/libmpc --with-isl=/usr/local/opt/isl --with-system-zlib --with-pkgversion='Homebrew GCC 9.2.0_1' --with-bugurl=https://github.com/Homebrew/homebrew-core/issues --disable-multilib --with-native-system-header-dir=/usr/include --with-sysroot=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
Thread model: posix
gcc version 9.2.0 (Homebrew GCC 9.2.0_1) 

Next, go to your updated FDS repository. You should be able to build the mpi_gnu_osx_64 and mpi_gnu_osx_64_db targets by typing, for example:

$ cd Build/mpi_gnu_osx_64_db
$ ./make_fds.sh
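
If the build succeeds, the executable is placed in the same build directory and is named after the target; for the debug target shown here that would be fds_mpi_gnu_osx_64_db (this name is inferred from the release executable used later in this guide):

$ ls fds_mpi_gnu_osx_64_db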

Check that the preprocessor flag -DWITH_MKL is being passed to the compiler in both the compile and link phases. For example, for the first file being compiled (prec.f90):

mpifort -c -m64 -O0 -std=f2008 -ggdb -Wall -Wcharacter-truncation -Wno-target-lifetime -fcheck=all -fbacktrace 
-ffpe-trap=invalid,zero,overflow -ffpe-summary=none -fall-intrinsics -cpp -DGITHASH_PP=\"FDS6.7.3-396-g6d163be33-dirty-master\" 
-DGITDATE_PP=\""Tue Jan 21 15:46:34 2020 -0500\"" -DBUILDDATE_PP=\""Jan 28, 2020  15:16:42\"" -DCOMPVER_PP=\""Gnu gfortran GCC"\" 
-DWITH_MKL -I/opt/intel18/compilers_and_libraries_2018.2.164/mac/mkl/include ../../Source/prec.f90
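
A convenient way to spot-check for the flag is to capture the build output to a log file and search it (tee and grep are standard utilities; grep's -e option lets the pattern begin with a dash):

$ ./make_fds.sh 2>&1 | tee build.log
$ grep -m 1 -e '-DWITH_MKL' build.log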

Running the Code

If you have multiple meshes, it is most efficient to use MPI to run the job in parallel. First, determine how many cores are available: click the Apple icon in the upper-left corner of your desktop, go to About This Mac, and then click System Report... (or type sysctl -n hw.ncpu in a terminal). My iMac has 4 cores, so that is what I will use in the example below. I am going to test the code on an example with 4 meshes. Cd to the fds/Verification/WRF/ directory; we will run the case wrf_time_ramp.fds:

$ mpirun -n 4 /Users/mnv/Documents/FIREMODELS_FORK/fds/Build/mpi_gnu_osx_64/fds_mpi_gnu_osx_64 wrf_time_ramp.fds 

 Starting FDS ...

 MPI Process      0 started on bevo.el.nist.gov
 MPI Process      1 started on bevo.el.nist.gov
 MPI Process      2 started on bevo.el.nist.gov
 MPI Process      3 started on bevo.el.nist.gov

 Reading FDS input file ...

 
 Fire Dynamics Simulator

 Current Date     : January 28, 2020  17:00:10
 Revision         : FDS6.7.3-396-g6d163be33-dirty-master
 Revision Date    : Tue Jan 21 15:46:34 2020 -0500
 Compiler         : Gnu gfortran GCC
 Compilation Date : Jan 28, 2020  16:13:43

 MPI Enabled;    Number of MPI Processes:       4
 OpenMP Disabled

 MPI version: 3.1
 MPI library version: Open MPI v4.0.2, package: Open MPI [email protected] Distribution, ident: 4.0.2, repo rev: v4.0.2, Oct 07, 2019

 Job TITLE        : Test of mean velocity ramps for one-way WRF coupling with FDS
 Job ID string    : wrf_time_ramp

 Time Step:      1, Simulation Time:      0.46 s
 Time Step:      2, Simulation Time:      0.93 s
 Time Step:      3, Simulation Time:      1.39 s
 Time Step:      4, Simulation Time:      1.86 s
 Time Step:      5, Simulation Time:      2.32 s
 Time Step:      6, Simulation Time:      2.78 s
 Time Step:      7, Simulation Time:      3.25 s
 Time Step:      8, Simulation Time:      3.71 s
 Time Step:      9, Simulation Time:      4.18 s
 Time Step:     10, Simulation Time:      4.64 s
 Time Step:     11, Simulation Time:      5.00 s

STOP: FDS completed successfully (CHID: wrf_time_ramp)