Setting up a simulation
FVCOM
To compile the main code and its libraries:
- Open a terminal.
- Set up an MPI environment. On Fedora/Red Hat/CentOS, yum install mpich provides the MPICH environment, which can be loaded with module load mpi/mpich-x86_64.
- Change to the directory into which you have downloaded FVCOM and untar the code.
- Enter the FVCOM_source directory.
- Edit the make.inc file to enable/disable different functionality (e.g. wetting/drying) to suit your requirements.
- Change the TOPDIR variable to the path you are currently in (i.e. the output of the pwd Linux command).
- Enter the libs subdirectory.
- Type make and wait for the compilation to complete.
- Change back to the parent FVCOM_source directory.
- Type make and wait for the compilation to complete.
- Copy the fvcom binary to your model input directory and change into that directory.
- Launch the model with mpirun:
mpirun -n $num_proc fvcom --casename test --dbg=0 --logfile=fvcom.log
where $num_proc is the number of processors.
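The whole procedure can be condensed into a short command sequence. This is a minimal sketch: the paths are placeholders, the MPICH module name matches the Fedora example above, and make.inc still needs to be edited by hand before building.
module load mpi/mpich-x86_64              # MPICH environment (Fedora/Red Hat/CentOS)
cd /path/to/FVCOM_source                  # placeholder: your FVCOM_source directory
# edit make.inc here: set TOPDIR to this directory and enable the FLAGs you need
cd libs
make                                      # build the bundled libraries
cd ..
make                                      # build FVCOM itself
cp fvcom /path/to/model/input             # placeholder: your model input directory
cd /path/to/model/input
mpirun -n $num_proc fvcom --casename test --dbg=0 --logfile=fvcom.log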
The make.inc file contains a series of variables which may be of interest, in particular the compiler options (see lines 463-469 of make.inc). For gfortran, the following settings in make.inc work on Fedora:
#--------------------------------------------------------------------------
# MPIF90/GFORTRAN Compiler Definitions (PML)
#--------------------------------------------------------------------------
CPP = cpp
COMPILER = -DGFORTRAN
FC = mpif90
DEBFLGS =
OPT = -O3 -L/usr/lib64/mpich-x86_64/lib -I/usr/include/mpich-x86_64 -I/usr/lib64/gfortran/modules/
CLIB =
CC = mpicc
For our current HPC setup with Intel Fortran installed, we use:
#--------------------------------------------------------------------------
# Intel/MPI Compiler Definitions (PML)
#--------------------------------------------------------------------------
CPP = mpiicc -E
COMPILER = -DINTEL
FC = mpiifort
DEBFLGS = -profile-functions -profile-loops=All
OPT = -O3 -L/gpfs1/apps/intel/compilers_and_libraries/linux/mpi/intel64/lib -I/gpfs1/apps/intel/compilers_and_libraries/linux/mpi/intel64/include/ -xHost #-init=zero -init=arrays -ftrapuv
CLIB =
CC = mpiicc
CFLAGS =
Our debug options are:
#--------------------------------------------------------------------------
# Intel/MPI Compiler Definitions (PML) Debugging
#--------------------------------------------------------------------------
COMPILER = -DINTEL
CPP = mpiicc -E
CPPFLAGS = $(DEF_FLAGS) -P -traditional -DINTEL CPPMACH=-DNOGUI -I/gpfs1/apps/intel/compilers_and_libraries/linux/mpi/intel64/include/
FC = mpiifort
DEBFLGS = -g -traceback -warn -nofor_main -fp-model precise -traceback -fpe0 -keep
OPT = -O0 -I/gpfs1/apps/intel/compilers_and_libraries/linux/mpi/intel64/include/
OILIB = -L/gpfs1/apps/intel/compilers_and_libraries/linux/mkl/lib/em64t -Wl,-rpath=/gpfs1/apps/intel/compilers_and_libraries/linux/mkl/lib/em64t -i-static -L/gpfs1/apps/intel/compilers_and_libraries/linux/mpi/intel64/lib -lmpi -libverbs
#--------------------------------------------------------------------------
If a serial compilation is required, the options using the Intel Fortran compiler are:
#--------------------------------------------------------------------------
# Intel Compiler Definitions (PML) Serial debugging
#--------------------------------------------------------------------------
COMPILER = -DINTEL
CPP = icc
CPPFLAGS = $(DEF_FLAGS)
FC = ifort
DEBFLGS = -g -traceback -warn -nofor_main -fp-model precise -traceback -fpe0 -keep
OPT = -O0
OILIB = -L/gpfs1/apps/intel/compilers_and_libraries/linux/mkl/lib/em64t -Wl,-rpath=/gpfs1/apps/intel/compilers_and_libraries/linux/mkl/lib/em64t
#--------------------------------------------------------------------------
Or a simpler version:
#--------------------------------------------------------------------------
COMPILER = -DINTEL
CPP = cpp
CPPFLAGS = $(DEF_FLAGS)
FC = ifort
DEBFLGS = -g -traceback -warn
OPT = -O0
OILIB =
#--------------------------------------------------------------------------
For some reason, the mpiifort installation on our HPC does not report line numbers when a segmentation fault occurs. The only way we have managed to get useful information is with a serial compilation. One can also use the GNU debugger gdb:
gdb --args ./bin/fvcom --casename=tapas_v0
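Inside gdb, run the model and request a backtrace once it stops at the segmentation fault; this is most informative when FVCOM was compiled with the -g debug flags shown above.
(gdb) run              # run until the segmentation fault is hit
(gdb) backtrace        # print the call stack at the point of failure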
Non-hydrostatic and semi-implicit FVCOM
Dependencies
For non-hydrostatic, semi-implicit, data assimilation, Kalman filter and wave-current interaction runs, FVCOM requires PETSc and HYPRE. FVCOM is written against version 2.3.3 of the PETSc library; PETSc version 3.x will not work with FVCOM. In turn, PETSc must be built with HYPRE support, and we have had success using HYPRE version 2.0.0.
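As a rough sketch of building PETSc together with HYPRE (the path is a placeholder and the HYPRE option is an assumption; check ./config/configure.py --help in your PETSc 2.3.3 source tree for the exact option names):
cd $HOME/Code/petsc-2.3.3                  # placeholder: PETSc 2.3.3 source tree
export PETSC_DIR=$(pwd)
./config/configure.py --download-hypre=1   # assumed option name; verify with ./config/configure.py --help
make all
export PETSC_ARCH=linux-gnu-intel          # use the arch name configure reports for your system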
Compilation
In your make.inc, add a new section after the TOPDIR declaration, which includes two new variables:
#--------------------------------------------------------------------------
# PETSC library locations (for non-hydrostatic/semi-implicit/data assimilation)
#--------------------------------------------------------------------------
PETSC_LIB = -L$(PETSC_DIR)/lib/linux-gnu-intel/
PETSC_FC_INCLUDES = -I$(PETSC_DIR) -I$(PETSC_DIR)/bmake/$(PETSC_ARCH) -I$(PETSC_DIR)/include
Ensure that the PETSC_DIR and PETSC_ARCH environment variables are set: PETSC_DIR is the root of your PETSc installation (if you compiled PETSc yourself, it is the path you specified with --prefix when running configure.py). If PETSC_ARCH is undefined, you should be able to identify valid values by looking in $PETSC_DIR/lib/; valid values are the names of the directories there.
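For example (the PETSc path is a placeholder for your own installation):
export PETSC_DIR=$HOME/Code/petsc-2.3.3    # root of the PETSc installation
ls $PETSC_DIR/lib/                         # the directory names listed here are valid PETSC_ARCH values
export PETSC_ARCH=linux-gnu-intel          # e.g. the arch used in the PETSC_LIB example above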
Then, for non-hydrostatic runs, edit your make.inc to uncomment the FLAG_30 = -DNH line, the include $(PETSC_DIR)/bmake/common/variables line below it, and the FLAG_9 = -DSEMI_IMPLICIT line. For the other options (data assimilation, Kalman filters and wave-current interaction), uncomment the relevant FLAGs.
Finally, append the PETSC_LIB and PETSC_FC_INCLUDES variables to the LIBS and INCLUDES definitions at the end of make.inc:
LIBS = $(LIBDIR) $(CLIB) $(PARLIB) $(IOLIBS) $(DTLIBS)\
$(MPILIB) $(GOTMLIB) $(KFLIB) $(BIOLIB) \
$(OILIB) $(VISITLIB) $(PROJLIBS) $(PETSC_LIB)
INCS = $(INCDIR) $(IOINCS) $(GOTMINCS) $(BIOINCS)\
$(VISITINCPATH) $(PROJINCS) $(DTINCS) \
$(PETSC_FC_INCLUDES)
To compile FVCOM, type make as usual.
FVCOM-GOTM
Installation guidance for GOTM can be found at http://gotm.net/index.php?go=software&page=installation.
These instructions assume you are using the Intel Fortran compiler (ifort).
- Download and extract the GOTM archive, following the instructions at the website above.
- Open a terminal window, change directory into the GOTM source code, and type the following:
export NETCDFINC=/YOUR/FVCOM/LIBS/DIR/libs/install/include
export NETCDFLIBNAME=/YOUR/FVCOM/LIBS/DIR/libs/install/lib/libnetcdf.a
export GOTMDIR=$(pwd)/gotm-4.0.0
export FORTRAN_COMPILER=IFORT
cd src
make
Replace /YOUR/FVCOM/LIBS/DIR with the value of $TOPDIR in the main FVCOM make.inc.
- This should generate output similar to that described on the website above, and an executable file called gotm_prod_IFORT (see the quick check after this list).
- Test your GOTM build by downloading and running a test case from the GOTM website.
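Once the build has finished, a quick check that the libraries and module files FVCOM will link against are in place (paths follow the gotm-4.0.0 layout used above):
ls $GOTMDIR/lib/IFORT/                     # should contain the turbulence_prod, util_prod and meanflow_prod libraries
ls $GOTMDIR/modules/IFORT/                 # Fortran module files needed when compiling FVCOM with -DGOTM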
To link GOTM with FVCOM, you will need to make the following changes to your FVCOM make.inc:
FLAG_11 = -DGOTM
GOTMLIB = -L/YOUR/GOTM/DIR/gotm-4.0.0/lib/IFORT/ -lturbulence_prod -lutil_prod -lmeanflow_prod
GOTMINCS = -I/YOUR/GOTM/DIR/gotm-4.0.0/modules/IFORT/
As before, replace /YOUR/GOTM/DIR/ with the directory which contains the GOTM build.
Set the following option in your .nml file:
BOTTOM_ROUGHNESS_TYPE = 'gotm'
And finally, include your GOTM inputs in a file casename_gotmturb.inp, in the same directory as your other FVCOM inputs.
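For example, with the casename test used in the mpirun example above, the run directory would contain:
ls /path/to/model/input/test_gotmturb.inp  # GOTM turbulence settings for casename 'test'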
FVCOM-FABM
FVCOM-FABM enables access to a range of biogeochemical models that are currently included within the FABM repository.
To build FVCOM-FABM, you first compile FABM and then link FVCOM against it. To obtain the FVCOM-FABM code, download the FABM-ERSEM branch from the UK-FVCOM group GitHub repository.
FABM
- Download FABM from https://github.com/fabm-model/fabm.
- Extract the FABM source code (the example code below extracts the code to $HOME/Code/fabm/src).
- If you want to use FVCOM-FABM with ERSEM, download the stable code and extract the source code (the example code below extracts the code to $HOME/Code/ersem).
- Make sure you have CMake 2.8.8 or higher installed.
- Compile FABM:
cd $HOME/Code/fabm/src
mkdir build
cd build
cmake $HOME/Code/fabm/src -DFABM_HOST=fvcom -DFABM_ERSEM_BASE=$HOME/Code/ersem -DCMAKE_Fortran_COMPILER=$(which mpif90)
make install
Omit the -DFABM_ERSEM_BASE=... switch if you are not using ERSEM.
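For example, a FABM-only configuration (no ERSEM) would be:
cmake $HOME/Code/fabm/src -DFABM_HOST=fvcom -DCMAKE_Fortran_COMPILER=$(which mpif90)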
To enable a debugging build, change the cmake command to the following:
cmake $HOME/Code/fabm/src -DFABM_HOST=fvcom -DFABM_ERSEM_BASE=$HOME/Code/ersem -DCMAKE_Fortran_COMPILER=$(which mpif90) -DCMAKE_BUILD_TYPE=debug -DCMAKE_Fortran_FLAGS_DEBUG="-g -traceback -check all"
In the above, the value of -DCMAKE_Fortran_FLAGS_DEBUG is specific to the Intel Fortran compiler; if you use another compiler, replace it with flags appropriate for debugging with that compiler.
By default, FABM is built with double precision. This is appropriate if you are using the flag -DDOUBLE_PRECISION when you compile FVCOM. If you intend to use single precision instead, you need to add -DFABM_REAL_KIND='SELECTED_REAL_KIND(6)' in the call to cmake.
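For example, a single-precision build (matching an FVCOM compiled without -DDOUBLE_PRECISION) would be configured with:
cmake $HOME/Code/fabm/src -DFABM_HOST=fvcom -DFABM_ERSEM_BASE=$HOME/Code/ersem -DCMAKE_Fortran_COMPILER=$(which mpif90) -DFABM_REAL_KIND='SELECTED_REAL_KIND(6)'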
FVCOM with FABM support
To compile FVCOM with FABM support, edit the make.inc FLAG_25 as follows and compile FVCOM as normal. If you have installed FABM to a custom location, adjust the BIOLIB and BIOINCS paths as necessary.
# Online configuration
FLAG_25 = -DFABM
BIOLIB = -L$(HOME)/local/fabm/fvcom/lib -lfabm
BIOINCS = -I$(HOME)/local/fabm/fvcom/include
To enable offline FVCOM-ERSEM runs, change FLAG_25 to -DFABM -DOFFLINE_FABM.
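That is, the offline configuration differs from the online one above only in FLAG_25 (keep the BIOLIB and BIOINCS paths for your own FABM installation):
# Offline configuration
FLAG_25 = -DFABM -DOFFLINE_FABM
BIOLIB = -L$(HOME)/local/fabm/fvcom/lib -lfabm
BIOINCS = -I$(HOME)/local/fabm/fvcom/include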