System specific Builds
The UCVM software is used on a variety of computer platforms, from web browsers to laptops to supercomputers. For overviews of the registered velocity models, the SCEC Community Model website may be easiest to use. For use on laptops, the UCVM Docker images may be easiest. Large-scale users who want to query millions of data points, or build large velocity meshes, will want to build UCVM on a Linux server or a supercomputer. UCVM includes MPI utilities for building large velocity meshes in parallel.
On systems that include MPI libraries, the UCVM installation script will detect the MPI libraries and build the UCVM MPI utilities. If these MPI executables are built, they are installed in the UCVM bin directory. As an example, here are some of the executables built on a system (Discovery.usc.edu) with MPI libraries loaded, showing the executables ending in _mpi:
```
ssh_generate ssh_merge vs30_query vs30_query_mpi basin_query basin_query_mpi basin_query_mpi_complete
```
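After an installation completes, one quick way to confirm that the MPI utilities were built is to list them in the bin directory (the install path below is a placeholder for your own installation):

```
# List the MPI-enabled utilities in the UCVM bin directory.
# Replace the path with your own UCVM install location.
ls /path/to/ucvm_install/bin/*_mpi
```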
The following sections describe details about building UCVM on specific Linux systems and supercomputers.
A step-by-step recipe for building UCVM is described in the create_ucvm_docker Dockerfile. The Dockerfile used for the current release builds UCVM on a rockylinux:8.5 base image, then installs libraries that include gcc, gcc-gfortran, fftw-devel, and python38.
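As a rough sketch, the base-image setup amounts to package installs like the following (package names are assumed from the description above; the create_ucvm_docker Dockerfile is the authoritative list):

```
# Sketch of the Dockerfile package setup on a rockylinux:8.5 base.
# Package names are assumptions; see create_ucvm_docker for the exact list.
dnf -y install gcc gcc-c++ gcc-gfortran make fftw-devel python38 git
```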
The UCVM Docker build process has an added complexity because it includes the UCVM plotting libraries. The core UCVM install script uses python3, but the UCVM plotting libraries still use python2. The UCVM Dockerfile therefore installs both versions of python, and it is configured so that python2 is the default version when the Docker image is run, since python3 is only required by the UCVM installation script while python2 is required to run the plotting routines.
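To confirm which interpreter is the default inside a running image, a check like the following can be used (the sceccode/ucvm image name is an assumption; substitute whatever tag you pulled):

```
# Check the default python inside the UCVM Docker image.
# The image name is an assumption; use the tag you pulled.
docker run --rm sceccode/ucvm python --version     # expect Python 2.x
docker run --rm sceccode/ucvm python3 --version    # expect Python 3.x
```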
The UCVM MPI libraries are not built in the UCVM Docker images.
UCVM is tested on the USC Discovery system. Discovery is a Linux cluster that uses a module load system to manage the libraries available to application programs built and run on the system.
A set of module load commands used to build UCVM on Discovery is given below. The LD_PRELOAD export is a system-specific command required due to how Discovery libraries are managed; it may not be required on other systems.
```
module purge
module load gcc/8.3.0
module load openmpi/4.0.2
module load pmix/3.1.3
export LD_PRELOAD=/spack/apps/gcc/8.3.0/lib64/libstdc++.so.6
# source /project/scec_608/maechlin/ucvm227/conf/ucvm_env.sh
```
UCVM also requires the following command, apparently related to the Anaconda python distribution. Anaconda can set the PROJ_LIB environment variable to point the proj library at its own data directory, which can conflict with the proj version that UCVM installs:
```
unset PROJ_LIB
```
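Once UCVM is built with this environment, a single-point query is a quick sanity check that the installation works (the install path and model name below are placeholders, not values from this wiki):

```
# Query material properties at one point (lon lat depth_m) on stdin.
# The install path and model name (cvmsi) are placeholders for your setup.
source /path/to/ucvm_install/conf/ucvm_env.sh
echo "-118.0 34.0 0.0" | ucvm_query -f /path/to/ucvm_install/conf/ucvm.conf -m cvmsi
```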
SDSC Expanse is a Linux cluster that uses a module load system. The available libraries are somewhat different from those available on Discovery. To build UCVM on Expanse, the following module load commands are configured in the user's .bashrc. This setup will build the MPI executables on Expanse.
```
module purge
module load cpu/0.15.4
module load gcc/10.2.0
module load openmpi/4.0.4
unset PROJ_LIB
unset CC
```
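After loading these modules, it may be worth confirming that the MPI compiler wrappers resolve to the loaded gcc and openmpi modules before running the UCVM install script:

```
# Confirm the MPI compiler wrappers come from the loaded modules.
which mpicc mpif90
mpicc --version
```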
The .bashrc should also source the ucvm_env.sh shell script to set up the library and executable paths required by UCVM:
```
source /expanse/lustre/projects/ddp408/ux454496/ucvm_22_7/conf/ucvm_env.sh
```
The MPI acceptance tests require an environment variable set like this on Expanse:
```
export UCVM_SALLOC_ENV="--partition=debug --account=ddp408 --mem=16G"
```
Then run the tests this way:
```
salloc --partition=debug --account=ddp408 --nodes=1 --ntasks-per-node=4 \
    --mem=8G -t 00:10:00 \
    srun -Q -o ${TEST}.srun.out ${BIN_DIR}/basin_query_mpi \
    -b ${TEST}.simple -f ${CONF_DIR}/ucvm.conf -m cencal,cvmsi \
    -i 20 -v 2500 -l 35.0,-122.5 -s 0.1 -x 16 -y 11
```
UCVM is used to build the large velocity meshes used by the CyberShake workflow system. CyberShake calls UCVM through its C language API.
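For reference, a large parallel mesh build is typically run with the ucvm2mesh_mpi utility and a mesh configuration file; a minimal sketch of such a run is shown below (the config file, task count, and paths are hypothetical placeholders, not values from this wiki):

```
# Sketch of a parallel mesh build with ucvm2mesh_mpi under Slurm.
# mesh_example.conf is a hypothetical mesh config file (grid dimensions,
# projection, model list, output files); see the ucvm2mesh documentation.
srun -n 64 ${BIN_DIR}/ucvm2mesh_mpi -f mesh_example.conf
```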