Vlasiator on Turso (UH local cluster) - fmihpc/vlasiator GitHub Wiki
The Vlasiator-team-only login host, turso-carrington.helsinki.fi, is only available inside the university network. To access it from the outside, you need to go via a jump host. To make this easier, you can add a block like this to your ~/.ssh/config (create the file if it does not exist yet):
Host turso
ProxyJump login.physics.helsinki.fi
Hostname turso-carrington.helsinki.fi
User <universityUsername>
Remember to use ssh-copy-id <user>@<host> to copy your ssh key to a target machine, enabling passwordless logins. ProxyJump is the easiest way to pass through one or more jump hosts, but for a fully passwordless login the ssh key needs to be copied to each proxy host as well.
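For example, a minimal sequence (assuming your default key pair already exists) is to copy the key first to the jump host and then to turso via the alias defined above:
# ssh-copy-id respects ~/.ssh/config, so the turso alias and its ProxyJump are used automatically
ssh-copy-id <universityUsername>@login.physics.helsinki.fi
ssh-copy-id turso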
If you get messages about the module command not being supported, add the following to your ~/.bashrc:
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
As of May 2025, the following applies.
Building should be done in an interactive session. Different partitions in the cluster require different modules and Vlasiator architectures; some examples are given below. Feel free to experiment with the interactive session launcher commands and to adjust the core count, memory, partition, and wall time to suit your needs. The commands below are expected to be run from a Vlasiator git repository, with submodules initialized, downloaded, and updated as required.
Note: The most recent modules to use on each system are listed at the top of the corresponding Makefile within the MAKE subdirectory.
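For example, to see which architectures are available and to check the module list recorded at the top of a Makefile (assuming the usual MAKE/Makefile.<arch> naming that matches VLASIATOR_ARCH):
ls MAKE/Makefile.*
head -n 20 MAKE/Makefile.carrington_gcc_openmpi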
srun -M carrington -c 32 --mem=32G --time=0:30:00 --interactive --pty bash
export VLASIATOR_ARCH=carrington_gcc_openmpi
module purge
module load GCC/13.2.0; module load OpenMPI/4.1.6-GCC-13.2.0; module load PMIx/4.2.6-GCCcore-13.2.0; module load PAPI/7.1.0-GCCcore-13.2.0
make -j 32
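After the build, a quick sanity check (assuming the binary is produced as ./vlasiator in the repository root, as referenced in the job script below) is to print its version information:
./vlasiator --version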
srun -M ukko -p gpu-oversub --mem-per-cpu=2G -t 0:30:00 --interactive --nodes=1 -n 1 -c 10 --pty bash
# pick one of the following, depending on which GPU nodes you target:
export VLASIATOR_ARCH=ukko_dgx
export VLASIATOR_ARCH=ukko_a100
module purge
module load OpenMPI/4.1.6.withucx-GCC-13.2.0 PAPI/7.1.0-GCCcore-13.2.0 CUDA/12.6.0
make -j 10
To request a GPU from Ukko, you can use various srun arguments; run e.g.
/usr/bin/srun --interactive -n1 -c8 --mem=4G -t00:15:00 -Mukko --constraint=v100 -pgpu --pty bash
This requests 1 node with 8 CPU cores, 4 GB of memory, and 1 V100 GPU from the GPU partition on Ukko, for 15 minutes. You can alternatively request A100 and P100 GPUs on this partition. Detailed partition information is available here. Note that the gpu-oversub partition has shared access to a single A100 card, and that A100 jobs on the regular gpu partition can have long queue times and poor thread placement. Thus, use of HILE-G or Ukko-dgx is preferred.
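For reference, a corresponding A100 request only swaps the constraint in the command above (assuming the constraint name is a100, analogous to v100):
/usr/bin/srun --interactive -n1 -c8 --mem=4G -t00:15:00 -Mukko --constraint=a100 -pgpu --pty bash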
HILE is a LUMI-lite system with its own login nodes at hile01.it.helsinki.fi. HILE uses "flavors" in its partition system, which requires the -C flag for SBATCH. The provided HILE functionality uses the library building scripts instead of relying on a central repository. Use of #SBATCH --distribution=block:block may improve performance.
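As a sketch, a HILE batch script header could combine the flavor constraint with the distribution setting like this (flavor letters as in the interactive examples below):
#!/bin/bash
#SBATCH -C c                         # node flavor: c for CPU nodes, g for GPU nodes
#SBATCH --distribution=block:block   # may improve performance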
./fetch_libraries.sh hile_cpu
srun -C c -c 16 --mem=20G --pty bash
module load papi
module load cray-pmi
module load libfabric/1.22.0
export VLASIATOR_ARCH=hile_cpu
./build_fetched_libraries.sh hile_cpu
make -j 16
./fetch_libraries.sh hile_gpu
srun -C g -c 16 --mem=20G --pty bash
module load papi
module load rocm/6.2.0
module load cray-pmi
module load craype-accel-amd-gfx90a
module load libfabric/1.22.0
export MPICH_GPU_SUPPORT_ENABLED=1
export VLASIATOR_ARCH=hile_gpu
./build_fetched_libraries.sh hile_gpu
make -j 16
To debug with gdb4hpc on HILE, load the required modules and launch the debugger:
module load papi
module load cray-pmi
module load libfabric/1.22.0
module load gdb4hpc
gdb4hpc
launch $vla{16} --launcher-args="--exclusive --nodes=1 -c 8 -n 16 -C c --mpi=pmi2" ./vlasiator -a --run_config=Example.cfg
IMPORTANT: Place your executables/binaries in $PROJ (/proj/username/) or a subdirectory of it. Some applications SEGFAULT if the binary is striped over multiple OSTs. Alternatively, you can create a stripe=1 directory:
lfs setstripe -c 1 dir_for_binary
However, $PROJ is a better place for the binary.
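You can check the striping of a directory with the standard Lustre tool lfs getstripe:
lfs getstripe dir_for_binary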
Run your jobs in the work area: for Carrington and Ukko, that is /wrk-vakka/users/username; for HILE, it is /wrk-kappa/users/username. These are subject to change with the migration of UH CS services to Viikki.
Here is an example job script for Carrington. For other machines, see the scripts in the testpackage directory.
#!/bin/bash
#SBATCH --time=0-00:10:00 # Run time (d-hh:mm:ss)
#SBATCH --job-name=Vlas_jobname
#SBATCH -M carrington
#SBATCH -p short
#SBATCH --exclusive
#SBATCH --nodes=1
#SBATCH -c 4 # CPU cores per task
#SBATCH --ntasks-per-node=16
#SBATCH --hint=multithread
#SBATCH --mem-per-cpu=5G # memory per core
#Carrington has 2 x 16 cores per node, plus hyperthreading
ht=2
t=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$t
module purge
module load GCC/13.2.0
module load OpenMPI/4.1.6-GCC-13.2.0
module load PMIx/4.2.6-GCCcore-13.2.0
module load PAPI/7.1.0-GCCcore-13.2.0
export UCX_NET_DEVICES=eth0 # This is important for multi-node performance!
executable="/proj/username/vlasiator"
configfile="./Magnetosphere_small.cfg"
umask 007
cd $SLURM_SUBMIT_DIR
wait
srun --mpi=pmix_v3 -n 1 -N 1 $executable --version
srun --mpi=pmix_v3 -n $SLURM_NTASKS -N $SLURM_NNODES $executable --run_config=$configfile
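Assuming the script above is saved as job_carrington.sh (the name is arbitrary), it is submitted and monitored with the usual Slurm commands:
sbatch job_carrington.sh
squeue -M carrington -u $USER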
It is also possible to debug by logging into the nodes where Vlasiator is running. Some of these commands may prove helpful:
srun -M ukko --overlap --pty --jobid=$SLURM_JOBID bash
srun --jobid=$SLURM_JOBID --nodelist=node_name -N1 --pty /bin/bash
You can pass -w to select multiple nodes, as in
srun --overlap --pty --jobid=$SLURM_JOBID -w $( squeue --jobs $SLURM_JOBID -o "%N" | tail -n 1 ) bash
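Once you have a shell on a compute node, one option (assuming gdb is available there) is to attach a debugger to one of your running ranks, for example:
pid=$(pgrep -u $USER -n vlasiator)   # newest vlasiator process owned by you
gdb -p $pid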
Zoltan and Boost need to be built for GCC. For other required libraries, follow the standard Vlasiator install instructions. This building has already been done, and users should be able to directly use the libraries found in /proj/group/spacephysics. The instructions below are provided in case the libraries need to be rebuilt for some reason. Also see the library building scripts in the Vlasiator repo and the regular instructions for Installing Vlasiator.
First, download and unpack the source code:
wget http://cs.sandia.gov/Zoltan/Zoltan_Distributions/zoltan_distrib_v3.83.tar.gz
wget http://freefr.dl.sourceforge.net/project/boost/boost/1.69.0/boost_1_69_0.tar.bz2
tar xvf zoltan_distrib_v3.83.tar.gz 2> /dev/null
tar xvf boost_1_69_0.tar.bz2
Make and install Zoltan:
mkdir zoltan-build
cd zoltan-build
../Zoltan_v3.83/configure --prefix=/proj/group/spacephysics/libraries/gcc/8.3.0/zoltan/ --enable-mpi --with-mpi-compilers --with-gnumake --with-id-type=ullong
make -j 8
make install
Clean up
cd ..
rm -rf zoltan-build Zoltan_v3.83
Make and install Boost:
cd boost_1_69_0
./bootstrap.sh
echo "using gcc : 8.3.0 : mpicxx ;" >> ./tools/build/src/user-config.jam
echo "using mpi : mpicxx ;" >> ./tools/build/src/user-config.jam
./b2 -j 8
./b2 --prefix=/proj/group/spacephysics/libraries/gcc/8.3.0/boost/ install
Clean up
cd ..
rm -rf boost_1_69_0
For basic compiling and running of CUDA applications you need to load a CUDA module, for instance with the following commands:
module purge
module load GCC/10.2.0
module load CUDA/11.1.1-GCC-10.2.0
This loads GCC 10.2.0 and the CUDA 11.1.1 build compiled for GCC 10.2.0.
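After loading the modules, you can verify the toolchain versions, for example:
gcc --version
nvcc --version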
Two versions of HIP have been built for Ukko, v4.5 and v5.2. These can be used by running
module use /proj/group/spacephysics/modules/hip
module load hip-[4.5|5.2]
This module will load all prerequisite modules needed by HIP and set the correct environment variables for compilation. To compile, run
hipcc ${HIP_IFLAGS} <rest of the compiler options and file names>
The HIP_IFLAGS environment variable contains the -I flag and the path to the HIP header files.
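As a minimal sketch (saxpy.hip is a hypothetical source file), a single-file compilation could look like:
hipcc ${HIP_IFLAGS} -O2 -o saxpy saxpy.hip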