# Torque to Slurm
## Important differences between Slurm and Torque

- What Torque calls "queues" (`-q`), Slurm calls "partitions" (`-p`); the option `-q` to `srun` does something else.
- `sbatch` does not source shell configuration files like `~/.bashrc` or `~/.profile`, but `srun` does.
- By default, the commands in an `sbatch` script see an environment inherited from the submitting shell. If you want a clean environment, use the option `#SBATCH --export=NONE` in your job script. If you just want a clean set of modules, remember to call `module purge` in your script.
- With Slurm, the working directory when the job starts is the directory from which you ran `sbatch` (not your home directory, as with Torque); use `--chdir` to change it.
- If Slurm log file (`--output`) paths are not qualified with a directory name, the logs will end up in the current working directory.
- Slurm can understand Torque scripts (with `#PBS` options). It is probably better not to rely on this, especially if your `#PBS` options are complicated.
## Other guides

- https://arc-ts.umich.edu/migrating-from-torque-to-slurm/
- PDF file with a one-to-one translation of commands: https://slurm.schedmd.com/rosetta.pdf
- https://hpcc.usc.edu/support/documentation/pbs-to-slurm/
## Old instructions for Torque

These are kept for reference only.
### Submitting jobs

All computation jobs must be submitted through the `qsub` command. Direct `ssh` to compute nodes is prohibited.
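For example, a batch script (here hypothetically named `job.sh`) is submitted and monitored as follows:

```bash
# Submit a prepared job script to the scheduler
qsub job.sh

# List your own queued and running jobs
qstat -u $USER
```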
### Queues

- `-q cpu`: for using the CPU nodes
- `-q mem`: for using the memory nodes
- `-q gpu`: for using the GPU nodes
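The queue is chosen with `-q` at submission time, for example (again with a placeholder script name):

```bash
# Send the job to the GPU queue instead of the default
qsub -q gpu job.sh
```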
### Submit an interactive job

```bash
qsub -I -X -N job_name -q cpu -l nodes=1:ppn=8,pmem=24gb,walltime=1:00:00
```
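For comparison, a roughly equivalent interactive session under Slurm could be requested as sketched below; the `cpu` partition name is an assumption, and `--x11` only works if X11 forwarding is enabled in the Slurm installation:

```bash
# 1 node, 8 tasks, 24 GB per task, 1 hour, interactive shell with X forwarding
srun -J job_name -p cpu -N 1 -n 8 --mem-per-cpu=24G -t 01:00:00 --x11 --pty bash
```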
### Example job script

```bash
#!/bin/bash -x
#PBS -N s60_2d_v
#PBS -l nodes=1:ppn=12
#PBS -l mem=24gb
#PBS -l walltime=12:00:00
#PBS -k oe
#PBS -j oe

# Count the MPI ranks from the node file provided by Torque
n_proc=$(cat $PBS_NODEFILE | wc -l)

module load pgi/18.10
module load openmpi/3.1.4
module load hdf5-parallel/1.8.21

# Torque starts the job in the home directory, so move to the submission directory
cd $PBS_O_WORKDIR

mpirun -np $n_proc ./flash4
```
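For reference, a rough Slurm translation of the script above might look like the following; the partition name is a placeholder and the resource mapping should be checked against the current cluster documentation:

```bash
#!/bin/bash -x
#SBATCH --job-name=s60_2d_v
#SBATCH --partition=cpu            # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
#SBATCH --mem=24G
#SBATCH --time=12:00:00
#SBATCH --output=s60_2d_v.%j.log   # stdout and stderr are combined by default

module purge
module load pgi/18.10
module load openmpi/3.1.4
module load hdf5-parallel/1.8.21

# Slurm starts the job in the submission directory, so no 'cd $PBS_O_WORKDIR' is needed;
# $SLURM_NTASKS replaces counting lines in $PBS_NODEFILE
mpirun -np $SLURM_NTASKS ./flash4
```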