SLURM_Commands - Karthikeyan-Lab-Caltech/Wiki GitHub Wiki
SLURM templates
Template Structure
TBD
Slurm Batch Basics
sbatch job.sh
- Submit a batch job
squeue -j $JOB_ID
- Monitor the status of a queued or running job
sacct -j $JOB_ID
- Check the history of a past job (memory/runtime/etc.)
scancel $JOB_ID
- Cancel a submitted job
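The `job.sh` passed to `sbatch` is a shell script whose `#SBATCH` comment lines request resources. A minimal sketch (job name, times, memory, and output pattern here are illustrative placeholders, not lab defaults):

```shell
#!/bin/bash
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --time=00:30:00           # walltime limit (hh:mm:ss)
#SBATCH --ntasks=1                # number of tasks
#SBATCH --mem=4G                  # memory per node
#SBATCH --output=slurm-%j.out     # %j expands to the job ID

# Commands to run on the compute node go below the directives.
echo "Running on $(hostname)"
```

Submit it with `sbatch job.sh`; SLURM reads the `#SBATCH` lines (which bash treats as comments) and queues the script.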
Slurm Interactive Job Basics
srun --pty -t hh:mm:ss -n tasks -N nodes --mem=memory /bin/bash -l
- Request resources for an interactive session
exit
- End the interactive session
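For example, to request a two-hour interactive shell with one task on one node and 8 GB of memory (values are illustrative, adjust to your cluster's limits):

```shell
srun --pty -t 02:00:00 -n 1 -N 1 --mem=8G /bin/bash -l
```

When the allocation is granted, you are dropped into a login shell on the compute node; typing `exit` releases the allocation.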
Slurm Array Jobs
Slurm array jobs are a quick and relatively easy way to parallelize a repetitive Slurm job. The job will create multiple independent tasks, each using the same resources as specified.
The first change is the addition of a line to define how many independent jobs are desired:
#SBATCH --array=1-20
This creates 20 independent tasks, numbered 1 through 20.
The second change is adding a line to list all the input files:
fasta_files=(/path/*.fasta)
This example lists all .fasta files in the specified folder.
The third change is using the SLURM_ARRAY_TASK_ID variable to select one file per job:
fasta_file=${fasta_files[$((SLURM_ARRAY_TASK_ID - 1))]}
Since SLURM_ARRAY_TASK_ID starts at 1 while bash arrays are 0-indexed, subtracting 1 maps each task to a unique file. Then, by using ${fasta_file}
in your command, each array task processes its own file independently.
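Putting the three changes together, a complete array job script might look like the sketch below. The job name, resource values, log directory, input path, and the `my_tool` command are all placeholders; substitute your own analysis command.

```shell
#!/bin/bash
#SBATCH --job-name=fasta_array    # hypothetical job name
#SBATCH --array=1-20              # 20 independent tasks
#SBATCH --time=01:00:00
#SBATCH --ntasks=1
#SBATCH --mem=4G
#SBATCH --output=logs/%A_%a.out   # %A = array job ID, %a = task ID

# List all input files (path is a placeholder).
fasta_files=(/path/*.fasta)

# Task IDs start at 1, bash arrays at 0, so subtract 1.
fasta_file=${fasta_files[$((SLURM_ARRAY_TASK_ID - 1))]}

# Replace my_tool with your actual command.
my_tool --input "${fasta_file}"
```

Note that the number of array tasks (1-20 here) should match the number of files the glob finds, otherwise some tasks will receive an empty `${fasta_file}`.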