Running A Basic Slurm Script
Running jobs on the HPC cluster requires submitting them to the job scheduler, either as a batch job (through a submission script) or interactively. The scheduler finds the resources you requested and executes your job on those resources when they become available.

Commonly required options include:
- Partition ('--partition' or '-p')
- CPUs, tasks, or nodes: '--cpus-per-task' or '-c' for multiple threads/cores per task (pthreads/OpenMP); '--ntasks' or '-n' for multiple message-passing tasks (MPI); '--nodes' or '-N' for multiple nodes (MPI)
- Time ('--time' or '-t')
- Memory ('--mem-per-cpu' for memory per CPU core; '--mem' for memory per node)

Useful options:
- Job name ('--job-name' or '-J') for identification
- Output and error ('--output' or '-o', and '--error' or '-e') to redirect the script's standard output and error ('stdout' and 'stderr')
- Generic resources ('--gres') used for GPUs, licenses, and interconnects
Here's an example of what your bash script should look like:

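The script below is a minimal sketch that strings the options above together; the partition name ('general'), resource amounts, and program name ('my_program') are placeholders, so substitute the values that apply to your job and to the partitions available on the cluster.

```bash
#!/bin/bash
#SBATCH --job-name=example_job        # Job name for identification
#SBATCH --partition=general           # Partition name (placeholder; check available partitions with 'sinfo')
#SBATCH --nodes=1                     # Number of nodes
#SBATCH --ntasks=1                    # Number of MPI tasks
#SBATCH --cpus-per-task=4             # Threads/cores per task (pthreads/OpenMP)
#SBATCH --mem-per-cpu=2G              # Memory per CPU core
#SBATCH --time=01:00:00               # Wall-clock time limit (HH:MM:SS)
#SBATCH --output=example_%j.out       # Standard output file (%j expands to the job ID)
#SBATCH --error=example_%j.err        # Standard error file

# Print some basic job information
echo "Running on host: $(hostname)"
echo "Job ID: $SLURM_JOB_ID"

# Launch the application on the allocated resources
# ('my_program' is a placeholder for your own executable)
srun ./my_program
```

After saving the script (for example as 'myscript.sh'), submit it with 'sbatch myscript.sh', check its status with 'squeue -u $USER', and cancel it if needed with 'scancel <jobid>'. For interactive work, the same resource options can typically be passed to 'srun' or 'salloc' to request a shell on a compute node instead of submitting a script.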