Converting Torque Job Scripts to SLURM
This page is intended for users who already know SLURM basics and want to convert their existing Torque job scripts. It focuses only on the differences you need to care about, and on using the `pbs2slurm` tool to automate much of the work.
Why Conversion is Needed
Torque job scripts use `#PBS` directives, Torque-specific environment variables, and resource options like `ppn`. SLURM uses `#SBATCH` directives, different environment variables, and separates concepts like tasks and CPUs per task. Torque scripts will not run directly under SLURM.
Header Conversion
The biggest change is the job headers: Torque uses `#PBS` directives, while SLURM uses `#SBATCH`.
| Purpose | Torque Example | SLURM Equivalent (default) |
| --- | --- | --- |
| Job name | `#PBS -N myjob` | `#SBATCH --job-name=myjob` |
| Nodes/CPUs | `#PBS -l nodes=2:ppn=16` | `#SBATCH --nodes=2`<br>`#SBATCH --ntasks=1`<br>`#SBATCH --cpus-per-task=16` |
| Walltime | `#PBS -l walltime=02:00:00` | `#SBATCH --time=02:00:00` |
| Memory | `#PBS -l mem=8gb` | `#SBATCH --mem=8G` |
| Output file | `#PBS -o job.out` | `#SBATCH --output=job.out` |
| Error file | `#PBS -e job.err` | `#SBATCH --error=job.err` |
| Billing account | `#PBS -A research` | `#SBATCH --account=research` |
Note
- The table above shows the default translation for threaded (OpenMP/pthreads) jobs: one process with many CPUs.
- If your job is MPI-based, you should instead request multiple tasks per node:
  `#SBATCH --nodes=2`, `#SBATCH --ntasks-per-node=16`, `#SBATCH --cpus-per-task=1`
- `pbs2slurm` defaults to the threaded style, and will add a comment when `ppn` is defined, reminding you to check if your workload is MPI.
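For the simple one-to-one headers in the table above, the mechanical part of the rewrite can be sketched with `sed` (illustration only: it deliberately skips `nodes`/`ppn` and `mem`, which need the threaded-vs-MPI judgment described in the note, and real scripts should go through `pbs2slurm`):

```shell
# Sketch: rewrite the simple one-to-one Torque headers as SLURM directives.
# Reads a script on stdin; nodes/ppn and mem are deliberately left untouched.
sed -e 's|^#PBS -N \(.*\)|#SBATCH --job-name=\1|' \
    -e 's|^#PBS -l walltime=\(.*\)|#SBATCH --time=\1|' \
    -e 's|^#PBS -o \(.*\)|#SBATCH --output=\1|' \
    -e 's|^#PBS -e \(.*\)|#SBATCH --error=\1|' \
    -e 's|^#PBS -A \(.*\)|#SBATCH --account=\1|' <<'EOF'
#PBS -N myjob
#PBS -l walltime=02:00:00
#PBS -A research
EOF
```

which prints the three corresponding `#SBATCH` lines.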
Environment Variable Mapping
Update these references in your scripts:
| Purpose | Torque Variable | SLURM Variable |
| --- | --- | --- |
| Submit directory | `$PBS_O_WORKDIR` | `$SLURM_SUBMIT_DIR` |
| Job ID | `$PBS_JOBID` | `$SLURM_JOB_ID` |
| Job name | `$PBS_JOBNAME` | `$SLURM_JOB_NAME` |
| Node file | `$PBS_NODEFILE` | `$SLURM_NODELIST` |
| Process count | `$PBS_NP` | `$SLURM_NTASKS` |
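In the script body, these substitutions are usually line for line. The sketch below uses shell default values so it also behaves sensibly when run outside a job; the message format is illustrative:

```shell
#!/bin/bash
# Torque:  cd $PBS_O_WORKDIR
cd "${SLURM_SUBMIT_DIR:-$PWD}"   # fall back to the current directory outside a job

# Torque:  echo "$PBS_JOBID $PBS_JOBNAME $PBS_NP"
echo "Job ${SLURM_JOB_ID:-<none>} (${SLURM_JOB_NAME:-interactive}) with ${SLURM_NTASKS:-1} task(s)"
```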
Interactive Jobs
- Torque: `qsub -I -l nodes=1:ppn=4,walltime=01:00:00`
- SLURM: `salloc -A research --nodes=1 --ntasks=4 --time=01:00:00`
Tip
In SLURM, once you have an allocation with `salloc`, launch programs inside it with `srun`.
Billing Accounts
Billing accounts are a new requirement under SLURM: every job must specify one.
- SLURM: `#SBATCH --account=research` in the script, or `sbatch -A research job.sh` on the command line
You can set defaults so you don’t have to type `-A` every time:

    export SBATCH_ACCOUNT=research
    export SRUN_ACCOUNT=research
    export SALLOC_ACCOUNT=research
Using the `pbs2slurm` Converter
We provide a utility, `pbs2slurm`, that automatically translates most Torque headers into SLURM.
Usage
    pbs2slurm -h
    usage: pbs2slurm [-h] [-q] [-s SHELL] [-d] [-V] infile

    positional arguments:
      infile                PBS/MOAB input file to convert.

    options:
      -h, --help            show this help message and exit
      -q, --quiet           Disable conversion instructions in output.
      -s SHELL, --shell SHELL
                            Shell to use for script if it doesn't already have one.
      -d, --debug           Display some debug messages
      -V                    show program's version number and exit
Example
    pbs2slurm myjob.pbs > myjob.sh

This produces a new SLURM script (`myjob.sh`) with `#SBATCH` directives and updated environment variables. If `pbs2slurm` encounters an option it cannot convert, it inserts a comment in the output for manual review.
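To convert a whole directory of scripts in one go, the tool can be wrapped in a shell loop (a sketch; it only reports what it would do when `pbs2slurm` is not on the PATH):

```shell
# Convert every .pbs script in the current directory to a .sh SLURM script.
for f in *.pbs; do
  [ -e "$f" ] || continue                 # no .pbs files at all
  out="${f%.pbs}.sh"
  if command -v pbs2slurm >/dev/null 2>&1; then
    pbs2slurm "$f" > "$out" && echo "converted: $f -> $out"
  else
    echo "pbs2slurm not on PATH; would convert $f -> $out"
  fi
done
```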
Warning
`pbs2slurm` is a helper, not a guarantee. Always read the converted script before running it. Complex Torque directives (GPU requests, node properties, job arrays) may require manual adjustment.
Common Gotchas When Converting
- ppn: Torque used `ppn` (“processors per node”). Replace it with `--ntasks` and/or `--cpus-per-task`.
- Output/error handling: Torque wrote separate stdout and stderr files by default (joined with `-j oe`); SLURM writes both streams to a single file unless you request a separate one with `--error`.
- Node list: Torque wrote explicit hostnames to `$PBS_NODEFILE`. SLURM provides `$SLURM_NODELIST` in compressed range format (e.g. `node[01-04]`).
- tracejob: Torque’s `tracejob` has no SLURM equivalent. Use `sacct` for completed jobs.
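Where a Torque script iterated over `$PBS_NODEFILE`, the usual SLURM replacement is `scontrol show hostnames "$SLURM_NODELIST"`. As an illustration of the range format, a single simple range can also be expanded in plain bash (a sketch handling only the `prefix[NN-MM]` form, not the full SLURM nodelist syntax):

```shell
# Expand a simple nodelist like "node[01-03]" into one hostname per line,
# mimicking what $PBS_NODEFILE used to contain (sketch; single range only).
expand_nodelist() {
  local list=$1
  if [[ $list =~ ^([^[]+)\[([0-9]+)-([0-9]+)\]$ ]]; then
    local prefix=${BASH_REMATCH[1]} lo=${BASH_REMATCH[2]} hi=${BASH_REMATCH[3]}
    local i
    for ((i = 10#$lo; i <= 10#$hi; i++)); do
      printf '%s%0*d\n' "$prefix" "${#lo}" "$i"   # keep the zero padding
    done
  else
    printf '%s\n' "$list"   # single node, no range
  fi
}

expand_nodelist "node[01-03]"   # prints node01, node02, node03 on separate lines
```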
Next Steps
- Run your existing Torque scripts through `pbs2slurm`.
- Review and fix any TODOs or flagged lines.
- Submit a small test job under SLURM.
- Compare results with your old Torque workflow.
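For the small test job in step 3, a minimal script might look like the following (the account name `research` mirrors the examples above and is site-specific; adjust it, save as `test.sh`, and submit with `sbatch test.sh`):

```shell
#!/bin/bash
#SBATCH --job-name=slurm-test
#SBATCH --account=research        # site-specific: use your billing account
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:05:00
#SBATCH --output=slurm-test.out

cd "${SLURM_SUBMIT_DIR:-$PWD}"
echo "Hello from $(hostname), job ${SLURM_JOB_ID:-<not under SLURM>}"
```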
For more details, see the SLURM usage guide.