
Job Directives

Any shell script can be submitted as a Slurm job with no modifications. In such a case, sensible default values will be applied to the job. However, you can configure the script to fit your needs through job directives. In Slurm, these are just special comments in your script, usually at the top just after the shebang line, with the form:

#SBATCH --option=value

Note that these directives:

  • start with the #SBATCH prefix
  • are always lowercase
  • contain no spaces
  • don't expand shell variables (to the shell, they are just comments)
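Because directives are comments, a variable such as $USER inside one reaches Slurm as a literal string. A minimal sketch illustrating this (the path and script name are made up):

```shell
# Write a job script whose directive contains an unexpanded variable
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --output=/scratch/$USER/job.out
echo "Hello World!"
EOF

# Slurm would read the literal text "$USER" here, not your user name:
grep '^#SBATCH' job.sh
```

If you need a variable value in an option, pass it on the sbatch command line instead (e.g. sbatch --output=/scratch/$USER/job.out job.sh), where your shell expands it before sbatch runs.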

A Slurm job might look like the following:

#!/bin/bash
# The job name
#SBATCH --job-name=helloworld
# Set the error and output files
#SBATCH --output=hello-%J.out
#SBATCH --error=hello-%J.out
# Set the initial working directory
#SBATCH --workdir=/scratch/us/usxa
# Choose the queue
#SBATCH --qos=express
# Wall clock time limit
#SBATCH --time=00:05:00
# Send an email on failure
#SBATCH --mail-type=FAIL
# This is the job
echo "Hello World!"
sleep 30

The most common options you can use in a Slurm job are described below, together with their default values:

--job-name=...
    A descriptive name for the job. Default: the script name.

--output=...
    Path to the file where standard output is redirected. Special placeholders are available for the job id (%j) and the execution node (%N). Default: slurm-%j.out.

--error=...
    Path to the file where standard error is redirected. Special placeholders are available for the job id (%j) and the execution node (%N). Default: the --output value.

--workdir=...
    Working directory of the job. The output and error files can be defined relative to this directory. Default: the submitting directory.

--qos=...
    Quality of Service (or queue) where the job is to be submitted. Default: normal.

--time=...
    Wall clock time limit of the job. Note that this is not a CPU time limit. The format can be: m, m:s, h:m:s, d-h, d-h:m or d-h:m:s. Default: the QoS default time limit.

--mail-type=...
    Notify the user by email when certain event types occur. Valid values are: BEGIN, END, FAIL, REQUEUE and ALL. Default: disabled.

--mail-user=...
    Email address where the notifications are sent. Default: the submitting user.

--ntasks=...
    Allocate resources for the specified number of parallel tasks. Note that a job requesting more than one task must be submitted to a parallel queue; there might not be any parallel queue configured on the cluster. Default: 1.
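As an illustration of the accepted --time formats, the same 36-hour wall clock limit can be written in several equivalent ways; this sketch just prints the spellings:

```shell
# 36 hours expressed as minutes (m), h:m:s, d-h and d-h:m:s
for t in 2160 36:00:00 1-12 1-12:00:00; do
    echo "#SBATCH --time=$t"
done
```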

You can also use these options as command line arguments to sbatch.
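Options given on the command line take precedence over the corresponding directives in the script. For example, to submit the script above with a different QoS and time limit (the values here are only illustrative):

```shell
# Command-line options override the #SBATCH directives in the script
sbatch --qos=normal --time=00:10:00 hello.sh
```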

Job variables

Inside a job, you can use a number of environment variables that Slurm defines automatically. Some examples are:

  • SLURM_JOBID: the job id
  • SLURM_NODELIST: the list of nodes allocated to the job
  • SLURM_SUBMIT_DIR: the directory from which the job was submitted
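To test job-script logic interactively, outside Slurm, you can export made-up values yourself; inside a real job Slurm sets these variables for you, and the values below are invented:

```shell
# Simulate the variables Slurm would export (values are made up)
export SLURM_JOBID=123456
export SLURM_NODELIST=node01
export SLURM_SUBMIT_DIR=$PWD

echo "Job $SLURM_JOBID runs on $SLURM_NODELIST, submitted from $SLURM_SUBMIT_DIR"
```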

Job arrays

Note that job arrays are not available on the current installation on ecgate.

Job arrays offer a mechanism for submitting and managing collections of similar jobs quickly and easily. The array index values are specified using the --array or -a option of the sbatch command. The option argument can be specific array index values, a range of index values, and an optional step size, as shown in the examples below. Jobs which are part of a job array will have the environment variable SLURM_ARRAY_TASK_ID set to their array index value.

# Submit a job array with index values between 0 and 31
$ sbatch --array=0-31    -N1 tmp
# Submit a job array with index values of 1, 3, 5 and 7
$ sbatch --array=1,3,5,7 -N1 tmp
# Submit a job array with index values between 1 and 7
# with a step size of 2 (i.e. 1, 3, 5 and 7)
$ sbatch --array=1-7:2   -N1 tmp

The --array option can also be used inside the job script as a job directive. For example:

#!/bin/bash
#SBATCH --job-name=my_job_array
#SBATCH --array=0-31

echo "Hello World! I am task $SLURM_ARRAY_TASK_ID of the job array"
sleep 30
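A common use of the array index is to select a different input per task. A minimal sketch, assuming a hypothetical input_N.dat file naming scheme; the default value in the first line only stands in for Slurm when running the script by hand:

```shell
#!/bin/bash
# Inside a real array job, Slurm sets SLURM_ARRAY_TASK_ID; the default here
# is only for interactive testing (the file naming scheme is hypothetical)
TASK=${SLURM_ARRAY_TASK_ID:-0}
INPUT="input_${TASK}.dat"
echo "Task $TASK will process $INPUT"
```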
