...

Info

You can also use these options as command line arguments to sbatch.
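For example, a directive set in the script can be overridden at submission time; the script name and values below are illustrative:

Code Block
languagebash
# Command line options take precedence over the #SBATCH directives in the script
sbatch --job-name=test_run --time=10:00 myscript.sh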

General directives

--account=<account>
-A <account>

Project account for resource accounting and billing purposes.
Default: default project account for the user

--job-name=<name>
-J <name>

A descriptive name for the job.
Default: script name

--chdir=...

Working directory of the job. The output and error files can be defined relative to this directory.
Default: submitting directory

--output=<path>
-o <path>

Path to the file where standard output is redirected. Special placeholders are available for the job id (%j) and the execution node (%N).
Default: slurm-%j.out

--error=<path>
-e <path>

Path to the file where standard error is redirected. Special placeholders are available for the job id (%j) and the execution node (%N).
Default: same as the output file

--qos=<qos>
-q <qos>

Quality of Service (or queue) where the job is to be submitted. Check the available queues for the platform.
Default: normal

--time=<time>
-t <time>

Wall clock limit of the job. Note that this is not a CPU time limit.
The format can be: m, m:s, h:m:s, d-h, d-h:m or d-h:m:s.
Default: QoS default time limit

--mail-type=<type>

Notify the user by email when certain event types occur. Valid values are: BEGIN, END, FAIL, REQUEUE and ALL.
Default: disabled

--mail-user=<email>

Email address where the notifications are sent.
Default: submitting user


...

--export=<vars>

Export variables to the job, as comma-separated entries of the form VAR=VALUE.

ALL means export the entire environment from the submitting shell into the job.

NONE means starting with a fresh environment.

Default:

  • On ECGATE and Linux Clusters: ALL
  • On other platforms: NONE
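Putting several of the directives above together, a minimal job script could look like the following sketch (the job name and output file name are illustrative):

Code Block
languagebash
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --qos=normal
#SBATCH --time=00:10:00
#SBATCH --output=hello-%j.out

# %j in the output path is replaced by the job id
echo "Hello from $(hostname)"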

Directives for resource allocation - non-serial jobs

Note

These directives are not available on ECGATE or on the Linux Clusters outside a parallel queue.

...

--ntasks=<tasks>
-n <tasks>

Allocate resources for the specified number of parallel tasks. Note that a job requesting more than one task must be submitted to a parallel queue. There might not be any parallel queue configured on the cluster.
Default: 1

--nodes=<nodes>
-N <nodes>

Allocate <nodes> nodes to the job.
Default: 1

--cpus-per-task=<threads>
-c <threads>

Allocate <threads> cpus for every task. Use for threaded applications.
Default: 1

--ntasks-per-node=<tasks>

Allocate a maximum of <tasks> tasks on every node.
Default: node capacity

--threads-per-core=<threads>

Allocate <threads> threads on every core (HyperThreading).
Default: core thread capacity

--mem-per-cpu=<mem>

Allocate <mem> memory for each allocated cpu.
Default: site-dependent
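As a sketch, a job combining parallel tasks and threads might request resources as follows; the program name is a placeholder, and whether srun is the appropriate launcher depends on the platform:

Code Block
languagebash
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

# 8 parallel tasks, each with 4 cpus available for its threads
srun ./my_parallel_program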


Tip

See man sbatch or https://slurm.schedmd.com/sbatch.html for the complete list of directives and their options.

Job variables

Inside a job, you can benefit from a number of variables that SLURM defines automatically. Some examples are:

  • SLURM_JOBID
  • SLURM_NODELIST
  • SLURM_SUBMIT_DIR
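For example, a job script can use these variables directly (a minimal sketch):

Code Block
languagebash
#!/bin/bash
#SBATCH --job-name=show_env

# All of the variables below are set automatically by SLURM
echo "Job $SLURM_JOBID was submitted from $SLURM_SUBMIT_DIR"
echo "It is running on the following nodes: $SLURM_NODELIST"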

Job arrays

Job arrays offer a mechanism for submitting and managing collections of similar jobs quickly and easily. The array index values are specified using the --array or -a option of the sbatch command. The option argument can be specific array index values, a range of index values, or a range with an optional step size, as shown in the examples below. Each job in the array has the environment variable SLURM_ARRAY_TASK_ID set to its array index value.
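For instance, with an illustrative script name:

Code Block
languagebash
# A range of index values: 0, 1, ..., 31
sbatch --array=0-31 myscript.sh

# Specific index values: 1, 3, 5 and 7
sbatch --array=1,3,5,7 myscript.sh

# A range with a step size of 2: 0, 2, 4 and 6
sbatch --array=0-6:2 myscript.sh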

...

Tip

The --array option can also be used inside the job script as a job directive. For example:

Code Block
languagebash
#!/bin/bash
#SBATCH --job-name=my_job_array
#SBATCH --array=0-31

echo "Hello World! I am task $SLURM_ARRAY_TASK_ID of the job array"
sleep 30




Job arrays or other multiple concurrent jobs using IDL

If you are running a job array or other multiple concurrent jobs on lxc that call IDL, it is good practice to constrain them to a small number of nodes to limit the number of IDL licences requested. To do this, add the --constraint=idl option to the script's job directives:

Code Block
languagebash
#!/bin/bash
#SBATCH --job-name=my_idl_job_array
#SBATCH --array=0-31
#SBATCH --constraint=idl

echo "Hello World! I am task $SLURM_ARRAY_TASK_ID of the job array"
module load idl
idl << EOF
.run my_idl_program.pro
my_idl_program
EOF
sleep 30


...