

On November XX the Atos AD complex was updated to Slurm 22.05, and the same version of Slurm was installed on the AC complex on  .

AD has been the default cluster (hpc-login and hpc-batch are nodes on AD) since  .


The change to the new version impacts all Slurm jobs that set the number of OMP threads using the directive:

#SBATCH --cpus-per-task


According to the official Slurm 22.05 release note:

Srun will no longer read in SLURM_CPUS_PER_TASK. This means you will implicitly have to specify --cpus-per-task on your srun calls, or set the new SRUN_CPUS_PER_TASK env var to accomplish the same thing.

Consequently, users should adjust their job scripts by exporting the SRUN_CPUS_PER_TASK environment variable manually:

export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

or by specifying the number of OMP threads as an option on the srun command:

srun --cpus-per-task
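As a sketch of the first approach (the job name and program name below are hypothetical), an adjusted hybrid batch job could look like:

```shell
#!/bin/bash
#SBATCH --job-name=hybrid-example
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=4

# In Slurm 22.05 srun no longer reads SLURM_CPUS_PER_TASK itself,
# so propagate the allocation explicitly:
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

srun ./my_hybrid_program

# Equivalent alternative, passing the value on the srun line instead:
# srun --cpus-per-task=$SLURM_CPUS_PER_TASK ./my_hybrid_program
```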


To reduce the impact on users, the HPC support team has added:

export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

to the user profile. Note, however, that the user profile is loaded automatically only in batch jobs whose script starts with the line:

#!/bin/bash


In ksh, sh, and any other job type, one needs to source the user profile manually to benefit from the patch created by the HPC support team:

. /etc/profile
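For example, a ksh batch job could be adapted as follows (the program name is a placeholder):

```shell
#!/bin/ksh
#SBATCH --cpus-per-task=4

# The non-bash shebang means the user profile is not loaded automatically,
# so source it here to pick up the SRUN_CPUS_PER_TASK patch:
. /etc/profile

srun ./my_openmp_program
```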



Note: the expansion ${SLURM_CPUS_PER_TASK:-1} evaluates to the value of SLURM_CPUS_PER_TASK if it is set, and to 1 otherwise, so the patch is safe in jobs that do not set --cpus-per-task.
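The fallback behaviour of the ${VAR:-default} expansion used in the profile patch can be checked in any POSIX shell:

```shell
# With the variable unset, the expansion falls back to the default:
unset SLURM_CPUS_PER_TASK
echo "unset -> ${SLURM_CPUS_PER_TASK:-1}"    # prints: unset -> 1

# With the variable set, the real value is used:
SLURM_CPUS_PER_TASK=8
echo "set=8 -> ${SLURM_CPUS_PER_TASK:-1}"    # prints: set=8 -> 8
```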

