
Slurm on the Atos AD complex has been updated to version 22.05. AD has been the default cluster, with hpc-login and hpc-batch being aliases for nodes on the AD complex.

The same Slurm 22.05 version has also been installed on the AA and AB complexes, and will be installed on the AC complex.


One change in the new Slurm version impacts all batch jobs that set the number of OMP threads with the directive:

Code Block
languagebash
themeConfluence
#SBATCH --cpus-per-task=<number-of-threads>
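For context, a minimal sketch of a bash batch job using this directive could look as follows (the job name and the echoed application step are illustrative; mapping the allocation onto OMP_NUM_THREADS follows the usual OpenMP convention):

```shell
#!/bin/bash
#SBATCH --job-name=omp-example      # hypothetical job name
#SBATCH --cpus-per-task=4           # CPUs (OMP threads) per task

# Map the Slurm allocation onto OpenMP; fall back to 1 thread
# when the variable is not set (e.g. when run outside a Slurm job).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```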

Following the recommendation from the official release notes:

Panel
borderColorblack
borderStylesolid
titleSlurm 22.05 release note:

Srun will no longer read in SLURM_CPUS_PER_TASK. This means you will implicitly have to specify --cpus-per-task on your srun calls, or set the new SRUN_CPUS_PER_TASK env var to accomplish the same thing.

Users should adjust their job scripts by exporting the SRUN_CPUS_PER_TASK environment variable manually:

Code Block
languagebash
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
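The `:-1` part of the expansion provides a safe default: when `--cpus-per-task` was not requested and `SLURM_CPUS_PER_TASK` is therefore unset, `SRUN_CPUS_PER_TASK` falls back to 1. A quick illustration:

```shell
# Simulate a job where Slurm set the variable:
SLURM_CPUS_PER_TASK=8
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
echo "$SRUN_CPUS_PER_TASK"   # prints 8

# Simulate a job where --cpus-per-task was not given:
unset SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
echo "$SRUN_CPUS_PER_TASK"   # prints 1
```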

Alternatively, we strongly suggest that all Atos HPC users adjust their job scripts by specifying the number of OMP threads as an option to the "srun" command:

Code Block
languagebash
srun --cpus-per-task=${SLURM_CPUS_PER_TASK}

To reduce the impact on users and keep existing job scripts compatible with the new Slurm version, the HPC support team has set the SRUN_CPUS_PER_TASK environment variable:

Code Block
languagebash
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

in the user profile. However, the user profile is automatically loaded only in batch jobs (jobs whose first line is):

Code Block
languagebash
#!/bin/bash

In ksh, sh, and any other job type, one needs to source the user profile manually to benefit from the patch created by the HPC support team:

Code Block
languagebash
. /etc/profile
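For example, a ksh job script would then start as in the following sketch (the job name and requested CPU count are illustrative):

```shell
#!/bin/ksh
#SBATCH --job-name=ksh-example     # hypothetical job name
#SBATCH --cpus-per-task=4

# The user profile is not loaded automatically for non-bash jobs,
# so source it by hand to pick up SRUN_CPUS_PER_TASK.
. /etc/profile

echo "SRUN_CPUS_PER_TASK=${SRUN_CPUS_PER_TASK:-unset}"
```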

