
Slurm on the Atos AD complex was updated to version 22.05. Since then, AD has been the default cluster, with hpc-login and hpc-batch being aliases for nodes on the AD complex.

The same Slurm version 22.05 has also been installed on the AA and AB complexes, and will be installed on the AC complex at a later stage.


One change in the new Slurm version impacts all batch jobs that set the number of OMP threads with the directive:

Code Block
#SBATCH --cpus-per-task=<number_of_threads>
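To illustrate the impact, here is a minimal sketch of an affected OpenMP batch job; the job name, thread count, and program name are illustrative assumptions, not taken from this page:

```shell
#!/bin/bash
#SBATCH --job-name=omp-example   # hypothetical job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # request 4 CPUs for OMP threads

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Under Slurm 22.05, srun no longer inherits SLURM_CPUS_PER_TASK,
# so without further changes this step may get fewer CPUs than requested:
srun ./my_openmp_program         # hypothetical binary
```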

Following the recommendation from the official release notes:

Slurm 22.05:

Srun will no longer read in SLURM_CPUS_PER_TASK. This means you will implicitly have to specify --cpus-per-task on your srun calls, or set the new SRUN_CPUS_PER_TASK env var to accomplish the same thing.

we strongly suggest that all Atos HPC users adjust their job scripts by exporting the SRUN_CPUS_PER_TASK environment variable manually:

Code Block
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
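The `${SLURM_CPUS_PER_TASK:-1}` expression uses standard shell default expansion: it yields the variable's value when set, and falls back to 1 when it is unset (e.g. when the job did not specify --cpus-per-task). A quick sketch of that behaviour, outside of any Slurm job:

```shell
# ${VAR:-default} expands to VAR's value if set, otherwise to the default.
unset SLURM_CPUS_PER_TASK
echo "${SLURM_CPUS_PER_TASK:-1}"   # prints 1 (variable unset)

export SLURM_CPUS_PER_TASK=4
echo "${SLURM_CPUS_PER_TASK:-1}"   # prints 4 (variable set)
```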

or, alternatively, by specifying the number of OMP threads as an option to the srun command:

Code Block
srun --cpus-per-task=$SLURM_CPUS_PER_TASK <command>
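In a job script, this alternative means forwarding the sbatch allocation to srun explicitly; a sketch, with the program name as a placeholder:

```shell
#!/bin/bash
#SBATCH --cpus-per-task=4        # illustrative thread count

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Explicitly pass the per-task CPU count to srun, as Slurm 22.05
# no longer propagates it automatically:
srun --cpus-per-task=$SLURM_CPUS_PER_TASK ./my_openmp_program
```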

To reduce user impact and make old job scripts compatible with the new Slurm version, the HPC support team has set the SRUN_CPUS_PER_TASK environment variable:

Code Block
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

in the user profile. However, the user profile is automatically loaded only in bash batch jobs (jobs whose first line is):

Code Block
#!/bin/bash

For ksh, sh, and any other job type, the user profile needs to be sourced manually in the script to benefit from the patch created by the HPC support team:

Code Block
. /etc/profile
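For instance, a ksh job script would source the profile near the top, before any srun calls; the rest of the script is an illustrative sketch:

```shell
#!/bin/ksh
#SBATCH --cpus-per-task=4        # illustrative thread count

# Source the system profile so the site-wide SRUN_CPUS_PER_TASK patch applies:
. /etc/profile

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_openmp_program         # hypothetical binary
```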

