On November XX, Slurm on the Atos AD complex was updated to version 22.05. AD has been the default cluster (with hpc-login and hpc-batch being aliases for nodes on AD) since 28 Nov.

The same Slurm version 22.05 has also been installed on the AA and AB complexes and will be installed on the AC complex on  .


One change in the new Slurm version impacts all Slurm batch jobs that set the number of OMP threads with the directive:

Code Block
languagebash
#SBATCH --cpus-per-task=<n>

Following the recommendation from the official release notes:

Panel
borderColorblack
borderStylesolid
titleSlurm 22.05 release note:

Srun will no longer read in SLURM_CPUS_PER_TASK. This means you will implicitly have to specify --cpus-per-task on your srun calls, or set the new SRUN_CPUS_PER_TASK env var to accomplish the same thing.

Consequently, users should adjust their job scripts by exporting the SRUN_CPUS_PER_TASK environment variable manually:

Code Block
languagebash
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

or, as we strongly suggest to all Atos HPC users, by specifying the number of OMP threads as an option to the "srun" command:

Code Block
languagebash
srun --cpus-per-task=$SLURM_CPUS_PER_TASK
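Putting the directive and the "srun" option together, a complete job script might look like the following minimal sketch (the job name, the thread count of 4, and the program name ./my_openmp_program are illustrative placeholders, not site defaults):

```shell
#!/bin/bash
# Minimal OpenMP batch job sketch for Slurm 22.05 (values are illustrative)
#SBATCH --job-name=omp-example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4       # number of OMP threads per task

# Use the value Slurm derived from --cpus-per-task for OpenMP
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Pass the value to srun explicitly, since srun no longer reads
# SLURM_CPUS_PER_TASK itself as of Slurm 22.05
srun --cpus-per-task=$SLURM_CPUS_PER_TASK ./my_openmp_program
```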

To reduce user impact, the HPC support team has added:

Code Block
languagebash
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

to the user profile. However, the user profile is loaded automatically only in batch jobs whose first line is:

Code Block
languagebash
#!/bin/bash

In ksh, sh, and any other job type, one needs to source the user profile manually to benefit from the patch created by the HPC support team:

Code Block
languagebash
. /etc/profile
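The ${SLURM_CPUS_PER_TASK:-1} expansion used in the patch falls back to 1 when SLURM_CPUS_PER_TASK is unset (for example, when the job did not request --cpus-per-task), so serial jobs are unaffected. A quick standalone illustration of this shell behaviour:

```shell
# ${VAR:-default} yields the value of VAR if set and non-empty,
# otherwise the given default
unset SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
echo "$SRUN_CPUS_PER_TASK"    # prints 1

export SLURM_CPUS_PER_TASK=8
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
echo "$SRUN_CPUS_PER_TASK"    # prints 8
```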
