

Slurm on the Atos AD complex was updated to version 22.05. Since then, AD has been the default cluster, with hpc-login and hpc-batch being aliases for nodes on the AD complex.

The same version of Slurm (22.05) will be installed on the AC complex on  . The other two complexes (AA and AB) will be updated at a later stage.


One change in the new Slurm version affects all batch jobs that set the number of OpenMP threads with the directive:

#SBATCH --cpus-per-task

According to the official release note:

Slurm 22.05:

Srun will no longer read in SLURM_CPUS_PER_TASK. This means you will implicitly have to specify --cpus-per-task on your srun calls, or set the new SRUN_CPUS_PER_TASK env var to accomplish the same thing.

Atos HPC users need to adjust their job scripts by exporting the SRUN_CPUS_PER_TASK environment variable manually:

export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

or alternatively by specifying the number of threads as an option on the srun command line:

srun --cpus-per-task=$SLURM_CPUS_PER_TASK
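Putting the pieces together, an adjusted hybrid job script under Slurm 22.05 might look like the sketch below. The program name ./my_hybrid_app and the resource numbers are placeholders for illustration, not part of this announcement:

```shell
#!/bin/bash
#SBATCH --job-name=omp-test
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=4

# Under Slurm 22.05, srun no longer inherits SLURM_CPUS_PER_TASK,
# so pass the value on explicitly (falling back to 1 if it is unset).
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

# Equivalent alternative: pass the option directly on the srun line:
#   srun --cpus-per-task=$SLURM_CPUS_PER_TASK ./my_hybrid_app
srun ./my_hybrid_app
```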


To reduce user impact and make old job scripts compatible with the new Slurm, the HPC support team has set the SRUN_CPUS_PER_TASK environment variable:

export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}

in the user profile. However, the user profile is loaded automatically only in bash batch jobs (jobs whose first line is):

#!/bin/bash

In ksh, sh, and any other job type, the user profile needs to be sourced manually inside the script to benefit from the patch created by the HPC support team:

. /etc/profile
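For example, a ksh job could begin as in the sketch below; the job body (./my_program) is a placeholder:

```shell
#!/bin/ksh
#SBATCH --cpus-per-task=8

# Source the system profile manually so the HPC support team's
# SRUN_CPUS_PER_TASK setting takes effect in this non-bash job.
. /etc/profile

srun ./my_program
```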



Note: the shell expansion ${SLURM_CPUS_PER_TASK:-1} evaluates to the value of SLURM_CPUS_PER_TASK if that variable is set, and to 1 otherwise, so jobs that do not request --cpus-per-task default to one CPU per task.
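As a quick illustration of the ${VAR:-default} expansion used in the export above, the following runs in any POSIX shell outside of Slurm:

```shell
# ${VAR:-default} yields the value of VAR when it is set and non-empty,
# and "default" otherwise.
unset SLURM_CPUS_PER_TASK
echo "${SLURM_CPUS_PER_TASK:-1}"   # variable unset -> prints 1

SLURM_CPUS_PER_TASK=4
echo "${SLURM_CPUS_PER_TASK:-1}"   # variable set   -> prints 4
```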

