Slurm on the Atos AD complex was updated to version 22.05. Since then, AD has been the default cluster, with hpc-login and hpc-batch being aliases for nodes on the AD complex. The same Slurm version, 22.05, was subsequently installed on the AC complex.
One change in the new Slurm version affects all batch jobs that set the number of OpenMP threads with the directive:
#SBATCH --cpus-per-task
According to the official release notes:
Srun will no longer read in SLURM_CPUS_PER_TASK. This means you will implicitly have to specify --cpus-per-task on your srun calls, or set the new SRUN_CPUS_PER_TASK env var to accomplish the same thing.
Users should adjust their job scripts by exporting the SRUN_CPUS_PER_TASK environment variable manually:
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
or alternatively by specifying the number of CPUs per task as an option to the srun command:
srun --cpus-per-task=${SLURM_CPUS_PER_TASK:-1}
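Both variants rely on the shell's ${VAR:-default} expansion, which substitutes the fallback value 1 whenever SLURM_CPUS_PER_TASK is unset or empty (e.g. when the job did not request --cpus-per-task). A quick illustration:

```shell
# ${VAR:-default} substitutes the default when VAR is unset or empty
unset SLURM_CPUS_PER_TASK
echo "${SLURM_CPUS_PER_TASK:-1}"   # prints 1: variable is unset

SLURM_CPUS_PER_TASK=8
echo "${SLURM_CPUS_PER_TASK:-1}"   # prints 8: variable is set
```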
To reduce user impact and make job scripts compatible with the new Slurm version, the HPC support team has set the SRUN_CPUS_PER_TASK environment variable:
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
in the user profile. However, the user profile is automatically loaded only in bash batch jobs (jobs whose first line is):
#!/bin/bash
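A minimal sketch of such a bash job script, assuming a hypothetical OpenMP executable ./my_omp_prog (the explicit export shown here is only needed where the site-wide profile patch is not in effect):

```shell
#!/bin/bash
#SBATCH --cpus-per-task=4
# Re-export so srun picks up the CPU count again (Slurm >= 22.05
# no longer reads SLURM_CPUS_PER_TASK itself)
export SRUN_CPUS_PER_TASK=${SLURM_CPUS_PER_TASK:-1}
# Common practice: match the OpenMP thread count to the allocation
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
srun ./my_omp_prog
```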
In ksh, sh, and any other job type, one needs to source the user profile manually to benefit from the patch created by the HPC support team:
. /etc/profile
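For example, a ksh job script (hypothetical executable name) could source the profile before its first srun call:

```shell
#!/bin/ksh
#SBATCH --cpus-per-task=4
# Source the system profile manually so the site-provided
# SRUN_CPUS_PER_TASK export is picked up in non-bash jobs
. /etc/profile
srun ./my_omp_prog
```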