Several MPI implementations are available on the Atos HPCF: Mellanox HPC-X (OpenMPI-based) and OpenMPI (provided by Atos).
They are not compatible with each other, so you should use only one of them to build your entire software stack. Support libraries are provided for the different flavours to guarantee maximum compatibility.
Recommended MPI implementation: hpcx-openmpi
Building your MPI programs
First, you need to decide which compiler family and MPI flavour you will use, and load them with modules.
$ module load prgenv/gnu hpcx-openmpi
$ module list

Currently Loaded Modules:
  1) gcc/11.4.1   2) prgenv/gnu   3) hpcx-openmpi/2.9.0
$ module load prgenv/nvidia hpcx-openmpi

The following have been reloaded with a version change:
  1) prgenv/gnu => prgenv/nvidia

$ module list

Currently Loaded Modules:
  1) nvidia/25.5   2) prgenv/nvidia   3) hpcx-openmpi/2.21.3
Then, you may use the usual MPI compiler wrappers to compile your programs, as shown in the example after the table:
| Language | OpenMPI (including HPC-X) |
|----------|---------------------------|
| C        | mpicc                     |
| C++      | mpicxx                    |
| Fortran  | mpifort                   |
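For example, assuming a simple MPI source file called hello_mpi.c (the file name and compiler options are placeholders, not site requirements), a build with the GNU environment loaded above could look like this:

$ module load prgenv/gnu hpcx-openmpi
$ mpicc -O2 -o hello_mpi hello_mpi.c

The wrappers add the necessary MPI include paths and libraries automatically, so no extra MPI-specific flags are normally needed.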
Running your MPI programs
You should run your MPI programs from a Slurm batch script, using srun to start the MPI execution. srun inherits its configuration from the job setup, so you do not normally need to pass extra options such as the number of tasks or threads; they are only required if you wish to run an MPI execution with a different (smaller) configuration within the same job, as shown in the example below.
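As a minimal sketch (the job name, task count, output file and executable name are placeholders to adapt to your case, not site-specific settings), a batch script could look like this:

#!/bin/bash
#SBATCH --job-name=mpi_hello      # placeholder job name
#SBATCH --ntasks=64               # total number of MPI tasks for the job
#SBATCH --output=mpi_hello.out    # placeholder output file

# Load the same environment used to build the program
module load prgenv/gnu hpcx-openmpi

# srun inherits the task count from the job definition above,
# so no extra options are needed here
srun ./hello_mpi

# Only when running a smaller execution within the same job do you
# need to pass an explicit geometry, e.g.:
# srun -n 16 ./hello_mpi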
Depending on the implementation, you may also use the corresponding mpiexec command, but its use is discouraged.