
Parallel jobs run on the compute partition and use the np QoS for regular users.

This queue is not the default, so make sure you explicitly define it in your job directives before submission.

Parallel jobs are allocated exclusive nodes, so they will not share resources with other jobs.

Efficient use of resources

Make sure the job is configured to fully utilise all the computing resources. For small parallel executions you may want to consider using fractional jobs instead.

Affinity

See HPC2020: Affinity for more information on how to set up CPU binding properly for your parallel runs.
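As a minimal sketch of what such a setup might look like inside a job script (the specific OMP_* values here are illustrative assumptions, not site recommendations):

```shell
# Illustrative sketch only: common affinity-related settings in a job script.
# SLURM_CPUS_PER_TASK is set by Slurm when --cpus-per-task is given;
# falling back to 1 avoids oversubscribing cores when it is not.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
export OMP_PLACES=cores        # one OpenMP place per physical core
export OMP_PROC_BIND=close     # keep threads close to their parent task
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

The right binding options depend on the application and node layout, so check the affinity page above before relying on any particular combination.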

MPI application

To spawn an MPI application you must use srun.

MPI job example
#!/bin/bash
#SBATCH --job-name=test-mpi
#SBATCH --qos=np
#SBATCH --ntasks=512
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=100
#SBATCH --output=test-mpi.%j.out
#SBATCH --error=test-mpi.%j.out
#SBATCH --chdir=/scratch...

srun my_mpi_app

The example above would run a 512-task MPI application.
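Assuming the script above is saved as test-mpi.sh (the file name is illustrative), it could be submitted and monitored like this:

```shell
# Hypothetical file name; adapt to your own script.
sbatch test-mpi.sh    # submit; prints the assigned job ID
squeue -u $USER       # check the job's state in the queue
```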

Hybrid MPI + OpenMP

Hybrid MPI + OpenMP applications must also be spawned with srun.

This example runs a hybrid application spawning 128 MPI tasks, each of which opens 4 OpenMP threads (512 cores in total).

Hybrid MPI + OpenMP job example
#!/bin/bash
#SBATCH --job-name=test-hybrid
#SBATCH --qos=np
#SBATCH --ntasks=128
#SBATCH --cpus-per-task=4
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=100
#SBATCH --output=test-hybrid.%j.out
#SBATCH --error=test-hybrid.%j.out
#SBATCH --chdir=/scratch...

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
srun my_mpi_openmp_app

See man sbatch or https://slurm.schedmd.com/sbatch.html for the complete set of options that may be used to configure a job.
