The Atos HPCF consists of four virtually identical complexes: AA, AB, AC and AD. In total, this HPCF features 8128 nodes:

  • 7680 compute nodes, for parallel jobs
  • 448 GPIL (General Purpose and Interactive Login) nodes, which are devised to integrate the interactive and post-processing work currently done on older platforms such as the Cray HPCF, ECGATE and Linux Clusters.

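As a quick cross-check of these figures, the short Python sketch below breaks the totals down per complex. The per-complex numbers are derived here by assuming the nodes are spread evenly across the four complexes; they are an illustration, not quoted from the system specification.

    # Node counts quoted above for the Atos HPCF.
    COMPLEXES = 4
    COMPUTE_NODES = 7680   # compute nodes, for parallel jobs
    GPIL_NODES = 448       # General Purpose and Interactive Login nodes

    total = COMPUTE_NODES + GPIL_NODES
    print(f"Total nodes: {total}")                                     # 8128
    # The split below assumes nodes are distributed evenly over AA, AB, AC and AD.
    print(f"Nodes per complex: {total // COMPLEXES}")                  # 2032
    print(f"Compute nodes per complex: {COMPUTE_NODES // COMPLEXES}")  # 1920
    print(f"GPIL nodes per complex: {GPIL_NODES // COMPLEXES}")        # 112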

...

Info
Access for Cooperating States users and those from Member States with no formal HPCF privileges

There is an additional virtual complex, ECS, made up of compute nodes from the four complexes. It is the one to be used by users who do not have access to the full HPCF service.

The logical structure of one of the complexes is as follows:

[Figure: logical structure of one complex]

...

Main differences with the previous Cray XC40 system

The most notable change with respect to the previous Cray XC40 HPCF is in the processor architecture, moving from Intel to AMD. Although both implement the x86_64 instruction set, the latter provides many more cores per node.

|                     | Cray (CCA/CCB)   | Atos (AA/AB/AC/AD)                                             |
| CPU Architecture    | Intel Broadwell  | AMD EPYC Rome                                                  |
| Core Frequency      | 2.1 GHz          | 2.5 GHz (GPIL) / 2.25 GHz (compute)                            |
| Cores per node (HT) | 36 (72)          | 128 (256)                                                      |
| Memory per node     | 128 GiB          | 512 GiB (GPIL) / 256 GiB (compute)                             |
| Fabric interconnect | Cray Aries       | Mellanox HDR InfiniBand - 100 Gbps (GPIL) / 200 Gbps (compute) |
| Operating System    | Based on SLES 11 | Based on RHEL 8                                                |
| Batch system        | PBS with ALPS    | SLURM                                                          |
| Parallel Filesystem | Lustre           | Lustre                                                         |
| Compilers           | Cray, GNU, Intel | GNU, Intel, AOCC                                               |
| MPI                 | Cray MPICH       | Intel MPI, OpenMPI                                             |
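
To illustrate the practical effect of the jump in cores per node, the Python sketch below estimates how many whole nodes a job of a given MPI rank count would occupy on each generation, using the physical cores-per-node figures from the table and assuming one rank per physical core with hyperthreads ignored. The 1152-rank job size is a made-up example, not a recommendation.

    import math

    # Physical cores per node, taken from the comparison table above.
    CORES_PER_NODE = {
        "Cray XC40 (CCA/CCB)": 36,
        "Atos (AA/AB/AC/AD)": 128,
    }

    def nodes_needed(mpi_ranks: int, cores_per_node: int) -> int:
        """Minimum number of whole nodes for one MPI rank per physical core."""
        return math.ceil(mpi_ranks / cores_per_node)

    ranks = 1152  # hypothetical job size used only for illustration
    for system, cores in CORES_PER_NODE.items():
        print(f"{system}: {nodes_needed(ranks, cores)} nodes for {ranks} MPI ranks")
    # Cray XC40 (CCA/CCB): 32 nodes for 1152 MPI ranks
    # Atos (AA/AB/AC/AD): 9 nodes for 1152 MPI ranks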