
If you find any problem or missing feature that you think should be present and is not listed here, please let us know by reporting it as a "Problem on computing" through the ECMWF Support Portal, mentioning "Atos HPCF" in the summary.

The Atos HPCF is not an operational platform yet, and many features or elements may be added gradually as the complete setup is finalised. Here is a list of the known limitations, missing features and issues.

Missing Features

Comprehensive software stack

We have provided a basic software stack that should satisfy most users, but some software packages or libraries you require may not be present. If that is the case, let us know by reporting it as a "Problem on computing" through the ECMWF Support Portal, mentioning "Atos HPCF" in the summary.

End of job information

A basic report is provided at the end of the job with information about its execution.

## INFO ---------------------------------------------------------------------------------------------
## INFO  This is the ECMWF job Epilogue. Please report problems to ServiceDesk, servicedesk@ecmwf.int
## INFO ---------------------------------------------------------------------------------------------
## INFO
## INFO Run at 2021-09-28T06:21:25 on aa
## INFO Job Name                  : eci
## INFO Job ID                    : 1009559
## INFO Submitted                 : 2021-09-28T06:05:23
## INFO Dispatched                : 2021-09-28T06:05:23
## INFO Completed                 : 2021-09-28T06:21:25
## INFO Waiting in the queue      : 0.0
## INFO Runtime                   : 962
## INFO Exit Code                 : 0:0
## INFO State                     : COMPLETED
## INFO Account                   : myaccount
## INFO Queue                     : nf
## INFO Owner                     : user
## INFO STDOUT                    : slurm-1009559.out
## INFO STDERR                    : slurm-1009559.out
## INFO Nodes                     : 1
## INFO Logical CPUs              : 8
## INFO SBU                       : 20.460 units
## INFO

  • There is no charge made to the project accounts for any SBUs used on the Atos HPCF system until it becomes operational.
  • We are unable to provide a figure for the memory used at this time.

Alternatively, you may use sacct to retrieve some of the statistics from SLURM once the job has finished.
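For example, a minimal sacct query for the job shown in the epilogue above might look like the following. The job ID and the exact set of fields are just an illustration; run `sacct --helpformat` to see all available fields on your system:

```shell
# Query SLURM accounting for a finished job (job ID from the example above).
# MaxRSS gives the peak memory per task, which the epilogue currently lacks.
sacct -j 1009559 --format=JobID,JobName,Elapsed,State,ExitCode,MaxRSS
```

Note that MaxRSS is reported per job step, so you may see several rows for a single job.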

SSD disks on GPIL nodes

The GPIL nodes have local SSDs with around 950 GB of capacity. These have not been mounted yet, as we still need to develop a service model for their use.

Known issues

Intel MKL > 19.0.5 performance issues on AMD chips

Recent versions of MKL do not use the AVX2 kernels for certain operations on non-Intel chips, such as the AMD Rome processors on TEMS. The consequence is a significant drop in performance.
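Until this is resolved, one possible mitigation is to pin an MKL release at or below 19.0.5 when building and running affected applications. A hedged sketch, assuming a module name like `intel-mkl/19.0.5` exists in the software stack (check `module avail` on your system):

```shell
# Hypothetical module name -- verify what is available with `module avail mkl`.
# Releases up to 19.0.5 still select AVX2 kernels on AMD hardware.
module load intel-mkl/19.0.5

# Confirm which MKL library your binary actually resolves at run time
# (./my_program is a placeholder for your own executable):
ldd ./my_program | grep -i mkl
```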
