
If you find a problem, or a feature you think should be present is missing, and it is not listed here, please let us know by reporting it as a "Problem on computing" through the ECMWF Support Portal, mentioning "Atos" in the summary.

The Atos HPCF is not an operational platform yet, and many features or elements may be added gradually as the complete setup is finalised. Here is a list of the known limitations, missing features and issues.

Missing Features

Comprehensive software stack

We have provided a basic software stack that should satisfy most users, but some software packages or libraries you require may not be present. If that is the case, let us know by reporting it as a "Problem on computing" through the ECMWF Support Portal, mentioning "Atos HPCF" in the summary.
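Before requesting an addition, you can check what is already installed with the standard environment module commands. A minimal sketch (the package name below is only an illustration; use module avail to see what is really there):

$ module avail                # list all installed packages and versions
$ module load netcdf4         # hypothetical example: load a library into your environment
$ module list                 # show what is currently loaded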

End of job information

A basic report is provided at the end of the job output with information about its execution:

[ECMWF-INFO -ecepilog] --------------------------------------------------------------------------------------------- 
[ECMWF-INFO -ecepilog] This is the ECMWF job Epilogue
[ECMWF-INFO -ecepilog] +++ Please report issues using the Support portal +++
[ECMWF-INFO -ecepilog] +++ https://support.ecmwf.int                     +++
[ECMWF-INFO -ecepilog] ---------------------------------------------------------------------------------------------
[ECMWF-INFO -ecepilog]
[ECMWF-INFO -ecepilog] Run at 2021-09-28T06:21:25 on aa
[ECMWF-INFO -ecepilog] Job Name                  : eci
[ECMWF-INFO -ecepilog] Job ID                    : 1009559
[ECMWF-INFO -ecepilog] Submitted                 : 2021-09-28T06:05:23
[ECMWF-INFO -ecepilog] Dispatched                : 2021-09-28T06:05:23
[ECMWF-INFO -ecepilog] Completed                 : 2021-09-28T06:21:25
[ECMWF-INFO -ecepilog] Waiting in the queue      : 0.0
[ECMWF-INFO -ecepilog] Runtime                   : 962
[ECMWF-INFO -ecepilog] Exit Code                 : 0:0
[ECMWF-INFO -ecepilog] State                     : COMPLETED
[ECMWF-INFO -ecepilog] Account                   : myaccount
[ECMWF-INFO -ecepilog] Queue                     : nf
[ECMWF-INFO -ecepilog] Owner                     : user
[ECMWF-INFO -ecepilog] STDOUT                    : slurm-1009559.out
[ECMWF-INFO -ecepilog] STDERR                    : slurm-1009559.out
[ECMWF-INFO -ecepilog] Nodes                     : 1
[ECMWF-INFO -ecepilog] Logical CPUs              : 8
[ECMWF-INFO -ecepilog] SBU                       : 20.460 units
[ECMWF-INFO -ecepilog]
  • There is no charge made to the project accounts for any SBUs used on the Atos HPCF system until it becomes operational.
  • We are unable to provide a figure for the memory used at this time.

Alternatively, you may use sacct to retrieve some of the statistics from SLURM once the job has finished, as shown below.
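For example, a minimal sacct query for the job shown in the epilogue above (all of the fields below are standard sacct format options) could look like this:

$ sacct -j 1009559 --format=JobID,JobName,Elapsed,State,ExitCode,NNodes

Note that sacct only has information for jobs already recorded in the SLURM accounting database.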

Connectivity

  • Direct access to the Atos HPCF through ECACCESS or Teleport is not yet available. See HPC2020: How to connect for more information.
  • SSH connections to/from VMs in Reading running ecFlow servers are not available. For more details on ecFlow usage, see HPC2020: Using ecFlow.
  • Load balancing between Atos HPCF interactive login nodes is not ready yet. When implemented, an ssh connection to the main alias for the HPCF may create a session on an arbitrary login node.

Filesystems

The select/delete policy in SCRATCH has not been enforced yet.
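Until the policy is active, it is a good idea to clean up old data in SCRATCH yourself. A minimal sketch, assuming the usual $SCRATCH environment variable points to your scratch space and using 30 days as an illustrative age threshold:

$ find $SCRATCH -type f -atime +30    # list files not accessed in the last 30 days
$ # review the list before deleting anything it reports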

See HPC2020: Filesystems for all the details.

prepIFS

The prepIFS environment for running IFS experiments on Atos is still under development and is not yet ready for general use. A further announcement will be made when users are invited to start running prepIFS experiments on Atos and to migrate their workflows.

ECACCESS and Time-Critical Option 1 (and 2) features

The ECACCESS web toolkit services, such as job submission (including Time-Critical Option 1 jobs), file transfers and ectrans, have not yet been set up to use the Atos HPCF.

Time-Critical Option 2 users enjoy a special setup with additional redundancy in terms of filesystems, to minimise the impact of failures or planned maintenance. However, this setup has not been finalised yet, so we recommend not starting to use these accounts until the configuration is complete.

ecFlow service

While the availability of virtual infrastructure to run ecFlow servers remains limited, you may start your ecFlow servers on the dedicated interim HPCF node in order to run your suites.

At a later stage, those ecFlow servers will need to be moved to dedicated Virtual Machines outside the HPCF, where practically no local tasks will be able to run. All ecFlow tasks will then need to be submitted to one of the HPCF complexes through the corresponding batch system.

Please do keep that in mind when migrating or designing your solution.

See HPC2020: Using ecFlow for all the details.
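As a rough illustration of the interim setup, a personal ecFlow server can be started with the ecflow_start.sh script shipped with ecFlow (the module name, option and path below are assumptions; check HPC2020: Using ecFlow for the actual procedure):

$ module load ecflow                       # assuming an ecflow module is available in the stack
$ ecflow_start.sh -d $HOME/ecflow_server   # start a server using this directory as its home

Once servers move to the dedicated VMs, task jobs themselves will have to go through the batch system, for example by making the suite's ECF_JOB_CMD variable submit the generated job script with sbatch instead of running it locally.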

Known issues

Intel MKL greater than 19.0.5: performance issues on AMD chips

Recent versions of MKL do not use the AVX2 kernels for certain operations on non-Intel chips, such as the AMD Rome processors on our HPCF. The consequence is a significant drop in performance.
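If this affects your application, one possible mitigation is to stay on MKL 19.0.5 or older until the issue is resolved. A sketch, assuming the module is called intel-mkl (check the exact name and available versions with module avail):

$ module avail intel-mkl          # list the MKL versions installed
$ module load intel-mkl/19.0.5    # pin the last version not affected by the slowdown

On some affected MKL versions the undocumented environment variable MKL_DEBUG_CPU_TYPE=5 has been reported to restore the AVX2 code paths on AMD CPUs, but it was removed in later MKL releases, so do not rely on it.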
