
...

| QoS name | Type | Suitable for... | Shared nodes | Maximum jobs per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory |
|---|---|---|---|---|---|---|---|
| tf | fractional | serial and small parallel jobs | Yes | - | 1 day / 1 day | 1 / 64 | 8 GB / 128 GB |
| tp | parallel | parallel jobs requiring more than half a node | No | - | 6 hours / 1 day | - | - |
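The QoS limits in the table map directly onto batch directives. A minimal sketch of a job targeting the fractional tf QoS, assuming SLURM as the batch system on the Atos HPCF (the resource values shown are illustrative and must stay within the table's limits):

```shell
#!/bin/bash
# Illustrative SLURM job for the fractional "tf" QoS.
#SBATCH --qos=tf            # fractional QoS: serial and small parallel jobs
#SBATCH --ntasks=4          # within the 64-CPU maximum
#SBATCH --time=06:00:00     # within the 1 day wall clock limit
#SBATCH --mem=16G           # within the 128 GB maximum
#SBATCH --output=job.%j.out

srun ./my_program           # my_program is a placeholder for your executable
```

Jobs needing more than half a node would use `--qos=tp` instead, which does not share nodes.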

ecFlow settings

Warning

Please avoid running ecFlow servers yourself on HPCF nodes. If you are still running one, please get in touch with us through the ECMWF Support Portal to discuss your options.

The ecFlow servers will run on dedicated virtual machines very similar to the ones described in HPC2020: Using ecFlow.

...

 However, due to the special nature of the zid users, here are some remarks you should consider:

  • The name of the machine will be ecflow-tc2-zid-number. If you don't have a server yet, please raise an issue through the ECMWF support portal requesting one.
  • The HOME on the VM running the server is not the same as either of the two $HOMEs on the HPCF (one per STHOST selected).
  • For your convenience, the ecFlow server's home can be accessed directly from any Atos HPCF node on /home/zid.  
  • You should keep the suite files (.ecf files and headers) in the ecFlow server's HOME, while using the native Lustre filesystems in the corresponding STHOST as working directories for your jobs.
  • You may also need to run a log server on the HPCF (using the hpc-log node), depending on where your job output is written and where you run ecFlowUI, so that you can inspect the job outputs.
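To check that your dedicated server is reachable from an Atos HPCF node, you can point the ecFlow client at it. A hedged sketch, where the host name follows the pattern described above and the port number is hypothetical (use the one assigned to your server):

```shell
# Point the ecFlow client at the TC2 server (host/port are placeholders).
export ECF_HOST=ecflow-tc2-zid-number   # your dedicated ecFlow VM
export ECF_PORT=3141                    # hypothetical: use your assigned port

ecflow_client --ping                    # verify the server responds
ecflow_client --stats                   # basic server status information
```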

For job submission and management, these TC2 ecFlow servers, like the general-purpose ecFlow servers, come with "troika", the tool used in production at ECMWF to manage the submission, kill and status query of operational jobs. Troika is currently being finalised for the Atos HPCF and will be made available together with the ecFlow VM service.
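In a suite, the server typically drives troika through the ecFlow job-management variables. A hedged sketch of what this wiring can look like (the exact troika subcommands and the %SCHOST% destination variable are assumptions based on common ecFlow/ECMWF conventions; check the defaults configured on your server):

```
# Hypothetical ecFlow suite variables delegating job management to troika.
ECF_JOB_CMD    = troika submit -o %ECF_JOBOUT% %SCHOST% %ECF_JOB%
ECF_KILL_CMD   = troika kill %SCHOST% %ECF_JOB%
ECF_STATUS_CMD = troika monitor %SCHOST% %ECF_JOB%
```

With this setup, ecFlow never calls the batch system directly; troika translates each submit, kill or status request for the target host given in %SCHOST%.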