Under the Framework for time-critical applications, Member States can run ecFlow suites monitored by ECMWF. Known as Option 2 within that framework, these suites benefit from a special technical setup that maximises robustness and high availability, similar to ECMWF's own operational production. When moving from a standard user account to a time-critical one (typically a "z" followed by two or three characters) there are a number of things you must be aware of.

Special filesystems

Time-critical Option 2 users, or zids, have a special set of filesystems, different from those of regular users. They are served from two storage servers located in different computing halls and are not kept in sync automatically. It is the user's responsibility to ensure the required files and directory structures are present on both sides and to synchronise them if and when needed. This means, for example, that zids have two HOMEs, one on each storage host. All the storage locations below can be referenced by the corresponding environment variables, which are defined automatically for each session or job.

HOME
  Suitable for: permanent files, e.g. profile, utilities, sources, libraries, etc.
  Technology: Lustre (on ws1 and ws2)
  Features:
    • No backup
    • No snapshots
    • No automatic deletion
    • Unthrottled I/O bandwidth
    • Not accessible from outside the HPCF
  Quota: 100 GB

TCWORK
  Suitable for: permanent large files; the main storage for your jobs' and experiments' input and output files.
  Technology: Lustre (on ws1 and ws2)
  Features:
    • No backup
    • No snapshots
    • No automatic deletion
    • Unthrottled I/O bandwidth
    • Not accessible from outside the HPCF
  Quota: 50 TB

SCRATCHDIR
  Suitable for: big temporary data for an individual session or job; not as fast as TMPDIR but higher capacity. Files are accessible from the whole cluster.
  Technology: Lustre (on ws1 and ws2)
  Features:
    • Created per session or job
    • Deleted at the end of the session or job
    • Not accessible from outside the HPCF
  Quota: part of the TCWORK quota

TMPDIR
  Suitable for: fast temporary data for an individual session or job, small files only. Local to every node.
  Technology: SSD on shared nodes (*f QoSs), or RAM on exclusive compute nodes (*p QoSs)
  Features:
    • Created per session or job
    • Deleted at the end of the session or job
    • Not accessible from outside the compute node
  Quota: on SSD, 3 GB per session/job by default, customisable up to 40 GB with --gres=ssdtmp:<size>G; in RAM, no limit (up to the maximum memory of the node)
Note that there is no PERM or SCRATCH, and the corresponding environment variables will not be defined.
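Since the two storage sets are not kept in sync automatically, you may occasionally want to mirror critical files from one to the other yourself. Below is a minimal sketch using rsync for a hypothetical zid "zxy", assuming both sets are mounted under /ec/ws1 and /ec/ws2 as in the login example further down and are reachable from the node where you run it; verify the paths for your own account before relying on this:

rsync -av /ec/ws1/tc/zxy/home/ /ec/ws2/tc/zxy/home/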

Selecting the STHOST

The storage server to use is controlled by the environment variable STHOST, which may take the value "ws1" or "ws2". This variable needs to be defined when logging in, and also in every batch job. If you log in interactively without passing the variable, you will be prompted to choose the desired STHOST:

WARNING: ws1 is not currently available.
1) ws1
2) ws2
Please select the desired timecrit storage set for $STHOST: 2

##### # #    # ######  ####  #####  # #####
  #   # ##  ## #      #    # #    # #   #
  #   # # ## # #####  #      #    # #   #
  #   # #    # #      #      #####  #   #
  #   # #    # #      #    # #   #  #   #
  #   # #    # ######  ####  #    # #   #


#    #  ####  ###### #####     ###### #      #    #
#    # #      #      #    #        #  #      #    #
#    #  ####  #####  #    #       #   #      #    #
#    #      # #      #####       #    #      #    #
#    # #    # #      #   #      #     #      #    #
 ####   ####  ###### #    #    ###### ######  ####

[ECMWF-INFO -ecprofile] /usr/bin/ksh93 INTERACTIVE on aa6-100 at 20220207_152402.512, PID: 53964, JOBID: N/A                                                                                                       
[ECMWF-INFO -ecprofile] $HOME=/ec/ws2/tc/zlu/home=/lus/h2tcws01/tc/zlu/home
[ECMWF-INFO -ecprofile] $TCWORK=/ec/ws2/tc/zlu/tcwork=/lus/h2tcws01/tc/zlu/tcwork
[ECMWF-INFO -ecprofile] $SCRATCHDIR=/ec/ws2/tc/zlu/scratchdir/4/aa6-100.53964.20220207_152402.512
[ECMWF-INFO -ecprofile] $TMPDIR=/etc/ecmwf/ssd/ssd1/tmpdirs/zlu.53964.20220207_152402.512

You can avoid that prompt by passing the environment variable with the desired value:

STHOST=ws2 ssh -o SendEnv=STHOST hpc-login
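If you log in frequently, you can make this permanent in your OpenSSH client configuration rather than typing the option each time. A possible ~/.ssh/config fragment on your local machine, reusing the hpc-login alias from the example above (this works because the login nodes already accept the forwarded variable, as the SendEnv option above relies on):

Host hpc-login
    SendEnv STHOST

You would then simply export STHOST=ws2 in your local shell before running ssh hpc-login.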

Batch Jobs

When submitting jobs, you must ensure you pass the desired STHOST to your jobs with the corresponding SBATCH export directive. For example, to select ws2 in the job script you need to add:

#SBATCH --export=STHOST=ws2

Because "#SBATCH --export" option doesn't work with a simple sbatch submission on the Atos HPC, ecsbatch ("/usr/local/bin/ecsbatch") command must be used instead. Troika is configured to use ecsbatch by default.

sbatch command line option

Like any other SBATCH directive, you may alternatively pass the export in the sbatch command line instead:

sbatch --export=STHOST=ws2 job.sh

Special requirement for ksh and all ecFlow jobs

Make sure you include this line right after the SBATCH directives header:

source /etc/profile
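Putting these pieces together, the top of a time-critical batch job could look like the following sketch; the job name and the body are placeholders, and only the STHOST export and the profile sourcing are the points being illustrated:

#!/bin/bash
#SBATCH --job-name=tc-example
#SBATCH --export=STHOST=ws2

# required right after the SBATCH header (mandatory for ksh and all ecFlow jobs)
source /etc/profile

# from here on $HOME, $TCWORK and $SCRATCHDIR point at the ws2 storage set
cd "$TCWORK"
echo "Running on $(hostname) with STHOST=$STHOST"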

Remote submission from ecFlow

When submitting jobs from ecFlow, you should ensure the STHOST variable is properly passed on through the SSH connection.

If using troika, you should ensure that your job management variables export the variable before calling troika:

Job management variables in your suite.def
edit ECF_JOB_CMD STHOST=%STHOST% troika submit -o %ECF_JOBOUT% %SCHOST% %ECF_JOB%
edit ECF_KILL_CMD STHOST=%STHOST% troika kill %SCHOST% %ECF_JOB%
edit ECF_STATUS_CMD STHOST=%STHOST% troika monitor %SCHOST% %ECF_JOB%

If not using troika, make sure you pass the STHOST environment variable to the submitting shell:

Job management variables in your suite.def
edit ECF_JOB_CMD STHOST=%STHOST% ssh -o SendEnv=STHOST tc-login ...
edit ECF_KILL_CMD STHOST=%STHOST% ssh -o SendEnv=STHOST tc-login ...
edit ECF_STATUS_CMD STHOST=%STHOST% ssh -o SendEnv=STHOST tc-login ...
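Whichever variant you use, %STHOST% (and %SCHOST%) must be defined somewhere in the suite for the substitution to work. A minimal illustrative fragment, in which the values are examples only and the actual SCHOST depends on your destination:

Suite variables in your suite.def
edit STHOST ws2
edit SCHOST hpc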

High-priority batch access

As a zid, you will be able to access the "t*" time-critical QoSes which have a higher priority than their standard "n*" counterparts:

tf (fractional)
  Suitable for: serial and small parallel jobs
  Shared nodes: Yes
  Maximum jobs per user: -
  Default / Max Wall Clock Limit: 1 day / 1 day
  Default / Max CPUs: 1 / 64
  Default / Max Memory: 8 GB / 128 GB

tp (parallel)
  Suitable for: parallel jobs requiring more than half a node
  Shared nodes: No
  Maximum jobs per user: -
  Default / Max Wall Clock Limit: 6 hours / 1 day
  Default / Max CPUs: -
  Default / Max Memory: -
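To place a job in one of these QoSs, request it explicitly with the usual Slurm option, either as a directive in the job script or on the sbatch/ecsbatch command line. For example, for the fractional time-critical QoS:

#SBATCH --qos=tf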

ecFlow settings

Please avoid running the ecFlow servers yourself on HPCF nodes. If you still have one, please get in touch with us through the ECMWF support portal to discuss your options.

The ecFlow servers will run on dedicated virtual machines very similar to the ones described in HPC2020: Using ecFlow. However, due to the special nature of the zid users, here are some remarks you should consider:

  • The name of the machine will be either ecflow-tc2-zid-number (old name scheme) or ecft-zid-number (new name scheme). If you don't have a server yet, please raise an issue through the ECMWF support portal requesting one.
  • The HOME on the VM running the server is not the same as either of the two $HOMEs on the HPCF, which depend on the STHOST selected.
  • For your convenience, the ecFlow server's home can be accessed directly from any Atos HPCF node on /home/zid.  
  • You should keep the suite files (.ecf files and headers) on the ecFlow server's HOME, while using the native Lustre filesystems in the corresponding STHOST as working directories for your jobs.
  • You may also want to use the server's HOME for your job standard output and error. That should make it easier when it comes to monitoring with ecFlowUI. Otherwise, you may also need to run a log server on the HPCF (using the hpc-log node) if your job output goes to a Lustre-based filesystem.
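If your job output does go to a Lustre-based filesystem and you run a log server on the hpc-log node mentioned above, ecFlowUI can be pointed at it through the standard log-server variables. A minimal sketch, where the port number is only a placeholder and must match whatever your log server actually listens on:

Log server variables in your suite.def
edit ECF_LOGHOST hpc-log
edit ECF_LOGPORT 9316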

Like the general-purpose ecFlow servers, these TC2 ecFlow servers come with troika, the tool used in production at ECMWF to manage job submission, kill and status queries for operational jobs.