Option 2: Member State ecFlow suites monitored by ECMWF:
Option 3: Member State ecFlow suites managed by ECMWF:
The main characteristics of Member State time-critical work are:
Within this Framework, Member State users can use the Atos HPCF and ECGATE services (HPCF resources are needed to use the HPCF / HPC service). In general, users should minimise the number of systems they use. For example, they should use the ECGATE (ECS) service only if they need to post-process data and the processing is not excessively computationally intensive. Similarly, they should only use the HPCF (HPC) if they need to run computationally intensive work (e.g. a numerical model) and do not need to post-process their output graphically before it is transferred to their Member State. Member State time-critical work may also need to use additional systems outside ECMWF after some processing has been done at ECMWF, for example to run other models using data produced by the work at ECMWF. It is not the purpose of this document to provide guidelines on how to run work which does not make use of ECMWF computing systems.
Every registered user of ECMWF computing systems is allowed to run work using "option 1" of this Framework and no formal request is required. Note that access to our real-time operational data is restricted. Users interested in running this kind of work should refer to the document entitled "Simple time-critical jobs - ECaccess"; see http://software.ecmwf.int/wiki/display/USS/Simple+time-critical+jobs. To run work using "option 2" or "option 3" you will need to submit an official request to the Director of Forecasting Department at ECMWF, signed by the TAC representative of your Member State. Before submitting a request, we advise you to discuss the time-critical work you intend to run at ECMWF with your User Support contact point. Your official request will need to provide the following information:
ECMWF will consider your request and reply officially within three months, taking into account, in particular, the resources required for the implementation of your request.
As this work will be monitored by ECMWF staff (User Support during the development phase; the operators, once your work is fully implemented), the only practical option is to implement time-critical work as a suite under ecFlow, ECMWF's monitoring and scheduling software package. The suite must be developed according to the technical guidelines provided in this document. General documentation, training course material, etc. on ecFlow can be found at ecflow home. No on-call support will be provided by ECMWF staff, but the ECMWF operators can contact the relevant Member State suite support person if this is clearly requested in the suite man pages.
In this case, the Member State ecFlow suite will be managed by ECMWF. The suite will usually be based either on a suite previously developed in the framework of "option 2" or on a similar suite already used to run ECMWF operational work. The suite will be run using the ECMWF operational userid and will be managed by staff in the Production Section at ECMWF. The suite will generally be developed following similar guidelines to "option 2". The main technical differences are that "option 3" work will have higher batch scheduling priority than "option 2" work and will also benefit from first-level on-call support from the ECMWF Production Section staff.
You are advised to discuss the requirements of your work with User Support before you start any implementation. You should test a first rough implementation under your normal Member State userid, using the file systems normally available to you, the standard batch job classes/queues, etc., following the technical guidelines given in this document. You should proceed with the final implementation only after your official request has been agreed by ECMWF.
A specific UID will be created to run a particular suite under "option 2". This UID will be set up as an "application identifier": such UIDs start with a "z", followed by two or three characters. No password will be assigned and access to the UID will be allowed using a strong authentication token (ActivIdentity token). A responsible person will be nominated for every "application identifier" UID. A limited number of other registered users can also be authorised to access this UID, and a mechanism to allow such access under strict control will be available. The person associated with the UID and the other authorised users are responsible for all changes made to the files owned by the UID. The UID will be registered with a specific "policy" ("timecrit") which allows access to restricted batch classes and restricted file systems.
As mentioned earlier, "option 2" work must be implemented, unless otherwise previously agreed, by developing an ecFlow suite. The ecFlow environment is not set up by default for users on the Atos HPCF and ECGATE systems; users have to load it with a module: module load ecflow. ECMWF will create a ready-to-go ecFlow server running on an independent Virtual Machine outside the HPCF. See also Using ecFlow for further information about using ecFlow on the Atos HPCF and ECGATE systems.
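For example, a quick check that the environment is loaded and that the client can reach your server could look like the sketch below (the hostname follows the naming scheme described in the next paragraph; replace <UID> with your application identifier):

# load the ecFlow environment provided by the module system
module load ecflow
ecflow_client --version
# check the connection to your ecFlow server
ecflow_client --host=ecflow-tc2-<UID>-001 --port=3141 --ping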
The ecFlow port number for the suite will be the default 3141, and the ecFlow server will run on the Virtual Machine, which will have a hostname of the form ecflow-tc2-<UID>-001. The server will be started at system boot time, and you should not need to SSH into the Virtual Machine unless there is a problem. If the server dies for some reason, it should be restarted automatically; if it is not, you may restart it manually with:
$ ssh $ECF_HOST sudo systemctl restart ecflow-server
Depending on your activity with ecFlow, the ecFlow log file (/home/$USER/ecflow_server/ecflow-tc2-$USER-001.log) will grow steadily. We recommend that you install either a cron job or an administration task in your suite to clean these ecFlow log files. This can be achieved with the ecflow_client command:
$ ecflow_client --help log

log
---

Get, clear, flush or create a new log file.
The user must ensure that a valid path is specified. Specifying '--log=get'
with a large number of lines from the server can consume a lot of **memory**.
The log file can be a very large file, hence we use a default of 100 lines;
optionally the number of lines can be specified.
  arg1 = [ get | clear | flush | new | path ]
    get   - Outputs the log file to standard out.
            Defaults to return the last 100 lines.
            The second argument can specify how many lines to return.
    clear - Clear the log file of its contents.
    flush - Flush and close the log file. (only temporary) Next time the
            server writes to the log, it will be opened again. Hence it is
            best to halt the server first.
    new   - Flush and close the existing log file, and start using the
            path defined for ECF_LOG. By changing this variable a new
            log file path can be used.
            Alternatively an explicit path can also be provided,
            in which case ECF_LOG is also updated.
    path  - Returns the path name to the existing log file.
  arg2 = [ new_path | optional last n lines ]
    If 'get' is specified, can specify lines to get. Value must be convertible to an integer.
    Otherwise if arg1 is 'new' then the second argument must be a path.
Usage:
  --log=get                        # Write the last 100 lines of the log file to standard out
  --log=get 200                    # Write the last 200 lines of the log file to standard out
  --log=clear                      # Clear the log file. The log is now empty
  --log=flush                      # Flush and close log file, next request will re-open log file
  --log=new /path/to/new/log/file  # Close and flush log file, and create a new log file, updates ECF_LOG
  --log=new                        # Close and flush log file, and create a new log file using ECF_LOG variable

The client reads in the following environment variables. These are read by user and child commands.

|----------|----------|------------|-------------------------------------------------------------------|
| Name     | Type     | Required   | Description                                                       |
|----------|----------|------------|-------------------------------------------------------------------|
| ECF_HOST | <string> | Mandatory* | The host name of the main server. Defaults to 'localhost'         |
| ECF_PORT | <int>    | Mandatory* | The TCP/IP port to call on the server. Must be unique to a server |
| ECF_SSL  | <any>    | Optional*  | Enable encrypted comms with SSL enabled server.                   |
|----------|----------|------------|-------------------------------------------------------------------|

* The host and port must be specified in order for the client to communicate
  with the server; this can be done by setting ECF_HOST, ECF_PORT or by
  specifying --host=<host> --port=<int> on the command line.
For example, to empty the log file, use:
ecflow_client --port=%ECF_PORT% --host=%ECF_HOST% --log=clear
For information about using crontabs on the Atos HPCF and ECGATE service, please see Cron service. Note in particular that your crontab should be installed on either ecs-cron or hpc-cron.
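For example, a crontab entry to clear the ecFlow server log once a day could look like the following sketch; the schedule is arbitrary and <UID> must be replaced with your application identifier:

# clear the ecFlow server log every day at 01:15
15 1 * * * ecflow_client --host=ecflow-tc2-<UID>-001 --port=3141 --log=clear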
We recommend that the job output files are stored on the Lustre $TCWORK file system. As this file system cannot be accessed directly from the ecFlow Virtual Machine, we recommend using the simple log server (a Perl script) to access the output files of jobs running on the HPCF. This log server requires another port number, which will have the format "35000+<UID>", where <UID> is the numeric uid of the userid used to run the work. The log server will run on the hpc-log node of the Atos HPCF and should be started with the ecflow_logserver.sh command, whose syntax is:
$ ecflow_logserver.sh -h
Usage: /usr/bin/ecflow_logserver.sh [-d <dir>] [-m <map>] [-l <logfile>] [-h]
       -d <dir>     specify the directory name where files will be served from - default is $HOME
       -m <map>     gives mapping between local directory and directory where ecflow server runs - default is <dir>:<dir>
       -l <logfile> logserver log file - default is $SCRATCH/log/logfile
       -h           print this help page
Example:
       start_logserver.sh -d %ECF_OUT% -m %ECF_HOME%:%ECF_OUT% -l logserver.log
The mapping can consist of a succession of mappings. Each individual mapping first gives the directory name on the ecFlow server, followed by the directory name on the HPC system, as in the following example:
-m <dir_ecflow_vm>:<dir1_hpc>:<dir_ecflow_vm>:<dir2_hpc>
We recommend that you implement a cron job or define an administration task in your suite to check the presence of the log server process. The above script ecflow_logserver.sh can be used for this purpose. The logserver should be started on hpc-log.
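A minimal watchdog along these lines could be run from cron on hpc-log; the pgrep pattern, directories and log file location below are assumptions to be adapted to your own setup:

#!/bin/bash
# restart the log server if it is no longer running
set -u
if ! pgrep -u "$USER" -f ecflow_logserver >/dev/null 2>&1; then
    ecflow_logserver.sh -d "$TCWORK" -m "$TCWORK:$TCWORK" -l "$HOME/logserver.log"
fi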
ecFlow will manage your jobs. Three main actions on the ecFlow tasks are required: one to submit, one to check and one to kill a task. These three actions are defined through the ecFlow variables ECF_JOB_CMD, ECF_STATUS_CMD and ECF_KILL_CMD respectively. You can use any script to take these actions on your tasks, but we recommend that you use the troika tool provided by ECMWF. The troika command is installed on your ecFlow Virtual Machine and can be used to submit, check or kill a task:
$ troika -h
usage: troika [-h] [-V] [-v] [-q] [-l LOGFILE] [-c CONFIG] [-n] action ...

Submit, monitor and kill jobs on remote systems

positional arguments:
  action                perform this action, see `troika <action> --help` for details
    submit              submit a new job
    monitor             monitor a submitted job
    kill                kill a submitted job
    check-connection    check whether the connection works
    list-sites          list available sites

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         show program's version number and exit
  -v, --verbose         increase verbosity level (can be repeated)
  -q, --quiet           decrease verbosity level (can be repeated)
  -l LOGFILE, --logfile LOGFILE
                        save log output to this file
  -c CONFIG, --config CONFIG
                        path to the configuration file
  -n, --dryrun          if true, do not execute, just report

environment variables:
  TROIKA_CONFIG_FILE    path to the default configuration file
We recommend you set your ecFlow variables to use troika as follows:
ECF_JOB_CMD="troika -vv submit -u %USER% -o %ECF_JOBOUT% %HOST% %ECF_JOB%"
ECF_KILL_CMD="troika kill -u %USER% %HOST% %ECF_JOB%"
ECF_STATUS_CMD="troika monitor -u %USER% %HOST% %ECF_JOB%"
ecFlow allows access to an ecFlow server to be restricted, using the ecf.list file in $ECF_HOME. We recommend that you set up and use this file, mainly to allow ECMWF staff to monitor your suite and to prevent unintentional access by other users. A sample file is available in ~usx/time_critical/ecf.list.
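As an illustration only, such a white-list file could look like the sketch below; please take the sample file in ~usx/time_critical/ecf.list as the reference for the exact layout (the version number and userids here are placeholders):

# first non-comment line: white-list file version number
4.4.14
# users with full (read/write) access
zab
uid1
# users with read-only access are prefixed with '-'
-uid2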
Some key points to keep in mind when designing your suite:
The worst that could happen is that you lose everything and need to restart from scratch. Although this is very unlikely, you should keep safe copies of your libraries, executables and other constant data files. To achieve flexibility in the configuration of your suite, we recommend that you have one core suite and define ecFlow variables for all the configuration changes you want to cater for. See the variable definitions in the suite definition file ~usx/time_critical/sample_suite.def.
All tasks that are part of the critical path, i.e. that will produce the final "products" to be used by you, have to run in the safest environment:
One key point for successful communication between the jobs running on the HPCF systems and your ecFlow server is error handling. We recommend the use of a trap, as illustrated in the sample suite in ~usx/time_critical/include/head.h. The shell script run by your batch job should also use the "set -ue" options.
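The following is a minimal sketch in the spirit of such a header file; please take ~usx/time_critical/include/head.h as the reference version. The variables in %...% are substituted by ecFlow at job generation time:

#!/bin/bash
set -ue                        # fail on unset variables and on any error

export ECF_HOST=%ECF_HOST%     # ecFlow server host
export ECF_PORT=%ECF_PORT%     # ecFlow server port
export ECF_NAME=%ECF_NAME%     # path of the task in the suite
export ECF_PASS=%ECF_PASS%     # job password generated by the server
export ECF_TRYNO=%ECF_TRYNO%   # current try number of the task

# tell the ecFlow server that the job has started
ecflow_client --init=$$

# on any error or signal, report an abort to the server before exiting
ERROR() {
  set +e                       # avoid recursive failures while reporting
  ecflow_client --abort="trapped error"
  trap 0
  exit 1
}
trap ERROR 0
trap '{ echo "Killed by a signal"; ERROR ; }' 1 2 3 15

The matching tail of the job would then call ecflow_client --complete and reset the trap (trap 0) once the task has finished successfully.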
A sample suite illustrating the previous recommendation is available in ~usx/time_critical/sample_suite.def.
File systems have been set up on the HPCF clusters for the UIDs which will be used to run the time-critical applications: they are called /ec/ws1 and /ec/ws2 on the current Atos HPC system. These file systems are quota controlled, and you will therefore need to provide User Support with an estimate of the total size and number of files which you need to keep on them.
This file system should be used to hold both your binaries/libraries and input and output data.
No select/delete process will run on this file system and you will be required to regularly remove any unnecessary files as part of your suite.
You will also be required to keep safe backup copies of your binaries, etc. in ECFS. It is recommended to include a task at the beginning of your suite, not to be run every day, that will restore your environment in case of emergency ("restart from scratch" functionality).
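A hypothetical restore task could look like the sketch below; the ECFS and disk paths are illustrative and should be adapted to your own suite layout:

#!/bin/bash
# "restart from scratch": restore binaries and constant data from ECFS
set -ue

ECFS_BACKUP=ec:/$USER/time_critical/backup   # assumed ECFS backup directory
RUN_DIR=$TCWORK/bin                          # assumed run-time directory

mkdir -p "$RUN_DIR"
# fetch the backup archive from ECFS and unpack it into the run directory
ecp "$ECFS_BACKUP/binaries.tar.gz" "$TMPDIR/binaries.tar.gz"
tar -xzf "$TMPDIR/binaries.tar.gz" -C "$RUN_DIR"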
If there is a need for a file system with different characteristics (e.g. to keep files safely online for several days), these requirements can be discussed with User Support and a file system with the required functionality can be made available.
Specific batch job queues have been set up on the HPCF clusters with access restricted to the UIDs authorised to run "option 2" work only. They are called "tf" and "tp", respectively for sequential or fractional work (work using less than half of one node) and parallel work (work using more than one node). These are the queues you should use to run any time-critical work on the Atos HPCF. If there are any non time-critical tasks in your suite (e.g. archiving tasks), these can use the other queues normally available to users. Archiving tasks should always use the nf queue and should not be included as part of your parallel work.
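For illustration, a batch job header for a time-critical task could look like the sketch below, assuming the "tf" and "tp" queues are selected with the SLURM --qos directive in the same way as the standard queues; the job name, time limit and commands are placeholders:

#!/bin/bash
#SBATCH --qos=tf               # "tf" for fractional work; use "tp" for parallel work
#SBATCH --job-name=tc_post
#SBATCH --time=00:30:00

module load ecflow
# ... time-critical processing commands ...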
When you develop or test a new version of your time-critical suite, we advise you to use the standard classes or queues available to all users. In this way, your time-critical activity will not be delayed by this testing or development work.
Your work will normally require some "input" data before processing can start. The following possibilities have been considered:
If your required data are produced by ECMWF, they will also be available in the FDB as soon as the relevant model has produced them, and they will remain online for a limited amount of time (which varies depending on the model). You can access these data using the usual "mars" command. If your suite requires access to data which may no longer be contained in the FDB (e.g. EPS model level data from previous EPS runs), then your suite needs to access these data before they are removed from the FDB and temporarily store them in one of your disk storage areas.
For no reason should any of your time-critical suite tasks depend on data only available from the Data Handling System (MARS archive or ECFS). Beware that the usage of the parameter ALL in any mars request will automatically redirect it to the MARS archive (DHS). Note also that we recommend you do not use abbreviations for a verb, parameter or value in your mars requests. If too short, these abbreviations may become ambiguous if a new verb, parameter or value name is added to the mars language.
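For illustration, a minimal retrieval from the FDB could look like the sketch below; the parameters, dates, steps and target path are placeholders to be adapted to your suite, and the request would normally be issued from within one of your batch jobs:

# retrieve recent surface forecast fields from the FDB into $TCWORK
mars <<EOF
retrieve,
  class    = od,
  stream   = oper,
  expver   = 1,
  type     = fc,
  levtype  = sfc,
  param    = 2t/msl,
  date     = 0,
  time     = 00,
  step     = 0/to/24/by/6,
  target   = "$TCWORK/input/fc_sfc.grib"
EOF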
Your work may require input data which are neither produced by any of the ECMWF models nor available at ECMWF. In that case you will be responsible for setting up the required "acquisition" tasks and for establishing their level of time criticality. For example, your suite may need some additional observations which improve the quality of your assimilation, but your work can also run without them if there is a delay or problem with their arrival at ECMWF. Please see the section "Data transfers" for advice on how to transfer incoming data.
We recommend the use of the ectrans command to send data to remote sites. The command has recently been enhanced to include retries of transfers from its spool.
We recommend setting up the ectrans remote associations on your local ECaccess gateway. If this is not available, you can set up remote associations on the ECaccess gateway at ECMWF.
Note that, by default, ectrans transfers are asynchronous: the successful completion of the ectrans command does not mean your file has been transferred successfully. You may want to use the option "-put" to request synchronous transfers.
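For illustration, a synchronous transfer of a product file could look like the sketch below; "my_association" is a hypothetical remote association name defined on your ECaccess gateway, and the source and target names are placeholders:

# synchronous transfer: a non-zero exit status means the transfer failed
ectrans -remote my_association \
        -source $TCWORK/products/product.grib \
        -target product.grib \
        -put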
More recently, we have enhanced the ECMWF dissemination system (ECPDS) to allow the dissemination of data produced by "option 2" suites to remote sites. This option is more robust than ectrans and is now the recommended one for "option 2" time-critical work. Please contact User Support for more information on this.
We recommend using ectrans (with the option -get) to upload data from a remote site to ECMWF. Other options, including the possible use of ECPDS, may be considered in specific situations; please discuss these with User Support.
The ECGATE (ECS) and the HPCF (HPC) systems share the same file systems, so there should be no need to transfer data between them. If you need to transfer data to other systems at ECMWF, then we recommend that you use rsync. We remind you not to use the DHS (MARS or ECFS) for any tasks in the critical path.
The UIDs authorised to run "option 2" work will be given higher batch job scheduling priority than normal Member State work. All "option 2" UIDs will be given the same priority. The Centre's core operational activities will always be given higher priority than Member State time-critical activities. If problems affecting the core activity suites arise, or if long system sessions are needed, the Member State suites and jobs may be delayed or possibly abandoned. Member State users may therefore need to consider setting up suitable backup procedures for such eventualities.
The UIDs authorised to run "option 2" work have access to all Atos HPCF complexes and are advised to implement their suite so that it is ready to run on the cluster they normally do not use, in case the primary cluster is unavailable for an extended period of time.
The two separate HPCF environments (currently only the /ec/ws1 and /ec/ws2 file systems) should be kept regularly synchronised using utilities such as rsync.
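For example, a regular synchronisation task could be as simple as the sketch below; the subdirectory name is hypothetical and should be adapted to your own layout:

#!/bin/bash
# mirror the suite's data from the primary file system to the backup one
set -ue

SRC=/ec/ws1/my_suite_dir     # primary copy (hypothetical subdirectory)
DST=/ec/ws2/my_suite_dir     # backup copy
rsync -a --delete "$SRC/" "$DST/"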
It should be possible to change the HPCF cluster used by the suite with a simple change of an ecFlow variable (the variable SCHOST in the sample suite).
Similarly, it should be possible to change the file system used by the suite by changing an ecFlow variable (the variable STHOST in the sample suite). Users may also wish to consider setting up more sophisticated backup procedures, such as the regular creation of backup products based on previous runs of the suite.
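For illustration, such a switch could be made from the command line as in the sketch below; the suite name and the values given here are hypothetical:

# point the suite at the backup cluster and file system
ecflow_client --host=ecflow-tc2-<UID>-001 --port=3141 \
              --alter change variable SCHOST hpc-backup /my_suite
ecflow_client --host=ecflow-tc2-<UID>-001 --port=3141 \
              --alter change variable STHOST /ec/ws2 /my_suite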
Users wishing to set up Member State time-critical suites at the Centre should carefully consider their requirements regarding the long term storage of the products of their suite.
In particular, they should consider whether they want to archive their time-critical application's output data in MARS. In such a case, users are advised to contact their User Support contact point to start discussing the relevant technical issues, which are beyond the scope of this document. The same recommendation applies to users wishing to consider the storage of their suite's output in the online FDB (ECMWF's Fields Data Base). If the model/application producing the suite's output is already enabled to store into the FDB, then the disk space to be used is /sc1/tcwork/ms_fdb or /sc2/tcwork/ms_fdb, e.g. FDB_ROOT=/sc1/tcwork/ms_fdb on each cluster.
For most users we recommend that their time-critical application’s output data is stored in the ECFS system, if this is required.
Please note that no time-critical task in your suite should depend on the completion of an archiving task.
Please also note that possible users of your suite's products should be advised not to depend on the availability of such products in any part of the DHS system (both ECFS and MARS archive), as its services can be unavailable for several hours.
Once your "option 2" suite is declared to be running in time-critical mode, we recommend that you no longer modify it for new developments. Instead, we recommend that you define a similar suite in parallel to the time-critical one and test any changes under that suite first. When you make important changes to your suite, we recommend that you inform ECMWF via the Support Portal.
At ECMWF, we will set up the appropriate information channels to keep you aware of changes that may affect your time-critical activity. The most appropriate tool is a mailing list.
We welcome any comments on this document and on the framework for time-critical applications. In particular, please let us know if any of the above general-purpose scripts do not fit your requirements; we will then try to incorporate the changes needed.