
...


Special filesystems

Warning: WS1 availability

WS1 is not available yet. Meanwhile, please use ws2 only.

Time-critical option 2 users, or zids, have a special set of filesystems, different from those of regular users. They are served from different storage servers in different computing halls and are not kept in sync automatically. It is the user's responsibility to ensure the required files and directory structures are present on both sides, and to synchronise them if and when needed. This means, for example, that zids have two HOMEs, one on each storage host. All the following storage locations can be referenced through the corresponding environment variables, which are defined automatically for each session or job.
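For example, to bring the home directory on one storage host up to date with the other, an rsync between the two trees would do. This is only an illustrative sketch: it assumes the /ec/ws1 and /ec/ws2 path layout shown in the login banner below, with your own zid in place of zlu.

Code Block (bash): synchronising HOMEs across storage hosts (illustrative)
# copy the ws2 home onto the ws1 one; the trailing slashes make
# rsync copy the directory contents rather than the directory itself
rsync -av /ec/ws2/tc/zlu/home/ /ec/ws1/tc/zlu/home/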

...

The storage server to use is controlled by the environment variable STHOST, which may take the value "ws1" or "ws2". This variable needs to be defined when logging in, and also for all jobs that run in batch. If you log in interactively without passing the variable, you will be prompted to choose the desired STHOST:

No Format

WARNING: ws1 is not currently available.
1) ws1
2) ws2
Please select the desired timecrit storage set for $STHOST: 2

##### # #    # ######  ####  #####  # #####
  #   # ##  ## #      #    # #    # #   #
  #   # # ## # #####  #      #    # #   #
  #   # #    # #      #      #####  #   #
  #   # #    # #      #    # #   #  #   #
  #   # #    # ######  ####  #    # #   #


#    #  ####  ###### #####     ###### #      #    #
#    # #      #      #    #        #  #      #    #
#    #  ####  #####  #    #       #   #      #    #
#    #      # #      #####       #    #      #    #
#    # #    # #      #   #      #     #      #    #
 ####   ####  ###### #    #    ###### ######  ####

[ECMWF-INFO -ecprofile] /usr/bin/ksh93 INTERACTIVE on aa6-100 at 20220207_152402.512, PID: 53964, JOBID: N/A                                                                                                       
[ECMWF-INFO -ecprofile] $HOME=/ec/ws2/tc/zlu/home=/lus/h2tcws01/tc/zlu/home
[ECMWF-INFO -ecprofile] $TCWORK=/ec/ws2/tc/zlu/tcwork=/lus/h2tcws01/tc/zlu/tcwork
[ECMWF-INFO -ecprofile] $SCRATCHDIR=/ec/ws2/tc/zlu/scratchdir/4/aa6-100.53964.20220207_152402.512
[ECMWF-INFO -ecprofile] $TMPDIR=/etc/ecmwf/ssd/ssd1/tmpdirs/zlu.53964.20220207_152402.512
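To skip the interactive prompt, you can define STHOST on your side and ask ssh to forward it, just as the ecFlow examples further down do. A minimal sketch, assuming the tc-login entry point used later on this page:

Code Block (bash): passing STHOST at login (illustrative)
# forward STHOST into the login session so no selection prompt appears
STHOST=ws2 ssh -o SendEnv=STHOST tc-login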

...

Because the "#SBATCH --export" option does not work with a plain sbatch submission on the Atos HPC, the ecsbatch command ("/usr/local/bin/ecsbatch") must be used instead. Troika is configured to use ecsbatch by default.
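For illustration, a minimal job script carrying the export as a directive could look like the sketch below; the job name and output file are placeholders, and the script would be submitted with ecsbatch job.sh (or via troika):

Code Block (bash): minimal job script selecting STHOST (illustrative)
#!/bin/bash
#SBATCH --job-name=tc-example
#SBATCH --output=tc-example.out
#SBATCH --export=STHOST=ws2

# $HOME, $TCWORK and $SCRATCHDIR are set according to $STHOST at job start
echo "Using storage set: $STHOST"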

Tip: sbatch command line option

Like any other SBATCH directive, you may alternatively pass the export option on the sbatch command line instead:

No Format
sbatch --export=STHOST=ws2 job.sh

...



Note: ksh and all ecFlow jobs special requirement

Make sure you include this line right after the SBATCH directives header:

Code Block
source /etc/profile
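
For instance, the top of such a job could look as follows; the directives themselves are placeholders:

Code Block (bash): placement of the profile sourcing (illustrative)
#!/bin/ksh
#SBATCH --job-name=tc-example
#SBATCH --export=STHOST=ws2
source /etc/profile   # sets up the STHOST-dependent environment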


Remote submission from ecFlow

When submitting jobs from ecFlow, you should ensure the STHOST variable is properly passed through the ssh connection.

If using troika, you should ensure that your job management variables set STHOST before calling troika:

Code Block (bash): Job management variables in your suite.def
edit ECF_JOB_CMD STHOST=%STHOST% troika submit -o %ECF_JOBOUT% %SCHOST% %ECF_JOB%
edit ECF_KILL_CMD STHOST=%STHOST% troika kill %SCHOST% %ECF_JOB%
edit ECF_STATUS_CMD STHOST=%STHOST% troika monitor %SCHOST% %ECF_JOB%
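Here the STHOST=%STHOST% prefix simply places the variable in the environment of the submission command, so it is defined when troika runs.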

If not using troika, make sure you pass the STHOST environment variable to the submitting shell:

Code Block (bash): Job management variables in your suite.def
edit ECF_JOB_CMD STHOST=%STHOST% ssh -o SendEnv=STHOST tc-login ...
edit ECF_KILL_CMD STHOST=%STHOST% ssh -o SendEnv=STHOST tc-login ...
edit ECF_STATUS_CMD STHOST=%STHOST% ssh -o SendEnv=STHOST tc-login ...
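Note that ssh's SendEnv only forwards a variable if the remote sshd is configured to accept it (via a matching AcceptEnv entry); if STHOST does not arrive on the remote side, that is the first thing to check.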

High-priority batch access

...