...

Files Location

For a maintainable operational suite, we recommend the following:

  • start the server in a local /tmp directory (/:ECF_HOME)

  • define the ECF_FILES and ECF_INCLUDE variables at suite and family level: script wrappers will then be accessible for creation and update under these directories.

  • define /suite:ECF_HOME as another directory location, where jobs and related outputs will be found. These dynamic files may then be tar’ed as a snapshot of the effective work associated with a suite/family, for later analysis or rerun.

  • when the server and the remote job destination do not share a common directory for output files, the ECF_OUT variable needs to be present in the suite definition: it indicates the remote output path. In this situation, the suite designer is responsible for creating the directory structure where the output file will be found. Most queuing systems won’t start the job if this directory is absent, and the task may then remain visible as submitted on the ecFlow server side.

  • after sending the job-complete command, the job may copy its output to ECF_NODEHOST, to enable direct access from the ecFlow server.

    When a file is requested from the ecFlow server, delivery is limited to 15k lines, to avoid the server spending too much time serving very large output files.

    ecFlowview can be configured (globally via Edit-Preferences-Server-Option, or locally via the top-node menu -> Options, “Read output and other files from disk, when possible”) to get the best expected behaviour.

  • use an ecf.list file to restrict access to the server, granting read-write or read-only access per user:

    Code Block
    #
    # ecflow_client --help=reloadwsfile
    # ecflow_client --reloadwsfile   # ask the ecFlow server to reload this file
    # $USER   # read-write access, aka $LOGNAME
    # -$USER  # prefix '-' for read-only access
    # export ECF_LISTS=/path/to/file # set before the server starts, to change location or name
    emos
    -rdx
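
    Taken together, the recommendations above might translate into a suite definition along the lines of the sketch below. Every path is an illustrative placeholder, and the suite, family, and task names are invented; adapt them to the local setup.

    ```
    # hypothetical suite.def fragment -- all paths below are placeholders
    suite ops
       edit ECF_HOME    /tmp/ecflow_ops                # dynamic files: jobs and their outputs
       edit ECF_FILES   /home/ops/suites/ops/files     # task wrappers (.ecf scripts)
       edit ECF_INCLUDE /home/ops/suites/ops/include   # head.h / tail.h include files
       edit ECF_OUT     /remote/scratch/ecflow_out     # remote output path, when no shared filesystem
       family main
          task t1
       endfamily
    endsuite
    ```

    With this layout, the static script wrappers under ECF_FILES and ECF_INCLUDE stay separate from the dynamic job and output files under ECF_HOME, which can be tar’ed or cleaned independently.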

...

Server Administration

An ‘admin’ suite will be required:

  • to ensure that the ecFlow log file does not fill up the disk or hit a quota limit, by regularly issuing the command:

    ecflow_client --port=%ECF_PORT% --host=%ECF_NODEHOST% --log=new
    
  • to duplicate the checkpoint file on a remote backup server or a slower long-term archive system, to cope with a disk failure, a problem on the hosting workstation, or a network issue that requires starting the backup server.

  • to maintain a white list file controlling access for read-write or read-only users.
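
The first two duties above can be sketched as a periodic housekeeping script. This is a hedged sketch, not the official admin suite: the default port, the checkpoint file name, and BACKUP_DIR are assumptions, and DRY_RUN=1 (the default here) only prints the commands instead of executing them.

```shell
#!/bin/sh
# Housekeeping sketch for an 'admin' suite task.
# Assumptions (not from the original document): default port 3141,
# checkpoint named <host>.<port>.check under ECF_HOME, BACKUP_DIR path.
DRY_RUN="${DRY_RUN:-1}"
ECF_PORT="${ECF_PORT:-3141}"
ECF_NODEHOST="${ECF_NODEHOST:-localhost}"
CHECKPOINT="${ECF_HOME:-/tmp}/${ECF_NODEHOST}.${ECF_PORT}.check"
BACKUP_DIR="${BACKUP_DIR:-/backup/ecflow}"

# 1. Ask the server to start a fresh log file, so the old one can be
#    archived or removed before it fills the disk or hits a quota.
LOG_CMD="ecflow_client --port=${ECF_PORT} --host=${ECF_NODEHOST} --log=new"

# 2. Duplicate the checkpoint file under a dated name on the backup
#    location (remote server or long-term archive).
BACKUP_CMD="cp ${CHECKPOINT} ${BACKUP_DIR}/$(date +%Y%m%d).check"

if [ "$DRY_RUN" = "1" ]; then
  # Dry run: show what would be executed.
  echo "$LOG_CMD"
  echo "$BACKUP_CMD"
else
  $LOG_CMD
  $BACKUP_CMD
fi
```

Run under cron or as a repeated ecFlow task; set DRY_RUN=0 once the paths match the local installation.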

...