...

File System

Suitable for ...

HOME — permanent files, e.g. profile, utilities, sources, libraries, etc.

Technology: Lustre (on ws1 and ws2)

  • No backup
  • No snapshots
  • No automatic deletion
  • Unthrottled I/O bandwidth
  • Not accessible from outside the HPCF

Quota: 100 GB

TCWORK — permanent large files; the main storage for your jobs' and experiments' input and output files.

Technology: Lustre (on ws1 and ws2)

  • No backup
  • No snapshots
  • No automatic deletion
  • Unthrottled I/O bandwidth
  • Not accessible from outside the HPCF

Quota: 50 TB

SCRATCHDIR — big temporary data for an individual session or job; not as fast as TMPDIR, but higher capacity. Files are accessible from all cluster nodes.

Technology: Lustre (on ws1 and ws2)

  • Created per session or job
  • Deleted at the end of the session or job
  • Not accessible from outside the HPCF

Quota: part of the TCWORK quota

TMPDIR — fast temporary data for an individual session or job, small files only. Local to every node.

  • Created per session or job
  • Deleted at the end of the session or job
  • Not accessible from outside the compute node

Technology and quota:

  • SSD on shared nodes (*f QoSs): 3 GB per session/job by default; customisable up to 40 GB with --gres=ssdtmp:<size>G
  • RAM on exclusive compute nodes (*p QoSs): no limit (up to the maximum memory of the node)
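As an illustration, a batch job on a shared (*f) QoS could combine these spaces as follows. This is a minimal sketch: the QoS name and the requested size are example values, and the workload lines are placeholders.

```shell
#!/bin/bash
#SBATCH --qos=nf                  # a shared QoS (*f); example value
#SBATCH --gres=ssdtmp:20G         # enlarge the SSD-backed TMPDIR from the 3 GB default to 20 GB

# Small, I/O-intensive temporary files go to the node-local SSD:
cd $TMPDIR

# Big temporary files visible from all cluster nodes go to SCRATCHDIR on Lustre,
# while permanent results belong in TCWORK:
ls $SCRATCHDIR
ls $TCWORK
```

Remember that both TMPDIR and SCRATCHDIR are deleted at the end of the job, so copy anything you want to keep to TCWORK before the job finishes.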

...

  • The name of the machine will be either ecflow-tc2-zid-number (old name scheme) or ecft-zid-number (new name scheme). If you don't have a server yet, please raise an issue through the ECMWF support portal requesting one.
  • The HOME on the VM running the server is separate from both of the $HOMEs on the HPCF (which of those you see depends on the STHOST selected).
  • For your convenience, the ecFlow server's HOME can be accessed directly from any Atos HPCF node at /home/zid.
  • Keep the suite files (.ecf files and headers) on the ecFlow server's HOME, and use the native Lustre filesystems of the corresponding STHOST as working directories for your jobs.
  • You may also want to send your jobs' standard output and error to the server's HOME, which makes monitoring with ecFlowUI easier. Otherwise, if your job output goes to a Lustre-based filesystem, you may need to run a log server on the HPCF (using the hpc-log node).
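A suite definition following this layout might set its paths along these lines. This is an illustrative sketch only: the suite name and the directory names under /home/zid are hypothetical, while ECF_FILES, ECF_INCLUDE and ECF_HOME are standard ecFlow variables.

```
suite my_suite                                  # hypothetical suite name
  edit ECF_FILES   /home/zid/my_suite/ecf      # .ecf scripts on the server's HOME
  edit ECF_INCLUDE /home/zid/my_suite/include  # headers on the server's HOME
  edit ECF_HOME    /home/zid/my_suite/out      # job stdout/stderr, directly visible from ecFlowUI
  task t1
endsuite
```

With this arrangement the jobs themselves still write their working data to the Lustre filesystems of the selected STHOST; only the scripts, headers and job logs live on the server's HOME.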

...