...

File System

For each file system below: what it is suitable for, the underlying technology, its main features, and its quota.

HOME

Suitable for: permanent files, e.g. profile, utilities, sources, libraries, etc.

Technology: NFS

Features:
  • Backed up.
  • Snapshots available. See HPC2020: Recovering data from snapshots.
  • Throttled I/O bandwidth from parallel compute nodes (lower performance).

Quota:
  • 100 GB for ECMWF staff
  • 10 GB for Member State users


PERM

Suitable for: permanent files without the need for automated backups, smaller input files for serial or small processing, etc.

Technology: NFS

Features:
  • No backup.
  • Snapshots available. See HPC2020: Recovering data from snapshots.
  • Throttled I/O bandwidth from parallel compute nodes (lower performance).
  • DO NOT USE IN PARALLEL APPLICATIONS.
  • DO NOT USE FOR JOB STANDARD OUTPUT/ERROR.

Quota:
  • 10 TB for ECMWF staff
  • 500 GB for Member State users


HPCPERM

Suitable for: permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, standard output, etc.

Technology: Lustre

Features:
  • No backup.
  • No snapshots.
  • No automatic deletion.

Quota:
  • 1 TB for ECMWF staff
  • 100 GB for Member State users without HPC access
  • 1 TB for Member State users with HPC access

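Since these file systems have hard quotas, it helps to see which directories consume the most space. A minimal sketch using standard du; the target path is a placeholder, and it assumes environment variables such as $HOME (or, on this system, variables pointing at PERM or HPCPERM) identify the space you want to inspect:

```shell
#!/bin/bash
# Summarise the disk usage of each entry under a target directory,
# largest first, to find what is filling the quota.
# The default target ($HOME) is a placeholder; pass any directory.
target="${1:-$HOME}"
du -sh "$target"/* 2>/dev/null | sort -rh | head -n 5
```

Passing the top of a large Lustre tree can take a while; running it on a subdirectory first narrows the search more quickly.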

SCRATCH

Suitable for: all temporary (large) files; the main storage for your jobs' and experiments' input and output files.

Technology: Lustre

Features:
  • Automatic deletion of files 30 days after last access (in place since 27 March 2023).
  • No snapshots.
  • No backup.

Quota:
  • 50 TB for ECMWF staff and Member State users with HPC access
  • 2 TB for users without HPC access

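To anticipate the 30-day automatic deletion on SCRATCH, you can list files whose last access time is already older than 30 days with standard find. A minimal sketch; it assumes $SCRATCH points at your scratch space, and any directory can be passed instead:

```shell
#!/bin/bash
# List files under a directory whose last access time is more than
# 30 days ago, i.e. candidates for SCRATCH's automatic deletion.
# $SCRATCH is an assumed default; pass a directory to override it.
dir="${1:-$SCRATCH}"
find "$dir" -type f -atime +30 -print
```

Anything this prints that you still need should be moved to HPCPERM (or read/touched again) before the cleanup reaches it.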

SCRATCHDIR

Suitable for: big temporary data for an individual session or job; not as fast as TMPDIR, but higher capacity. Files are accessible from the whole cluster.

Technology: Lustre

Features:
  • Deleted at the end of the session or job.
  • Created per session/job as a subdirectory in SCRATCH.

Quota: part of the SCRATCH quota.

TMPDIR

Suitable for: fast temporary data for an individual session or job, small files only. Local to every node.

Technology:
  • SSD on shared (GPIL) nodes (*f QoSs)
  • RAM on exclusive parallel compute nodes (*p QoSs)

Features:
  • Deleted at the end of the session or job.
  • Created per session/job.

Quota:
  • On shared (GPIL) nodes: 3 GB per session/job by default; customisable up to 40 GB with --gres=ssdtmp:<size>G.
  • On exclusive parallel compute nodes: no limit (up to the maximum memory of the node).

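The --gres=ssdtmp:<size>G option is a Slurm generic-resource request, so it goes in the batch script header. A minimal sketch, where the QoS name and the 10 GB size are illustrative assumptions rather than recommended values:

```shell
#!/bin/bash
# Hypothetical Slurm batch script requesting 10 GB of node-local
# SSD space for TMPDIR on a shared (GPIL) node. The QoS name and
# size below are assumptions for illustration.
#SBATCH --qos=nf
#SBATCH --gres=ssdtmp:10G

# TMPDIR is created per session/job; fall back to /tmp when the
# script is run outside Slurm.
workdir="${TMPDIR:-/tmp}"
cd "$workdir" || exit 1
echo "using temporary space: $workdir"
```

On *p QoSs the same script would use node RAM for TMPDIR instead, so the --gres request is only meaningful on the shared nodes.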
...