Warning

Due to the unavailability of NFS-based filesystems at this stage, HOME is temporarily set up on Lustre, and PERM is not yet available.

The filesystems available are HOME, PERM, HPCPERM and SCRATCH. They are completely isolated from those on other ECMWF platforms in Reading, such as ECGATE or the Cray HPCF.

...

For each file system, its intended use, underlying technology, main features and quota are listed below.

HOME

Suitable for: permanent files, e.g. profile, utilities, sources, libraries, etc.
Technology: NFS
Features:
  • It is backed up.
  • Snapshots available. See HPC2020: Recovering data from snapshots.
  • Throttled I/O bandwidth from parallel compute nodes (lower performance).
Quota:
  • 100 GB for ECMWF staff
  • 10 GB for Member State users

PERM

Suitable for: permanent files without the need for automated backups, smaller input files for serial or small processing, std output, etc.
Technology: NFS
Features:
  • No backup.
  • Snapshots available. See HPC2020: Recovering data from snapshots.
  • Throttled I/O bandwidth from parallel compute nodes (lower performance).
  • DO NOT USE IN PARALLEL APPLICATIONS.
  • DO NOT USE FOR JOB STANDARD OUTPUT/ERROR.
Quota:
  • 10 TB for ECMWF staff
  • 500 GB for Member State users

HPCPERM

Suitable for: permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, std output, etc.
Technology: Lustre
Features:
  • No backup.
  • No snapshots.
  • No automatic deletion.
Quota:
  • 1 TB for ECMWF staff
  • 100 GB for Member State users without HPC access
  • 1 TB for Member State users with HPC access

SCRATCH

Suitable for: all temporary (large) files; the main storage for your jobs' and experiments' input and output files.
Technology: Lustre
Features:
  • Automatic deletion after 30 days of last access (implemented since 27 March 2023).
  • No snapshots.
  • No backup.
Quota:
  • 50 TB for ECMWF staff and Member State users with HPC access
  • 2 TB for users without HPC access

SCRATCHDIR

Suitable for: big temporary data for an individual session or job; not as fast as TMPDIR, but higher capacity. Files are accessible from the whole cluster.
Technology: Lustre
Features:
  • Deleted at the end of the session or job.
  • Created per session/job as a subdirectory in SCRATCH.
Quota: part of the SCRATCH quota

TMPDIR

Suitable for: fast temporary data for an individual session or job, small files only. Local to every node.
Technology:
  • SSD on shared (GPIL) nodes (*f QoSs)
  • RAM on exclusive parallel compute nodes (*p QoSs)
Features:
  • Deleted at the end of the session or job.
  • Created per session/job.
Quota:
  • On SSD: 3 GB per session/job by default; customisable up to 40 GB with --gres=ssdtmp:<size>G
  • In RAM: no limit (up to the maximum memory of the node)
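
As an illustration of how these file systems are typically combined, below is a minimal sketch of a batch job that reads small permanent input from PERM, does its main I/O in SCRATCH, and uses the node-local TMPDIR for small fast temporary files. The QoS, paths and file names are hypothetical:

No Format
#!/bin/bash
#SBATCH --job-name=fs-example
#SBATCH --qos=nf                          # hypothetical *f QoS (shared GPIL nodes)

# Small permanent input kept in PERM (hypothetical path); remember PERM must
# not be used by parallel applications or for job standard output/error.
INPUT=$PERM/config/settings.txt

# Large job output belongs in SCRATCH.
WORKDIR=$SCRATCH/experiments/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# Small scratch files go to the fast, node-local TMPDIR
# (-T sets the directory GNU sort uses for its temporary files).
sort -T "$TMPDIR" "$INPUT" -o settings.sorted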

...

Tip: More capacity for $TMPDIR?

When running on the shared GPIL nodes (*f QoSs), you may request more space in the SSD-backed TMPDIR with the following extra SBATCH option:

No Format
#SBATCH --gres=ssdtmp:<size>G

Here <size> is a number of gigabytes, up to 40. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR:

No Format
export TMPDIR=$SCRATCHDIR
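
Putting the two together, here is a minimal sketch of a job that requests a larger SSD-backed TMPDIR, with the SCRATCHDIR fallback left as a comment; the QoS and program name are hypothetical:

No Format
#!/bin/bash
#SBATCH --qos=nf                  # hypothetical *f QoS (shared GPIL nodes)
#SBATCH --gres=ssdtmp:20G         # enlarge TMPDIR from the default 3 GB to 20 GB

# If even the 40 GB maximum is not enough, fall back to the Lustre-based
# SCRATCHDIR, which is slower but only bounded by the SCRATCH quota:
# export TMPDIR=$SCRATCHDIR

my_io_heavy_tool --tmpdir "$TMPDIR"    # hypothetical program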


...

Tip: How much have I used?

You can check your current usage and limits with the "quota" command.
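
For example, from any session on the system:

No Format
quota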

...

Filesystem structure

You will notice that your filesystems now have a flatter and simpler name structure. If you port any scripts or code that had filesystem paths hardcoded from older platforms, please make sure you update them. Where possible, use the environment variables provided, which should work on both sides, pointing to the right location in each case:

...
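
As an illustration of the kind of change this implies when porting, consider the sketch below; the old hardcoded path is purely illustrative:

No Format
# Before: path hardcoded from an older platform (illustrative only)
#   cp results.grib /scratch/ms/mygroup/myuser/results/
# After: use the environment variable, which resolves correctly everywhere
cp results.grib "$SCRATCH/results/"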

Users should instead use the general purpose file systems available, and in particular, $TMPDIR or $SCRATCHDIR for temporary storage per session or job. See HPC2020: Filesystems.

Note: Automatic cleanup

Any data left on those spaces will be automatically deleted at the end of the job or session.
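
Anything worth keeping must therefore be copied out before the job finishes. A minimal sketch, with a hypothetical program and output layout:

No Format
#!/bin/bash
# Copy results out of the job-private TMPDIR before it is deleted.
OUTDIR=$SCRATCH/results/$SLURM_JOB_ID    # hypothetical destination
mkdir -p "$OUTDIR"
trap 'cp -r "$TMPDIR"/. "$OUTDIR/"' EXIT

my_model --workdir "$TMPDIR"             # hypothetical program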

Special Filesystems

Below is a list of the specialised filesystems on Atos:

Directory: /ec/vol/msbackup
Content: Backup of the conventional observations for the last four days, one file per synoptic cycle.
Comment: Available on both ecs and hpc.



Project Filesystems

Certain projects with special requirements have dedicated filesystems or volumes that are automatically mounted under /ec/vol/<project_name> on Atos GPIL shared nodes and VDI on first access. They are not available on Atos HPCF parallel compute nodes for performance reasons.

Those project volumes share the same backend as PERM, so similar features apply:

  • NFS based
  • No backups

The quota command will not show the limits for those filesystems. If you have any queries about them, please do raise an issue via our ECMWF Support Portal.