The filesystems available are HOME, PERM, HPCPERM and SCRATCH. They are completely isolated from those on other ECMWF platforms in Reading, such as ECGATE or the Cray HPCF, and filesystems from those platforms are not cross-mounted either. This means that if you need to use data from another ECMWF platform such as ECGATE or the Cray HPCF, you will need to transfer it first using scp or rsync. See AG: File transfers for more information.
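
For example, a minimal transfer sketch (user, host and paths are placeholders, not real ECMWF endpoints; adjust them to your case):

# copy a single file from the remote platform into your SCRATCH
scp <user>@<remote-host>:/path/to/file.tar.gz $SCRATCH/
# or synchronise a whole directory, copying only what has changed
rsync -av <user>@<remote-host>:/path/to/dir/ $SCRATCH/dir/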

HOME

Suitable for: permanent files, e.g. profile, utilities, sources, libraries, etc.

Technology: NFS

Features:
  • Backed up
  • Snapshots available. See AG: Recovering data from snapshots
  • Throttled I/O bandwidth from parallel compute nodes (lower performance)

Quota: 10 GB

PERM

Suitable for: permanent files without the need for automated backups, smaller input files for serial or small processing, etc.

Technology: NFS

Features:
  • No backup
  • Snapshots available. See AG: Recovering data from snapshots
  • Throttled I/O bandwidth from parallel compute nodes (lower performance)
  • DO NOT USE IN PARALLEL APPLICATIONS
  • DO NOT USE FOR JOB STANDARD OUTPUT/ERROR

Quota: 500 GB

HPCPERM

Suitable for: permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, standard output, etc.

Technology: Lustre

Features:
  • No backup
  • No snapshots
  • No automatic deletion

Quota:
  • 100 GB for users without HPC access
  • 1 TB for users with HPC access

SCRATCH

Suitable for: all temporary (large) files. Main storage for the input and output files of your jobs and experiments.

Technology: Lustre

Features:
  • Automatic deletion after 30 days of last access (implemented since 27 March 2023)
  • No snapshots
  • No backup

Quota:
  • 50 TB for users with HPC access
  • 2 TB for users without HPC access

SCRATCHDIR

Suitable for: big temporary data for an individual session or job; not as fast as TMPDIR but with higher capacity. Files are accessible from all nodes of the cluster.

Technology: Lustre

Features:
  • Deleted at the end of the session or job
  • Created per session/job as a subdirectory in SCRATCH

Quota: part of the SCRATCH quota

TMPDIR

Suitable for: fast temporary data for an individual session or job, small files only. Local to every node.

Technology: NVMe SSD

Features:
  • Deleted at the end of the session or job
  • Created per session/job

Quota:
  • 3 GB per session/job by default
  • Customisable up to 100 GB
  • Shared quota with LOCALSSD

To request more space in your jobs you can use the Slurm directive:

--gres=ssdtmp:<size>G

For ecinteractive and JupyterHub sessions, space and limits are shared with LOCALSSD.

LOCALSSD

Suitable for: fast, local, non-critical data and files used in ecinteractive and JupyterHub sessions. Its contents are automatically archived when the session finishes so users can restore them in their next session and carry on where they left off. Can be used for interactive development/compilation of projects. See AG: Local SSD storage for interactive sessions for more information on how to use this feature.

Technology: NVMe SSD

Features:
  • Archived automatically at the end of the session or job into $HPCPERM/.DO_NOT_DELETE_LOCALSSD_ARCHIVE
  • Recover manually in the next session with: ec_restore_local_ssd

Quota: space and limits shared with TMPDIR
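
As an illustration of the intended LOCALSSD workflow (the project name and build commands below are purely illustrative; ec_restore_local_ssd and the archive location come from the table above):

# in an ecinteractive or JupyterHub session: work in the fast local SSD space
cd $LOCALSSD
tar xf $HPCPERM/myproject.tar     # illustrative: unpack some sources
cd myproject && make              # compile interactively
# when the session ends, $LOCALSSD is archived into $HPCPERM/.DO_NOT_DELETE_LOCALSSD_ARCHIVE
# in your next session, restore it and carry on where you left off
ec_restore_local_ssd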

Environment variables

Those filesystems can be conveniently referenced from your session and scripts using the environment variables of the same name: $HOME, $PERM, $HPCPERM, $SCRATCH, $SCRATCHDIR, $TMPDIR and $LOCALSSD.
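
For example, you can use the variables directly in your shell or scripts instead of spelling out paths (the directory names below are just illustrative):

echo "HOME=$HOME PERM=$PERM HPCPERM=$HPCPERM SCRATCH=$SCRATCH"
mkdir -p $SCRATCH/experiments/exp01     # large job input/output goes to SCRATCH
mkdir -p $HPCPERM/climate_files         # bigger permanent input files go to HPCPERM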

More capacity for $TMPDIR?

When running on the shared GPIL nodes (*f and *i QoSs), you may request more space in the SSD-backed TMPDIR with the extra SBATCH option:

#SBATCH --gres=ssdtmp:<size>G

Here <size> is a number of up to 20 GB on ECS and 100 GB on HPCF. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR instead:

export TMPDIR=$SCRATCHDIR

Note that in interactive, non-GPU sessions using ecinteractive and JupyterHub, the local disk space is shared between TMPDIR and LOCALSSD. A minimal job sketch combining the options above is shown below.
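
The following is only a sketch: the QoS, the 20 GB size and my_program are placeholders to adapt to your own case.

#!/bin/bash
#SBATCH --qos=nf               # e.g. a shared *f QoS; adjust to your platform and needs
#SBATCH --gres=ssdtmp:20G      # ask for 20 GB of SSD-backed TMPDIR
# if that is still not enough, uncomment the next line to use SCRATCHDIR instead
# export TMPDIR=$SCRATCHDIR
cd $TMPDIR
my_program > output.log        # my_program is a placeholder for your own executable
cp output.log $SCRATCH/        # TMPDIR is deleted at the end of the job, so copy results out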


How much have I used?

You can check your current usage and limits with the "quota" command.
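
# list your current usage and limits on the filesystems above
quota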

Filesystem structure

You will notice that your filesystems now have a flat name structure. If you port any scripts or code with filesystem paths hardcoded from older platforms, please make sure you update them. Where possible, use the environment variables provided, which should work on both sides, pointing to the right location in each case:

Before → After
  • /home/group/user or /home/ms/group/user → /home/user
  • /perm/group/user or /perm/ms/group/user → /perm/user
  • (none) → /hpcperm/user
  • /scratch/group/user or /scratch/ms/group/user → /scratch/user
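
For instance, a hardcoded path from an old script would become the following (group, user and directory names are illustrative):

# before, on older platforms
# DATADIR=/scratch/ms/mygroup/myuser/experiment1
# after, on this platform - and portable, thanks to the environment variable
DATADIR=$SCRATCH/experiment1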

Special directories on RAM

Some special directories are not disk-based but actually mapped into the node's main memory. They are unique for every session, and are limited by the memory resources requested in the job.

Directories:
  • /tmp
  • /var/tmp
  • /dev/shm
  • $XDG_RUNTIME_DIR

Users should instead use the general purpose file systems available, in particular $TMPDIR or $SCRATCHDIR, for temporary storage per session or job.
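
Many tools let you choose where their temporary files go; pointing them at $TMPDIR keeps them off the memory-backed directories. An illustrative example with GNU sort:

# write sort's temporary files to the per-job TMPDIR instead of /tmp
sort --temporary-directory=$TMPDIR bigfile.txt > sorted.txt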

Automatic cleanup

Any data left on those spaces will be automatically deleted at the end of the job or session.
