
TEMS is to be retired at the end of October 2021. See more information here.


The filesystems available are HOME, PERM, HPCPERM and SCRATCH, and they are completely isolated from those on other ECMWF platforms in Reading, such as ECGATE or the Cray HPCF.

Filesystems from those platforms are not cross-mounted either. This means that if you need to use data from another ECMWF platform such as ECGATE or the Cray HPCF, you will need to transfer it first using scp or rsync. See HPC2020: File transfers for more information.

| File System | Suitable for ... | Technology | Features | Quota |
|---|---|---|---|---|
| HOME | permanent files, e.g. .profile, utilities, sources, libraries, etc. | NFS | It is backed up. Limited I/O performance | 20 GB |
| PERM | permanent files without the need for automated backups, smaller input files for serial or small processing, std output, etc. | NFS | It is backed up. Limited I/O performance | 20 GB |
| HPCPERM | permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, etc. | NFS | No backup. No automatic deletion | 1 TB |
| SCRATCH | all temporary (large) files. Main storage for your jobs' and experiments' input and output files | Lustre | Automatic deletion to be configured at a later stage. No backup | 4 TB |
| SCRATCHDIR | big temporary data for an individual session or job; not as fast as TMPDIR but higher capacity. Files accessible from all nodes in the cluster | Lustre | Deleted at the end of the session or job. Created per session/job as a subdirectory in SCRATCH | part of the SCRATCH quota |
| TMPDIR | fast temporary data for an individual session or job, small files only. Local to every node | SSD on shared nodes (*f QoSs) | Deleted at the end of the session or job. Created per session/job | 3 GB per session/job by default. Customisable up to 40 GB with --gres=ssdtmp:<size>G |
| | | RAM on exclusive compute nodes (*p QoSs) | Deleted at the end of the session or job. Created per session/job | no limit (maximum memory of the node) |

Environment variables

These filesystems can be conveniently referenced from your session and scripts using the environment variables of the same name: $HOME, $PERM, $HPCPERM, $SCRATCH, $SCRATCHDIR and $TMPDIR.

$TEMP, which in the past was an alias for $SCRATCH, has been deprecated and is no longer defined. Please use $SCRATCH instead.

More capacity for $TMPDIR?

When running on the shared nodes (*f QoSs), you may request more space in the SSD-backed TMPDIR with the extra SBATCH option:

#SBATCH --gres=ssdtmp:<size>G

where <size> is the number of gigabytes, up to 40. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR:

export TMPDIR=$SCRATCHDIR
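The options above can be combined into a minimal job sketch. This is a hypothetical example, not taken from the page: the QoS name and the work-directory layout are assumptions, and the SCRATCHDIR fallback is only there so the snippet also runs outside the cluster, where the system would not have created it per session/job.

```shell
#!/bin/bash
# Hypothetical sketch: request 20 GB of SSD-backed TMPDIR on a shared (*f)
# QoS; the QoS name "nf" is an assumption for illustration.
#SBATCH --qos=nf
#SBATCH --gres=ssdtmp:20G

# Fallback so the sketch runs standalone; on the cluster SCRATCHDIR is
# created per session/job by the system.
SCRATCHDIR="${SCRATCHDIR:-/tmp}"

# If even the 40 GB SSD maximum is not enough, point TMPDIR at SCRATCHDIR:
export TMPDIR="$SCRATCHDIR"

# Use the temporary space, then clean up before the job ends.
workdir="$TMPDIR/myjob.$$"
mkdir -p "$workdir"
echo "temporary work area: $workdir"
rm -rf "$workdir"
```

Cleaning up explicitly is good practice even though both TMPDIR and SCRATCHDIR are deleted at the end of the session or job.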


How much have I used?

You can check your current usage and limits with the "quota" command.

New filesystem structure

You will notice your filesystems now have a flatter and simpler name structure. If you port any scripts or code that had filesystem paths hardcoded from older platforms, please make sure you update them. Where possible, use the environment variables provided, which work on both sides and point to the right location in each case:

| Before | After |
|---|---|
| /home/group/user or /home/ms/group/user | /home/user |
| /perm/group/user or /perm/ms/group/user | /perm/user |
| - | /hpcperm/user |
| /scratch/group/user or /scratch/ms/group/user | /scratch/user |
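As a sketch of the recommended approach (the experiment path is a made-up example), replacing a hardcoded pre-migration path with the corresponding environment variable keeps a script working on both old and new platforms. On the cluster $SCRATCH is preset; the fallback here is only so the snippet runs standalone.

```shell
# Fallback for standalone runs only; on the ECMWF platforms $SCRATCH is set.
SCRATCH="${SCRATCH:-/scratch/${USER:-someuser}}"

# Before (hardcoded, old structure):
#   DATA_DIR=/scratch/ms/mygroup/myuser/exp1
# After (portable):
DATA_DIR="$SCRATCH/exp1"
echo "$DATA_DIR"
```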

Special directories on RAM

On TEMS, some special directories are not disk-based but actually mapped into the node's main memory. While no limits are set on exclusive nodes, on shared nodes a single application or user could consume a great portion of the total memory available or fill up those spaces, impacting others. This is why the following limits are set:

| Directory | New size/limit |
|---|---|
| /tmp | 64 MB per user |
| /var/tmp | 64 MB per user |
| /dev/shm | 1 GB per user |
| $XDG_RUNTIME_DIR | 64 MB per user |

Users should instead use the general purpose file systems available, and in particular $TMPDIR or $SCRATCHDIR for temporary storage per session or job. See HPC2020: Filesystems.
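A minimal sketch of that advice (the file name is a made-up example): create per-job scratch files under $TMPDIR rather than /tmp or /dev/shm, so they land on the larger, per-job space instead of the memory-backed directories. The fallback is only so the snippet runs outside the cluster.

```shell
# Fallback for standalone runs; on the cluster TMPDIR is created per
# session/job by the system.
TMPDIR="${TMPDIR:-/tmp}"

# Create the temporary file under $TMPDIR instead of /tmp or /dev/shm.
tmpfile=$(mktemp "$TMPDIR/mydata.XXXXXX")
echo "scratch file: $tmpfile"
rm -f "$tmpfile"
```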

Automatic cleanup

Any data left on those spaces will be automatically deleted at the end of the job or session.
