The filesystems available are HOME, PERM, HPCPERM and SCRATCH. They are completely isolated from those on other ECMWF platforms in Reading, such as ECGATE or the Cray HPCF, and filesystems from those platforms are not cross-mounted either. This means that if you need to use data from another ECMWF platform such as ECGATE or the Cray HPCF, you will need to transfer it first using scp or rsync. See HPC2020: File transfers for more information.
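For example, a one-off copy or a recursive synchronisation from another platform could look like the sketch below; the host name, user and paths are placeholders, and HPC2020: File transfers describes the actual hostnames and recommended options.

# Illustrative sketch only: "other-host", "user" and the paths are placeholders.
# Copy a single file into your SCRATCH with scp:
scp user@other-host:/path/to/data.grib $SCRATCH/
# Or synchronise a whole directory with rsync (restartable, preserves timestamps):
rsync -av user@other-host:/path/to/dataset/ $SCRATCH/dataset/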
File System | Suitable for ... | Technology | Features | Quota |
---|---|---|---|---|
HOME | Permanent files, e.g. profile, utilities, sources, libraries, etc. | NFS | It is backed up. Snapshots available (see HPC2020: Recovering data from snapshots). Throttled I/O bandwidth from parallel compute nodes (lower performance). | 10 GB |
PERM | Permanent files without the need for automated backups, smaller input files for serial or small processing, etc. | NFS | No backup. Snapshots available (see HPC2020: Recovering data from snapshots). Throttled I/O bandwidth from parallel compute nodes (lower performance). DO NOT USE IN PARALLEL APPLICATIONS. DO NOT USE FOR JOB STANDARD OUTPUT/ERROR. | 500 GB |
HPCPERM | Permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, standard output, etc. | Lustre | No backup. No snapshots. No automatic deletion. | |
SCRATCH | All temporary (large) files. Main storage for your jobs' and experiments' input and output files. | Lustre | Automatic deletion after 30 days since last access (implemented since 27 March 2023). No snapshots. No backup. | |
SCRATCHDIR | Big temporary data for an individual session or job; not as fast as TMPDIR but higher capacity. Files accessible from the whole cluster. | Lustre | Deleted at the end of the session or job. Created per session/job as a subdirectory in SCRATCH. | Part of the SCRATCH quota |
TMPDIR | Fast temporary data for an individual session or job, small files only. Local to every node. | SSD on shared (GPIL) nodes (*f QoSs); RAM on exclusive parallel compute and GPU nodes (*p and *g QoSs) | Deleted at the end of the session or job. Created per session/job. To request more space in your jobs you can use the Slurm directive described in "More capacity for $TMPDIR?" below. For ecinteractive and JupyterHub sessions, space and limits are shared with LOCALSSD. | SSD-backed: differs between ECS and HPC (see below). RAM-backed on exclusive nodes: no limit (maximum memory of the node). |
LOCALSSD | Fast, local, non-critical data and files used in ecinteractive and JupyterHub sessions. Its contents are automatically archived when the session finishes so users can restore them on their next session and carry on where they left off. Can be used for development/compilation of projects interactively. See HPC2020: Local SSD storage for interactive sessions for more information on how to use this feature. | SSD on shared (GPIL) nodes, for ecinteractive and JupyterHub sessions only (ni QoS). GPU nodes are excluded. | Archived automatically at the end of the session or job; recover manually on the next session (see HPC2020: Local SSD storage for interactive sessions). | Differs between ECS and HPC; space and limits shared with TMPDIR. |
Environment variables
Those filesystems can be conveniently referenced from your session and scripts using the environment variables of the same name: $HOME, $PERM, $HPCPERM, $SCRATCH, $SCRATCHDIR, $TMPDIR and $LOCALSSD.
$TEMP, which in the past was an alias for $SCRATCH, has been deprecated and is no longer defined. Please use $SCRATCH instead.
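For instance, a script can build all of its paths from these variables rather than hardcoding any location; the directory and file names below are purely illustrative:

# Illustrative use of the filesystem environment variables; names are made up.
mkdir -p $PERM/myproject                 # smaller permanent inputs
mkdir -p $HPCPERM/myproject/climfiles    # bigger permanent files for parallel runs
cd $SCRATCH                              # main storage for job input/output
cp $HOME/scripts/run.sh .                # sources and utilities kept in HOME
echo "Node-local temporary space for this session/job: $TMPDIR"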
More capacity for $TMPDIR?
When running on the shared GPIL nodes (*f and *i QoSs), you may request a bigger space in the SSD-backed TMPDIR with the extra SBATCH option:
#SBATCH --gres=ssdtmp:<size>G
With <size> being a number up to 20 GB on ECS and 100 GB on HPCF. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR:
export TMPDIR=$SCRATCHDIR
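Putting it together, a minimal batch job requesting a larger SSD-backed TMPDIR could look like this sketch; the job name and commands are illustrative, and the requested size must stay within the limits above:

#!/bin/bash
# Minimal sketch: request 20 GB of SSD-backed TMPDIR for this job.
#SBATCH --job-name=tmpdir-demo
#SBATCH --gres=ssdtmp:20G

# If even more temporary space is needed, fall back to SCRATCHDIR instead:
# export TMPDIR=$SCRATCHDIR

cd $TMPDIR
df -h .    # show the space actually available to this job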
Note that in interactive, non-GPU sessions using ecinteractive and JupyterHub, the local disk space is shared between TMPDIR and LOCALSSD.
How much have I used?
You can check your current usage and limits with the "quota" command.
Filesystem structure
You will notice that the filesystems now have a flat name structure. If you port any scripts or code from older platforms that had filesystem paths hardcoded, please make sure you update them. Where possible, use the environment variables provided, which point to the right location in each case (see the example after the table below):
Before | After |
---|---|
/home/group/user or /home/ms/group/user | /home/user |
/perm/group/user or /perm/ms/group/user | /perm/user |
- | /hpcperm/user |
/scratch/group/user or /scratch/ms/group/user | /scratch/user |
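As an example, a hardcoded path from an older platform can be replaced with the corresponding variable so the script no longer depends on the directory layout; the group, user and experiment names below are placeholders:

# Before (hardcoded layout from an older platform; "ab" and "abc" are placeholders):
#   INPUT=/scratch/ms/ab/abc/experiment1/input.grib
# After, using the environment variables instead:
INPUT=$SCRATCH/experiment1/input.grib
OUTPUT=$PERM/experiment1/results.txt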
Special directories on RAM
Some special directories are not disk-based but actually mapped into the node's main memory. There are no limits set on exclusive nodes running on parallel queues. However, when running on shared nodes (GPILs) on fractional or interactive queues, this could lead to a single application or user exhausting all the memory of the node and thus impacting others. This is why the following limits are set:
Directory | Limit |
---|---|
/tmp | 428 GB (80% of available memory) per user's session |
/var/tmp | 428 GB (80% of available memory) per user's session |
/dev/shm | 428 GB (80% of available memory) per user's session |
$XDG_RUNTIME_DIR | 64 MB per user's session |
Users should instead use the general purpose file systems available, and in particular, $TMPDIR or $SCRATCHDIR for temporary storage per session or job.
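Many tools let you redirect their temporary files explicitly, so pointing them at $TMPDIR or $SCRATCHDIR is usually a one-line change; the example below uses GNU sort, and the file names are illustrative:

# Send a tool's scratch files to the per-job spaces instead of /tmp or /dev/shm.
sort -T $TMPDIR big_input.txt > sorted.txt        # small temporary data on the local SSD
sort -T $SCRATCHDIR big_input.txt > sorted.txt    # larger temporary data on Lustre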
Automatic cleanup
Any data left on those spaces will be automatically deleted at the end of the job or session.
Special Filesystems
Below is a list of the specialised filesystems on Atos:
Directory | Content | Comment |
---|---|---|
/ec/vol/msbackup | Backup of the conventional observations for the last four days. One file per synoptic cycle. | Available on both ECS and HPC. |
2 Comments
Luke Jones
It's interesting I can write far more to /var/tmp RAM than to $TMPDIR SSD. If I'm on a shared node, retrieving GRIB from MARS and processing it to create a much smaller permanent file, which do you suggest I use for the temporary GRIB storage? It sounds like it would be fastest to use /var/tmp but would it be inconsiderate use of RAM?
Xavier Abellan
Luke Jones, I would encourage you to use $TMPDIR if the sizes are small. As stated above, when running on the shared GPIL nodes (*f QoSs), you may request a bigger space in the SSD-backed TMPDIR with the extra SBATCH option:
With <size> being a number up to 40 GB. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR:
We do not recommend using those special directories on RAM because of the impact they may have on others sharing the node with you.