...
| File System | Suitable for ... | Technology | Features | Quota |
|---|---|---|---|---|
| HOME | Permanent files, e.g. profile, utilities, sources, libraries, etc. | NFS | Backed up.<br>Snapshots available, see AG: Recovering data from snapshots.<br>Throttled I/O bandwidth from parallel compute nodes (lower performance). | |
| PERM | Permanent files without the need for automated backups, smaller input files for serial or small processing, etc. | NFS | No backup.<br>Snapshots available, see AG: Recovering data from snapshots.<br>Throttled I/O bandwidth from parallel compute nodes (lower performance).<br>DO NOT USE IN PARALLEL APPLICATIONS.<br>DO NOT USE FOR JOB STANDARD OUTPUT/ERROR. | |
| HPCPERM | Permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, standard output, etc. | Lustre | No backup.<br>No snapshots.<br>No automatic deletion. | |
| SCRATCH | All temporary (large) files. Main storage for the input and output files of your jobs and experiments. | Lustre | Automatic deletion after 30 days since last access (in place since 27 March 2023).<br>No snapshots.<br>No backup. | |
| SCRATCHDIR | Big temporary data for an individual session or job; not as fast as TMPDIR, but higher capacity. Files accessible from the whole cluster. | Lustre | Deleted at the end of the session or job.<br>Created per session/job as a subdirectory in SCRATCH. | Part of the SCRATCH quota |
| TMPDIR | Fast temporary data for an individual session or job, small files only. Local to every node. | NVMe SSD | Deleted at the end of the session or job.<br>Created per session/job.<br>To request more space in your jobs you can use the Slurm directive shown in the tip below.<br>For ecinteractive and JupyterHub sessions, space and limits are shared with LOCALSSD. | |
| LOCALSSD | | NVMe SSD | Archived automatically at the end of the session or job into …<br>Recover manually on the next session with … | Space and limits shared with TMPDIR |
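To make the roles concrete, here is a minimal job-script sketch combining two of these spaces. The job name, the experiment paths, and the `my_model` program are hypothetical; only the behaviour of `$TMPDIR` and `$SCRATCH` comes from the table above:

```bash
#!/bin/bash
#SBATCH --job-name=fs-demo          # hypothetical job name
#SBATCH --output=fs-demo.%j.out     # hypothetical output file name

# Fast, node-local temporary space: small working files go to TMPDIR.
WORKDIR=$TMPDIR/work
mkdir -p "$WORKDIR"

# Large inputs and outputs live on SCRATCH, the main storage for jobs.
cp "$SCRATCH/exp1/input.dat" "$WORKDIR/"
./my_model "$WORKDIR/input.dat" > "$SCRATCH/exp1/output.dat"

# No cleanup needed: TMPDIR is deleted automatically at the end of the job.
```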
> **Tip:** Those filesystems can be conveniently referenced from your session and scripts using the environment variables of the same name: `$HOME`, `$PERM`, `$HPCPERM`, `$SCRATCH`, `$SCRATCHDIR`, `$TMPDIR`, and `$LOCALSSD`.
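For example, a quick check from a login session (a sketch; the loop simply prints where each variable points for your user):

```bash
# Bash indirect expansion: print the location each variable resolves to.
for v in HOME PERM HPCPERM SCRATCH; do
    echo "$v -> ${!v}"
done
```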
> **Tip:** When running on the shared GPIL nodes (`*f` and `*i` QoSs), you may request a bigger space in the SSD-backed TMPDIR with an extra SBATCH option, with `<size>` being a number up to 20 GB on ECS and 100 GB on HPCF. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR, as shown in the sketch below.
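The exact option was not reproduced above; the following sketch assumes the `ssdtmp` generic resource name from ECMWF's HPC2020 documentation, so verify it before relying on it:

```bash
# In the job script header: request a bigger SSD-backed TMPDIR
# (<size> up to 20 on ECS, 100 on HPCF). The "ssdtmp" gres name is
# an assumption; double-check it against the documentation.
#SBATCH --gres=ssdtmp:<size>G

# If that is still not enough, point TMPDIR to the Lustre-based SCRATCHDIR:
export TMPDIR=$SCRATCHDIR
```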
> **Tip:** You can check your current usage and limits with the `quota` command on hpc-login.
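For example, from a shell on hpc-login:

```bash
# Report current usage and limits on your filesystems.
quota
```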
## Filesystem structure

You will notice that your filesystems now have a flat name structure. If you port any scripts or code with filesystem paths hardcoded from older platforms, please make sure you update them. Where possible, try and use the environment variables provided, which should work on both sides, pointing to the right location in each case:
...
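As an illustration, a hypothetical port; the old hardcoded path below is invented for the example:

```bash
# Before: path hardcoded for an older platform (illustrative only).
# DATA_DIR=/path/on/old/platform/$USER/exp1

# After: resolved through the environment variable, portable on both sides.
DATA_DIR=$SCRATCH/exp1
mkdir -p "$DATA_DIR"
```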
## Special directories on RAM

Some special directories are not disk-based, but actually mapped into the node's main memory. They are unique to every session, and are limited by the memory resources requested in the job.
...
> **Note:** Any data left in those spaces will be automatically deleted at the end of the job or session.
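As a sketch of the memory accounting: on Linux, memory-mapped directories behave like tmpfs, so every byte written counts against the job's memory request. Whether the standard `/dev/shm` mount is among the directories listed above is an assumption here:

```bash
# Writing 100 MiB consumes ~100 MiB of the job's memory allocation.
dd if=/dev/zero of=/dev/shm/demo.bin bs=1M count=100   # /dev/shm assumed
rm -f /dev/shm/demo.bin                                # memory released
```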
## Project Filesystems

Certain projects with special requirements have dedicated filesystems or volumes that are automatically mounted under … Those project volumes share the same backend as PERM, so similar features apply (see the PERM row in the table above).

The `quota` command will not show the limits for those filesystems. If you have any queries about them, please do raise an issue via our ECMWF Support Portal.