The available filesystems HOME, PERM, HPCPERM and SCRATCH are the same as on other ECMWF platforms such as HPCF, ECS and VDI, so no data transfers are needed from those platforms.
If you wish to transfer data to or from other sources, see AG: File transfers for more information.
| File System | Suitable for ... | Technology | Features | Quota |
|---|---|---|---|---|
| HOME | permanent files, e.g. profile, utilities, sources, libraries, etc. | NFS | Backed up. Snapshots available (see AG: Recovering data from snapshots). Throttled I/O bandwidth from parallel compute nodes (lower performance). | 10 GB |
| PERM | permanent files without the need for automated backups, smaller input files for serial or small processing, etc. | NFS | No backup. Snapshots available (see AG: Recovering data from snapshots). Throttled I/O bandwidth from parallel compute nodes (lower performance). DO NOT USE IN PARALLEL APPLICATIONS. DO NOT USE FOR JOB STANDARD OUTPUT/ERROR. | 500 GB |
| HPCPERM | permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, standard output, etc. | Lustre | No backup. No snapshots. No automatic deletion. | |
| SCRATCH | all temporary (large) files. Main storage for your jobs' and experiments' input and output files. | Lustre | Automatic deletion after 30 days of last access, in place since 27 March 2023 (see the example below the table). No snapshots. No backup. | |
| SCRATCHDIR | big temporary data for an individual session or job; not as fast as TMPDIR but higher capacity. Files accessible from the whole cluster. | Lustre | Deleted at the end of the session or job. Created per session/job as a subdirectory in SCRATCH. | part of SCRATCH quota |
| TMPDIR | fast temporary data for an individual session or job, small files only. Local to every node. | NVMe | Deleted at the end of the session or job. Created per session/job. | To request more space in your jobs you can use the Slurm directive described under "More capacity for $TMPDIR?" below. |
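To see which of your files are at risk under the 30-day SCRATCH policy, you can list anything not accessed recently with standard tools; a minimal sketch using GNU find:

```bash
# List files under SCRATCH that have not been accessed for over 30 days
# and are therefore candidates for automatic deletion.
find "$SCRATCH" -type f -atime +30 -ls
```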
Environment variables
These filesystems can be conveniently referenced from your session and scripts using the environment variables of the same name: $HOME, $PERM, $HPCPERM, $SCRATCH, $SCRATCHDIR and $TMPDIR.
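For example, a job could stage its input from $PERM onto the Lustre-backed $SCRATCH, run there, and copy only the results somewhere permanent. A minimal sketch; the directory, file and executable names are hypothetical:

```bash
#!/bin/bash
set -e

# Hypothetical working directory on SCRATCH (fast, but purged after 30 days)
workdir=$SCRATCH/myexperiment
mkdir -p "$workdir"
cd "$workdir"

# Small permanent input kept on PERM (hypothetical file)
cp "$PERM/inputs/config.nml" .

# Hypothetical executable; run on SCRATCH, not on NFS
./run_model config.nml > model.log

# Keep what matters on a filesystem with no automatic deletion
cp model.log "$HPCPERM/results/"
```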
More capacity for $TMPDIR?
When running on the shared GPIL nodes (*f and *i QoSs), you may request more space in the SSD-backed TMPDIR with the extra SBATCH option:
#SBATCH --gres=ssdtmp:<size>G
Here <size> is a number of GB, up to 20 on ECS and 100 on HPCF. If that is still not enough for you, you may point your TMPDIR to SCRATCHDIR:
export TMPDIR=$SCRATCHDIR
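Putting the two together, a batch job might look like the following. This is a minimal sketch assuming a job that needs roughly 50 GB of fast local scratch; the job name, QoS choice and command are hypothetical:

```bash
#!/bin/bash
#SBATCH --job-name=tmpdir-demo   # hypothetical job name
#SBATCH --qos=nf                 # one of the shared *f QoSs mentioned above
#SBATCH --gres=ssdtmp:50G        # ask for 50 GB of SSD-backed TMPDIR

# If even that is not enough, fall back to the Lustre-backed SCRATCHDIR:
# export TMPDIR=$SCRATCHDIR

cd "$TMPDIR"
./process_data                   # hypothetical command using fast local storage
```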
How much have I used?
You can check your current usage and limits with the "quota" command on hpc-login.
ssh hpc-login quota
Special directories in RAM
Some special directories are not disk-based but are actually mapped into the node's main memory. They are unique to every session, and their size counts against the memory resources requested by the job.
| Directory |
|---|
| /tmp |
| /var/tmp |
| /dev/shm |
| $XDG_RUNTIME_DIR |
Because these directories consume the job's memory allocation, users should instead use the general purpose filesystems available, and in particular $TMPDIR or $SCRATCHDIR, for temporary storage per session or job.
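Many tools that write to /tmp by default can be redirected. For instance, GNU sort accepts a temporary directory via -T, so something like the following keeps large intermediate files off the RAM-backed /tmp; the input file name is hypothetical:

```bash
# Keep sort's temporary files on the node-local TMPDIR
# instead of the RAM-backed /tmp
sort -T "$TMPDIR" big_input.txt > sorted.txt
```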
Automatic cleanup
Any data left in those spaces will be automatically deleted at the end of the job or session.