Warning |
---|
Due to the unavailability of NFS-based filesystems at this stage, HOME and PERM are temporarily set up on Lustre. |
The filesystems available are HOME, PERM, HPCPERM and SCRATCH, and they are completely isolated from those on other ECMWF platforms in Reading, such as ECGATE or the Cray HPCF.
...
File System | Suitable for ... | Technology | Features | Quota
---|---|---|---|---
HOME | permanent files, e.g. .profile, utilities, sources, libraries, etc. | NFS | It is backed up. Snapshots available. See HPC2020: Recovering data from snapshots. Throttled I/O bandwidth from parallel compute nodes (less performance). | Limited
PERM | permanent files without the need for automated backups, smaller input files for serial or small processing, etc. | NFS | No backup. Snapshots available. See HPC2020: Recovering data from snapshots. Throttled I/O bandwidth from parallel compute nodes (less performance). DO NOT USE IN PARALLEL APPLICATIONS. DO NOT USE FOR JOB STANDARD OUTPUT/ERROR. | 1 TB
HPCPERM | permanent files without the need for automated backups, bigger input files for parallel model runs, climate files, std output, etc. | Lustre | No backup. No snapshots. No automatic deletion. | 10 TB
SCRATCH | all temporary (large) files. Main storage for your jobs' and experiments' input and output files. | Lustre | Automatic deletion after 30 days of last access (implemented since 27 March 2023). No snapshots. No backup. | 50 TB
SCRATCHDIR | big temporary data for an individual session or job, not as fast as TMPDIR but with higher capacity. Files accessible from the whole cluster. | Lustre | Deleted at the end of the session or job. Created per session/job as a subdirectory in SCRATCH. | part of SCRATCH quota
TMPDIR | fast temporary data for an individual session or job, small files only. Local to every node. | SSD on shared (GPIL) nodes (*f QoSs); RAM on exclusive parallel compute nodes (*p QoSs) | Deleted at the end of the session or job. Created per session/job. | SSD: 3 GB per session/job by default, customisable up to 40 GB with an extra SBATCH option (see Tip below). RAM: no limit (maximum memory of the node).
...
Tip |
---|
When running on the shared GPIL nodes (*f QoSs), you may request more space in the SSD-backed TMPDIR with an extra SBATCH option:
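The exact directive is not reproduced on this page; the following is a minimal sketch, assuming the SSD space is exposed as a Slurm generic resource named ssdtmp (the GRES name is an assumption):

```bash
# Hypothetical directive: the "ssdtmp" GRES name is an assumption, <size> in GB
#SBATCH --gres=ssdtmp:<size>G
```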
Here <size> is a number of gigabytes, up to 40 GB. If that is still not enough, you may point your TMPDIR to SCRATCHDIR:
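For example, in your job script (a sketch using the SCRATCHDIR variable described in the table above):

```bash
# Redirect TMPDIR to the per-job SCRATCHDIR on Lustre: larger, but slower than the local SSD
export TMPDIR=$SCRATCHDIR
```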
...
Tip |
---|
You can check your current usage and limits with the "quota" command. |
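For example, from a login session on either complex (the output is user-specific and not shown here):

```bash
# Report current usage and limits on your filesystems
quota
```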
...
Filesystem structure
You will notice that the filesystems now have a flatter and simpler name structure. If you port any scripts or code from older platforms that had filesystem paths hardcoded, please make sure you update them. Where possible, use the environment variables provided, which should work on both the ecs and hpc complexes, pointing to the right location in each case:
...
Users should instead use the general-purpose filesystems available, and in particular $TMPDIR or $SCRATCHDIR for temporary storage per session or job, as sketched after the note below. See HPC2020: Filesystems.
Note |
---|
Any data left on those spaces will be automatically deleted at the end of the job or session. |
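As an illustration of the advice above, here is a minimal job sketch; the commands and target directories are placeholders, not part of this documentation:

```bash
#!/bin/bash
# Work in the per-job temporary space instead of a hardcoded local path
cd $TMPDIR
./my_preprocessing > work.dat   # "my_preprocessing" is a placeholder for your own program

# Anything left here is deleted when the job ends, so copy results you want
# to keep to a permanent filesystem such as $HPCPERM
cp work.dat $HPCPERM/results/   # illustrative target directory
```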
Special Filesystems
Below is a list of the specialised filesystems available on Atos:
Directory | Content | Comment |
---|---|---|
/ec/vol/msbackup | Backup of the conventional observations for the last four days. One file per synoptic cycle. | Available both on ecs and hpc. |
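For example, to see which backup files are currently available (a sketch; the file naming convention is not documented here):

```bash
# List the conventional-observation backup files
ls -l /ec/vol/msbackup
```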
Project Filesystems
Certain projects with special requirements have dedicated filesystems or volumes that are automatically mounted. Those project volumes share the same backend as PERM, so similar features apply:
The quota command will not show the limits for those filesystems. If you have any queries about them, please raise an issue via our ECMWF Support Portal.