...
These are the different QoS (or queues) available for standard users on the four complexes:
| QoS name | Type | Suitable for... | Shared nodes | Maximum jobs per user |
|---|---|---|---|---|
GPU special Partition
On the AC complex you will find dedicated queues for the special partition with GPU-enabled nodes. See AG: GPU usage for AI and Machine Learning for all the details on how to make use of those special resources.
...
| QoS name | Type | Suitable for... | Shared nodes | Maximum jobs per user | Maximum nodes per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory per node |
|---|---|---|---|---|---|---|---|---|
| ng | GPU | serial and small parallel jobs with GPU. It is the default | Yes | - | 4 | average runtime + standard deviation / 2 days | 1 / - | 8 GB / 500 GB |
| dg | GPU | short debug jobs requiring GPU | Yes | 1 | 2 | average runtime + standard deviation / 30 min | 1 / - | 8 GB / 500 GB |
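As an illustration only, here is a minimal sketch of a batch job header targeting the ng QoS. It uses generic Slurm directives and placeholder resource values; the exact options to request GPUs on these nodes are described in AG: GPU usage for AI and Machine Learning.

```
#!/bin/bash
#SBATCH --qos=ng              # GPU QoS from the table above
#SBATCH --gpus=1              # generic Slurm GPU request; the recommended option may differ
#SBATCH --time=08:00:00       # well within the 2-day wall clock limit
#SBATCH --mem=64G             # per-node memory, below the 500 GB maximum

# Hypothetical workload: load your environment and run your GPU application
./my_gpu_app
```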
...
ECS
For those using ECS, these are the different QoS (or queues) available for standard users of this service:
Among these QoS there is one dedicated to serial and small parallel Time-Critical jobs, which is only usable through the ECACCESS Time Critical Option-1.
Interactive sessions - ecinteractive
Using the "ecinteractive" command, jobs will be submitted to one of these queues depending on whether the user has access to the full HPC service (ni queue, for Member State users) or to the ECS service (ei queue, for Co-operating State users; a service previously known as ECGATE).
| QoS name | Type | Suitable for... | Shared nodes | Maximum jobs per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory |
|---|---|---|---|---|---|---|---|
| ni | interactive | serial and small parallel interactive jobs | Yes | 1 | 12 hours / 7 days | 1 / 32 | 8 GB / 32 GB |
| ei | interactive | serial and small parallel interactive jobs - ECGATE service | Yes | 1 | 12 hours / 7 days | 1 / 4 | 8 GB / 8 GB |
See AG: Job Runtime Management for more information on how the default Wall Clock Time limit is calculated.
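As a minimal sketch, assuming the default resources in the table above are enough for your session, you would just run the command from a login node and wait for the allocation:

```
# Start an interactive session; it is placed on the ni or ei QoS
# depending on whether you have access to the full HPCF or to ECS.
ecinteractive
```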
...
If you want to get all the details of a particular QoS on the system, you may run, for example:
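One way to do this with the standard Slurm accounting client (a sketch, taking the ni QoS as an example; the exact command may differ on the Atos system):

```
# Display the configured limits (wall clock, CPUs, memory, ...) for a given QoS
sacctmgr show qos ni
```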
Submitting jobs remotely
If you are submitting jobs from a different platform via ssh, please use the dedicated *-batch nodes instead of the *-login equivalents:
- For generic remote job submission on HPCF: hpc-batch or hpc2020-batch
- For remote job submission on a specific HPCF complex: <complex_name>-batch
- For remote job submission to the ECS virtual complex: ecs-batch
For example, to submit a job from a remote platform onto the Atos HPCF:
```
ssh hpc-batch "sbatch myjob.sh"
```
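Similarly, assuming the same hypothetical job script, a submission to the ECS virtual complex would simply target the ecs-batch nodes instead:

```
ssh ecs-batch "sbatch myjob.sh"
```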
...