
Slurm is the batch system available on the Atos HPCF and ECS. Any script can be submitted as a job with no changes, but you may want to see Writing SLURM jobs to customise it.

To submit a script as a serial job with default options enter the command:

sbatch yourscript.sh
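
If you want to customise the resources or the QoS used by the job, the usual approach is to add #SBATCH directives at the top of the script. The following is a minimal sketch only; the job name, output file and resource values are illustrative and should be adapted to your needs and to the limits of the QoS you use:

#!/bin/bash
#SBATCH --job-name=myjob        # illustrative job name
#SBATCH --qos=nf                # QoS to run in (nf is the default, see the tables below)
#SBATCH --time=06:00:00         # wall clock limit
#SBATCH --ntasks=1              # number of tasks
#SBATCH --mem=8G                # memory request
#SBATCH --output=myjob-%j.out   # %j is replaced by the job ID

# Your commands go here
echo "Running on $(hostname)"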

You may query the queues to see the jobs currently running or pending with:

squeue
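
By default squeue lists all jobs in the system. Using the standard Slurm user filter you can restrict the output to your own jobs:

squeue -u $USER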

And cancel a job with:

scancel <jobid>
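
scancel also accepts the standard Slurm user filter, so you can, for example, cancel all of your own jobs at once:

scancel -u $USER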

See the Slurm documentation for more details on the different commands available to submit, query or cancel jobs.

QoS available

These are the different QoS (or queues) available for standard users on the four complexes:

QoS name | Type        | Suitable for...                               | Shared nodes | Maximum jobs per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory
nf       | fractional  | serial and small parallel jobs (the default)  | Yes          | -                     | 6 hours / 2 days               | 1 / 64             | 8 GB / 128 GB
ni       | interactive | serial and small parallel interactive jobs   | Yes          | 1                     | 1 day / 7 days                 | 1 / 32             | 8 GB / 32 GB
np       | parallel    | parallel jobs requiring more than half a node | No           | -                     | 6 hours / 2 days               | -                  | -
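
To run in a QoS other than the default, request it either on the command line or with a directive in the script. For example, to request the parallel QoS (np in the table above) for an illustrative script named myparalleljob.sh:

sbatch --qos=np myparalleljob.sh

or, equivalently, inside the script itself:

#SBATCH --qos=np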

ECS

For those using ECS, these are the different QoS (or queues) available for standard users of this service:

QoS name | Type                   | Suitable for...                                                                                    | Shared nodes | Maximum jobs per user | Default / Max Wall Clock Limit | Default / Max CPUs | Default / Max Memory
ef       | fractional             | serial and small parallel jobs - ECGATE service                                                    | Yes          | -                     | 12 hours / 2 days              | 1 / 8              | 8 GB / 16 GB
ei       | interactive            | serial and small parallel interactive jobs - ECGATE service                                        | Yes          | 1                     | 12 hours / 7 days              | 1 / 4              | 8 GB / 8 GB
el       | long                   | serial and small parallel interactive jobs - ECGATE service                                        | Yes          | -                     | 12 hours / 7 days              | 1 / 8              | 8 GB / 16 GB
et       | Time-critical Option 1 | serial and small parallel Time-Critical jobs; only usable through ECACCESS Time Critical Option-1 | Yes          | -                     | 12 hours / 12 hours            | 1 / 8              | 8 GB / 16 GB


Checking QoS setup

If you want to get all the details of a particular QoS on the system, you may run, for example:

sacctmgr list qos names=nf
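
You can also narrow the output to specific columns with the standard format option of sacctmgr. The field names below (MaxWall, MaxTRESPU, MaxJobsPU) are the usual sacctmgr QoS fields, but may differ slightly between Slurm versions:

sacctmgr list qos names=nf format=Name,MaxWall,MaxTRESPU,MaxJobsPU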

Work in progress

Different limits on the different QoSs may be introduced or changed as the system evolves to its final configuration.

Submitting jobs remotely

If you are submitting jobs from a different platform via ssh, please use the dedicated *-batch nodes instead of the *-login equivalents:

  • For generic remote job submission on HPCF: hpc-batch or hpc2020-batch
  • For remote job submission on a specific HPCF complex: <complex_name>-batch
  • For remote job submission to the ECS virtual complex: ecs-batch

For example, to submit a job from a remote platform onto the Atos HPCF:

ssh hpc-batch "sbatch myjob.sh"

