The European Weather Cloud is in its pilot phase, so access to its resources may be limited. Please see the Terms and Conditions.

Access to GPUs is not enabled by default for all tenants. If you wish to use them, please raise an issue on the support portal or contact support to request access.

The current pilot infrastructure at ECMWF features 2x5 NVIDIA Tesla V100 cards targeting machine learning workloads. They are exposed as virtual GPUs (vGPUs) to the instances on the cloud, which allows multiple VMs to transparently share the same physical GPU card.

How to provision a GPU-enabled instance

Once your tenant is granted access to the GPUs, creating a new VM with access to a virtual GPU is straightforward. Follow the process in Provision a new instance - web, paying special attention to the Configuration step:

  1. On the Library screen, choose CentOS. Ubuntu is currently not supported.
  2. On Layout, select the item with the "-gpu" suffix.
  3. On Plan, pick one of the plans with the "gpu" suffix, depending on the resources needed, including the amount of GPU memory:
    1. 8cpu-4gbmem-20gbdisk-4gbgpu
    2. 8cpu-8gbmem-20gbdisk-4gbgpu
    3. 8cpu-32gbmem-40gbdisk-8gbgpu
    4. 16cpu-32gbmem-80gbdisk-16gbgpu
  4. Finish the process normally.
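The plan names above encode the allocated resources directly. As an illustration (the helper below is not part of any provided tooling), they can be parsed like this:

```python
import re

def parse_plan(name):
    """Split a plan name such as '8cpu-4gbmem-20gbdisk-4gbgpu'
    into its resource figures (CPUs, RAM, disk and GPU memory)."""
    fields = re.fullmatch(r"(\d+)cpu-(\d+)gbmem-(\d+)gbdisk-(\d+)gbgpu", name)
    cpu, mem, disk, gpu = (int(x) for x in fields.groups())
    return {"cpus": cpu, "memory_gb": mem, "disk_gb": disk, "gpu_memory_gb": gpu}

print(parse_plan("16cpu-32gbmem-80gbdisk-16gbgpu"))
# {'cpus': 16, 'memory_gb': 32, 'disk_gb': 80, 'gpu_memory_gb': 16}
```

For example, the largest plan listed provides 16 CPUs, 32 GB of RAM, an 80 GB disk and a 16 GB slice of GPU memory.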

Once your instance is running, you can check whether it can see the GPU with:

$> nvidia-smi
Tue Nov 17 15:20:38 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.87       Driver Version: 440.87       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID V100-4C        On   | 00000000:00:05.0 Off |                    0 |
| N/A   N/A    P0    N/A /  N/A |    304MiB /  4096MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
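Beyond eyeballing the nvidia-smi output, the memory figures can also be checked programmatically. A minimal Python sketch, assuming nvidia-smi is on the PATH (the gpu_memory helper is illustrative only, not part of any provided tooling):

```python
import re
import subprocess

def gpu_memory(smi_output):
    """Extract (used, total) GPU memory in MiB from nvidia-smi output,
    e.g. the '304MiB / 4096MiB' field in the table above."""
    used, total = (int(x) for x in re.findall(r"(\d+)MiB", smi_output)[:2])
    return used, total

if __name__ == "__main__":
    # Requires the NVIDIA driver to be installed and the vGPU visible.
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    used, total = gpu_memory(out)
    print(f"GPU memory: {used} MiB used of {total} MiB")
```

On the sample output above this would report 304 MiB used of 4096 MiB, matching the 4 GB vGPU slice of the smallest plans.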