Introduction

EUMETSAT infrastructure contains NVIDIA RTX A6000 GPU cards. To employ a GPU, one needs to provision one of the following flavors:

Flavor name   vCPU   RAM      vGPU Type      vGPU RAM   SSD storage (GB)
vm.a6000.1    2      14 GB    RTXA6000-6C    6 GB       40
vm.a6000.2    4      28 GB    RTXA6000-12C   12 GB      80
vm.a6000.4    8      56 GB    RTXA6000-24C   24 GB      160
vm.a6000.8    16     112 GB   RTXA6000-48C   48 GB      320
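When choosing a flavor, the deciding factor is usually how much GPU memory your workload needs. The table above can be encoded as a small lookup to pick the smallest sufficient flavor; the figures come from the table, while the `smallest_flavor` helper is purely illustrative.

```python
# Flavor specifications from the table above (RAM, vGPU RAM and SSD in GB).
FLAVORS = {
    "vm.a6000.1": {"vcpu": 2,  "ram": 14,  "vgpu": "RTXA6000-6C",  "vgpu_ram": 6,  "ssd": 40},
    "vm.a6000.2": {"vcpu": 4,  "ram": 28,  "vgpu": "RTXA6000-12C", "vgpu_ram": 12, "ssd": 80},
    "vm.a6000.4": {"vcpu": 8,  "ram": 56,  "vgpu": "RTXA6000-24C", "vgpu_ram": 24, "ssd": 160},
    "vm.a6000.8": {"vcpu": 16, "ram": 112, "vgpu": "RTXA6000-48C", "vgpu_ram": 48, "ssd": 320},
}

def smallest_flavor(min_vgpu_ram_gb: int):
    """Return the smallest flavor offering at least the given vGPU RAM, or None."""
    candidates = [name for name, spec in FLAVORS.items()
                  if spec["vgpu_ram"] >= min_vgpu_ram_gb]
    if not candidates:
        return None
    return min(candidates, key=lambda name: FLAVORS[name]["vgpu_ram"])

# A model needing ~10 GB of GPU memory fits the 12 GB flavor:
print(smallest_flavor(10))  # vm.a6000.2
```

This keeps you from over-provisioning: request the vGPU RAM your model actually needs and let the lookup pick the flavor.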

Provision

To use the GPUs:

  1. Provision a new Ubuntu GPU instance.
  2. Select a layout ending with eumetsat-gpu and one of the flavors listed above. Besides that, configure your instance as preferred and continue the deployment process.
  3. Once the VM is deployed, you can verify the GPUs, for example using the nvidia-smi program from the command line (see below for confirming driver and library installations).

Usage

Essential commands

You can see GPU information using nvidia-smi.

Checking the GPU drivers
# Login to your instance and run below command
$ nvidia-smi

# Check that the output shows the NVIDIA-SMI, Driver and CUDA versions. You can also see the GPU hardware (e.g., RTXA6000-6C) and the GPU memory
Mon Apr 13 11:41:49 2026       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTXA6000-48Q            On  |   00000000:00:05.0 Off |                    0 |
| N/A   N/A    P8            N/A  /  N/A  |       1MiB /  49152MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
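Besides the human-readable table, nvidia-smi also offers a machine-readable query interface (`--query-gpu` with `--format=csv`), which is handy for monitoring scripts. Below is a minimal sketch; the `parse_gpu_query` and `query_gpus` helpers are hypothetical names, and the sample line is illustrative, loosely modelled on the output above.

```python
import csv
import io
import subprocess

def parse_gpu_query(csv_text: str):
    """Parse the CSV emitted by `nvidia-smi --query-gpu=... --format=csv,noheader`."""
    fields = ("name", "memory_total", "driver_version")
    return [dict(zip(fields, (cell.strip() for cell in row)))
            for row in csv.reader(io.StringIO(csv_text))]

def query_gpus():
    """Run nvidia-smi on the instance and return one dict per GPU."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_query(out)

# Illustrative sample resembling the table output above:
sample = "NVIDIA RTXA6000-48Q, 49152 MiB, 570.124.06\n"
print(parse_gpu_query(sample))
```

On a GPU instance, `query_gpus()` returns one dictionary per visible GPU without any table scraping.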

As of the 13th of April 2026, GPU instances come with CUDA 12.8 and NVIDIA driver 570.124.06. The instructions here have been tested with these versions, but there is no guarantee that they will all work with future versions.


Adding NVIDIA tools to path
# NVIDIA tools are available in /usr/local/cuda-<cuda_version>/bin/. You can add them to PATH as follows:
$ export PATH=$PATH:/usr/local/cuda-<cuda_version>/bin/

Installing Libraries

You can install a variety of libraries using different methods. Below is a basic tutorial showing how to install libraries such as TensorFlow, Keras and PyTorch with the conda package manager. TensorFlow GPU compatibility information is available at: https://www.tensorflow.org/install/source#gpu.

Using conda

Since October 2024, PyTorch has officially stopped supporting installation through conda. If you intend to use PyTorch in your project, we recommend you use the pip installation method below.


Conda installation
# install miniforge (or any conda manager)
$ wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh

# make it executable
$ chmod +x Miniforge3-Linux-x86_64.sh

# run the installer
$ ./Miniforge3-Linux-x86_64.sh


Library installations with conda
# create a conda environment called ML with a specific Python version, e.g. 3.8
$ conda create -n ML python=3.8

# activate the environment
$ conda activate ML

# install packages; note that installing tensorflow-gpu and keras also pulls in a number of extra libraries such as the CUDA toolkit, cuDNN (CUDA Deep Neural Network library), NumPy, SciPy and Pillow
(ML) $ conda install tensorflow-gpu keras

# (OPTIONAL) cudatoolkit is installed automatically while installing keras and tensorflow-gpu, but if you need a specific (or the latest) version, run the command below.
(ML) $ conda install -c anaconda cudatoolkit

Using pip

Library installations with pip
# create a python environment
$ python3 -m venv .venv

# activate this environment
$ source .venv/bin/activate

# upgrade pip
(.venv) $ python3 -m pip install --upgrade pip

# install tensorflow packages; note that the GPU version of tensorflow requires the CUDA toolkit, as well as other libraries such as cuDNN (CUDA Deep Neural Network library)
(.venv) $ python3 -m pip install 'tensorflow[and-cuda]'

# install keras
(.venv) $ python3 -m pip install keras

# install pytorch; note that the index URL must be specified to match a specific CUDA version, in this case 12.8
(.venv) $ python3 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
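Whichever installation route you take, a short script can confirm that each framework sees the GPU without opening an interpreter. This is an illustrative sketch; it only reports on the frameworks actually installed in the active environment, returning None for a framework that is not installed.

```python
def gpu_visibility():
    """Report whether installed ML frameworks can see a GPU (None = not installed)."""
    report = {}
    try:
        import torch
        report["torch_cuda"] = torch.cuda.is_available()
    except ImportError:
        report["torch_cuda"] = None
    try:
        import tensorflow as tf
        report["tf_gpus"] = len(tf.config.list_physical_devices("GPU"))
    except ImportError:
        report["tf_gpus"] = None
    return report

if __name__ == "__main__":
    # On a working GPU instance with both frameworks installed, expect
    # something like {'torch_cuda': True, 'tf_gpus': 1}
    print(gpu_visibility())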


Using Docker

If you want to use GPUs in docker, you need to take few extra steps after creating the VM.

Install docker on Ubuntu
$ sudo apt install -y docker.io
$ sudo usermod -aG docker $USER
Install docker on CentOS
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce docker-ce-cli containerd.io
$ sudo systemctl --now enable docker
$ sudo usermod -aG docker $USER

To provide support for docker to use the GPU, you need to install the NVIDIA Container Toolkit.  You can follow instructions on NVIDIA's website or basically do:

Install necessary packages for GPU support in Docker and restart docker on Ubuntu
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
Install necessary packages and restart docker on CentOS
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
$ sudo yum clean expire-cache && sudo yum install -y nvidia-docker2
$ sudo systemctl restart docker

Test the install with:

nvidia-smi in docker test
$ docker run  --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
Wed Feb 28 13:20:24 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02   Driver Version: 470.223.02   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTXA6000-6C  On   | 00000000:00:05.0 Off |                    0 |
| N/A   N/A    P8    N/A /  N/A |    512MiB /  5976MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

And run something useful..

Run tensorflow JupyterNotebooks
$ sudo docker run --gpus all --env NVIDIA_DISABLE_REQUIRE=1 -it --rm -v $(realpath ~/notebooks):/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter

Testing the installation

Regardless of the method of installation, to test whether the installation worked, and the GPU works as expected, here we run a few initial commands for different libraries & drivers for confirming the library integrations with the GPU.

As stated above, these tests were done with the versions of Python and CUDA present in instances created as of April 2026. The output might vary if older or newer versions are used.


Check python version
$ python3 --version
Python 3.10.12
Check NVIDIA Cuda compiler driver
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Fri_Feb_21_20:23:50_PST_2025
Cuda compilation tools, release 12.8, V12.8.93
Build cuda_12.8.r12.8/compiler.35583870_0
Check TensorFlow and keras installations in Conda environment
# Check with conda which versions are installed
$ conda list | grep -E 'tensorflow|keras|torch'
tensorflow                2.13.1          cuda118py38h409af0c_1    conda-forge
tensorflow-base           2.13.1          cuda118py38h52ca5c6_1    conda-forge
tensorflow-estimator      2.13.1          cuda118py38ha2f8a09_1    conda-forge
tensorflow-gpu            2.13.1          cuda118py38h0240f8b_1    conda-forge
keras                     2.13.1             pyhd8ed1ab_0    conda-forge
Check Tensorflow, Keras and Pytorch installations with pip
# Check with pip which versions are installed
$ pip freeze | grep -E 'tensorflow|keras|torch'
keras==3.12.1
tensorflow==2.21.0
torch==2.11.0+cu128
torchaudio==2.11.0+cu128
torchvision==0.26.0+cu128
Enter python interpreter
# Now test if the libraries work by running some commands in python
$ python
Check TensorFlow installation
>>> import tensorflow as tf

>>> tf.test.is_built_with_cuda()
True

>>> tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

>>> tf.__version__
'2.21.0'
Check Pytorch and CUDA installation
>>> import torch

>>> torch.__version__
'2.11.0+cu128'

>>> torch.cuda.is_available()
True

>>> torch.version.cuda
'12.8'

# Create a tensor and move it to GPU
>>> x = torch.tensor([1.0, 2.0]).cuda()
tensor([1., 2.], device='cuda:0')