...

  1. Provision a new CentOS or Ubuntu instance.
  2. Select a layout ending with eumetsat-gpu and one of the plans listed above. Apart from that, configure your instance as preferred and continue the deployment process.
  3. Once the VM is deployed, you can verify the GPUs, for example with the nvidia-smi program from the command line (see below for confirming library installations and drivers).

Usage

Useful commands

You can see GPU information using nvidia-smi.

...

Code Block
$ export PATH=$PATH:/usr/local/cuda-11.4/bin/
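
To keep the CUDA tools on your PATH across sessions, you can append the same line to your shell profile (a minimal sketch, assuming a bash login shell; the profile file is an assumption):

Code Block
$ echo 'export PATH=$PATH:/usr/local/cuda-11.4/bin/' >> ~/.bashrc
$ source ~/.bashrc
$ nvcc --version    # should report release 11.4 if the path is set correctly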

Libraries

The CUDA version is currently 11.4; it must match the installed NVIDIA drivers and therefore cannot be changed. TensorFlow/CUDA compatibility information is available at https://www.tensorflow.org/install/source#gpu. We have tested that TensorFlow > 2.6.1 works.
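
As a quick compatibility check, you can ask TensorFlow which CUDA and cuDNN versions it was built against (a minimal sketch, assuming TensorFlow is already installed in the active Python environment):

Code Block
$ python3
>>> import tensorflow as tf
>>> info = tf.sysconfig.get_build_info()    # build metadata of the installed TensorFlow package
>>> print(info.get("cuda_version"), info.get("cudnn_version"))

The reported versions should be compatible with the system's CUDA 11.4 driver stack.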

Using Conda

System update and conda installation

...

Code Block
$ nvidia-smi
Mon Jan  8 10:24:59 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTXA6000...  On   | 00000000:00:05.0 Off |                    0 |
| N/A   N/A    P8    N/A /  N/A |   3712MiB / 48895MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

$ python3 --version
Python 3.8.18

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_Oct_11_21:27:02_PDT_2021
Cuda compilation tools, release 11.4, V11.4.152
Build cuda_11.4.r11.4/compiler.30521435_0

$ whereis cuda
cuda: /usr/local/cuda

$ cat /home/<USERNAME>/miniforge3/envs/myenv/include/cudnn.h
.
.
.
/*   cudnn : Neural Networks Library

*/

#if !defined(CUDNN_H_)
#define CUDNN_H_

#include <cuda_runtime.h>
#include <stdint.h>

#include "cudnn_version.h"
#include "cudnn_ops_infer.h"
#include "cudnn_ops_train.h"
#include "cudnn_adv_infer.h"
#include "cudnn_adv_train.h"
#include "cudnn_cnn_infer.h"
#include "cudnn_cnn_train.h"

#include "cudnn_backend.h"

#if defined(__cplusplus)
extern "C" {
#endif

#if defined(__cplusplus)
}
#endif

#endif /* CUDNN_H_ */

$ conda list | grep tensorflow
tensorflow                2.13.1          cuda118py38h409af0c_1    conda-forge
tensorflow-base           2.13.1          cuda118py38h52ca5c6_1    conda-forge
tensorflow-estimator      2.13.1          cuda118py38ha2f8a09_1    conda-forge
tensorflow-gpu            2.13.1          cuda118py38h0240f8b_1    conda-forge

$ conda list | grep keras
keras                     2.13.1             pyhd8ed1ab_0    conda-forge

$ python
>>> import tensorflow as tf
>>> tf.test.is_built_with_cuda()
True
>>> tf.config.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
>>> print(tf.__version__)
2.13.1
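
Beyond listing devices, you can confirm that operations are actually executed on the GPU by enabling device-placement logging and running a small computation (a minimal sketch, using the same Python environment as above):

Code Block
$ python
>>> import tensorflow as tf
>>> tf.debugging.set_log_device_placement(True)   # log the device chosen for each operation
>>> a = tf.random.uniform((1024, 1024))
>>> b = tf.random.uniform((1024, 1024))
>>> c = tf.matmul(a, b)                           # the placement log should show /device:GPU:0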


Using Docker

If you want to use GPUs in Docker, you need to take a few extra steps after creating the VM.
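
As an illustrative sketch only (not the full procedure), once the NVIDIA Container Toolkit is installed on the VM and the Docker daemon has been restarted, GPU access from a container can be verified with Docker's --gpus flag; the CUDA base image tag below is an assumption and may need to match your driver setup:

Code Block
# Assumption: NVIDIA Container Toolkit installed and Docker restarted; image tag is illustrative.
$ docker run --rm --gpus all nvidia/cuda:11.4.3-base-ubuntu20.04 nvidia-smi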

...