This example has been tested on the EUMETSAT side of the EWC.

The TensorFlow library has many dependencies that span the whole stack (from the application level down to the hardware), and new versions are released frequently by the community. For the purposes of this documentation on using GPUs with TensorFlow, the following assumptions have been made:

  • Python 3.6 - 3.8 installed.

To run this example, you need the following packages in your environment:

  • tensorflow

You can check this documentation on how to install packages in a Python environment and how to handle Python environments for reproducibility.
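
As a quick sanity check of these assumptions, you can run a short snippet inside your environment (a minimal sketch; it only checks the interpreter version and that TensorFlow imports):

import sys

# The examples below assume Python 3.6 - 3.8
assert (3, 6) <= sys.version_info[:2] <= (3, 8), "Python 3.6 - 3.8 expected"

import tensorflow as tf
print(tf.__version__)  # e.g. 2.8.0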

CentOS 7

Install the prerequisites and TensorFlow 2.8:

# Build tools and libraries needed to compile Python packages
sudo yum -y install epel-release
sudo yum update -y
sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel bzip2-devel libffi-devel xz-devel
# TensorFlow itself, pinned to 2.8.0
pipenv install tensorflow==2.8.0
# Add the NVIDIA CUDA repository and install the cuDNN 8 runtime and headers
OS=rhel7 && \
sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.repo && \
sudo yum clean all && \
sudo yum install -y libcudnn8.x86_64 libcudnn8-devel.x86_64
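
Before moving on to the TensorFlow check, you can verify that the cuDNN runtime installed above is actually loadable (a minimal sketch; ctypes raises OSError if libcudnn.so.8 is not on the loader path):

import ctypes

# Succeeds silently if the libcudnn8 package installed above is visible to the loader
ctypes.CDLL("libcudnn.so.8")
print("cuDNN 8 runtime found")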

Then check whether TensorFlow sees the GPUs by running:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

You should see an output similar to this one:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
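
You can also cross-check the CUDA and cuDNN versions this TensorFlow build was compiled against, to confirm they match what was installed above (a short sketch; the exact keys of the returned dictionary can vary between builds):

import tensorflow as tf

build = tf.sysconfig.get_build_info()
print(build.get("cuda_version"))   # CUDA version TensorFlow was built against
print(build.get("cudnn_version"))  # cuDNN major version it expects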


Ubuntu 20.04

Begin by installing the nvidia-cuda-toolkit:

sudo apt update
sudo apt upgrade
sudo apt install nvidia-cuda-toolkit

After installing the nvidia-cuda-toolkit, you can install cuDNN 8.4.0 by downloading it from this link. You'll be asked to log in or create an NVIDIA account. After logging in and accepting the terms of the cuDNN software license agreement, you will see a list of available cuDNN software.

Once downloaded, untar the file, copy its contents into your CUDA libraries and change the permissions:

# Unpack the cuDNN archive downloaded from NVIDIA
tar -xvf cudnn-linux-x86_64-8.4.0.27_cuda11.6-archive.tar.xz
# Copy the cuDNN header and libraries into the CUDA installation
sudo cp -v cudnn-linux-x86_64-8.4.0.27_cuda11.6-archive/include/cudnn.h /usr/local/cuda/include/
sudo cp -v cudnn-linux-x86_64-8.4.0.27_cuda11.6-archive/lib/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
# Make the CUDA and CUPTI libraries visible to the dynamic loader
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/include:$LD_LIBRARY_PATH' >> ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64
source ~/.bashrc
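
If TensorFlow later complains about missing CUDA or cuDNN libraries, a quick way to confirm that these exports took effect in your shell is to inspect LD_LIBRARY_PATH from Python (a minimal sketch matching the paths used above):

import os

ld_path = os.environ.get("LD_LIBRARY_PATH", "")
print(ld_path)
# The cuDNN libraries were copied to /usr/local/cuda/lib64 above
assert "/usr/local/cuda/lib64" in ld_path.split(":")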

Now install TensorFlow with pipenv (or conda):

pipenv install tensorflow==2.8.0

Then check whether TensorFlow sees the GPUs by running:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

You should see an output similar to this one:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
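
As a final check, you can run a small computation and confirm it is actually placed on the GPU (a minimal sketch; set_log_device_placement also prints the device of every operation as it runs):

import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# A small matrix multiplication; with a visible GPU it should land on GPU:0
a = tf.random.normal([1000, 1000])
b = tf.random.normal([1000, 1000])
c = tf.matmul(a, b)
print(c.device)  # e.g. /job:localhost/replica:0/task:0/device:GPU:0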