
You need Python 3.7 – 3.10 installed; other versions are not supported by TensorFlow 2.8.
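A quick way to see which interpreter version you have, so you can compare it against the range supported by your TensorFlow release (stdlib only):

```python
import sys

# Print the interpreter version so you can confirm it falls in
# the range supported by your TensorFlow release.
print("Python %d.%d.%d" % sys.version_info[:3])
```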

It is highly recommended to install TensorFlow inside a virtual environment, using Conda or virtualenv. Here is a simple example using virtualenv:

python3 -m pip install virtualenv
virtualenv gpu
source gpu/bin/activate
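Once the environment is activated, you can confirm that python now resolves inside it (a minimal check using only the standard library):

```python
import sys

# In an activated virtualenv, sys.prefix points at the environment,
# while sys.base_prefix still points at the base interpreter.
in_venv = sys.prefix != sys.base_prefix
print("inside a virtual environment:", in_venv)
```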

CentOS 7

Install the prerequisites and TensorFlow 2.8:

sudo yum -y install epel-release
sudo yum update -y
sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel bzip2-devel libffi-devel xz-devel
python3 -m pip install tensorflow==2.8.0
OS=rhel7 && \
sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/${OS}/x86_64/cuda-${OS}.repo && \
sudo yum clean all && \
sudo yum install -y libcudnn8.x86_64 libcudnn8-devel.x86_64
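After the packages are installed, you can check whether the dynamic loader can locate cuDNN before involving TensorFlow at all. This is a small stdlib-only sketch; on a machine without the NVIDIA packages, a result of None is expected:

```python
import ctypes.util

# find_library returns a name for libcudnn if the loader can
# resolve it, or None if the library is not on its search path.
path = ctypes.util.find_library("cudnn")
print("libcudnn found:", path)
```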

Then check whether TensorFlow sees the GPUs by running:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

You should see an output similar to this one:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]


Ubuntu 20.04

Begin by installing the nvidia-cuda-toolkit:

sudo apt update
sudo apt upgrade
sudo apt install nvidia-cuda-toolkit

After installing the nvidia-cuda-toolkit, you can install cuDNN 8.4.0 by downloading it from the NVIDIA cuDNN download page. You will be asked to log in or create an NVIDIA account. After logging in and accepting the terms of the cuDNN software license agreement, you will see a list of available cuDNN downloads.

Once downloaded, untar the file, copy its contents into your CUDA installation, and adjust the permissions:

tar -xvf cudnn-linux-x86_64-8.4.0.27_cuda11.6-archive.tar.xz
sudo cp -v cudnn-linux-x86_64-8.4.0.27_cuda11.6-archive/include/cudnn*.h /usr/local/cuda/include/
sudo cp -v cudnn-linux-x86_64-8.4.0.27_cuda11.6-archive/lib/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
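To verify that the variable took effect in the current shell, a short stdlib check (the path is assumed to match the commands above):

```python
import os

# Split LD_LIBRARY_PATH on ':' and look for the CUDA library directory.
entries = os.environ.get("LD_LIBRARY_PATH", "").split(":")
print("/usr/local/cuda/lib64 on loader path:", "/usr/local/cuda/lib64" in entries)
```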

Now install TensorFlow with pip:

python3 -m pip install tensorflow==2.8.0
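You can confirm the exact installed version without importing the full framework by using importlib.metadata from the standard library (requires Python 3.8+; a minimal sketch):

```python
from importlib import metadata

# Query pip's installed-package metadata rather than importing
# tensorflow itself, which is slow and needs the GPU libraries.
try:
    print("tensorflow", metadata.version("tensorflow"))
except metadata.PackageNotFoundError:
    print("tensorflow is not installed in this environment")
```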

Then check whether TensorFlow sees the GPUs by running:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

You should see an output similar to this one:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]