I have installed CUDA 9.0 on my machine, which has an NVIDIA GTX 1080 graphics card. When I run the command nvcc --version, I get:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:32_Central_Daylight_Time_2017
Cuda compilation tools, release 9.0, V9.0.176
I have tried the steps from the official TensorFlow site to install TF with GPU support, but it is still using the CPU.
I have tried both the pip install and the Anaconda install, with the same result: neither was able to detect the GPU. I then tried many other tutorials on the web that reportedly detect the GPU, but mine is still not detected.
What could be the reason? Has anything changed in the new GPU version of TF? If so, what is the latest documentation for installing TF with GPU support? If not, where am I going wrong?
Thanks!
Update 1: TensorFlow is really wasting my time, which is very annoying. At first I decided to build TF from source to use it with CUDA 10, but I was unable to build it successfully on either Windows 10 or Ubuntu 18.04. So I gave up and decided to use CUDA 9.0, which is not supported on Ubuntu 18.04, so I came back to Windows, but even the prebuilt TF library is still not working. Really frustrating.
I don't know why TF still requires CUDA 9.0 when CUDA 10.0 has already been officially released, or why TF still does not support Python 3.7. Amazing, isn't it? The same goes for MS Build Tools 2015, when the 2017 version already exists, and many other tools. TF relies on old versions of these tools, which causes a lot of problems for people who have to uninstall newer versions they are still using. It is very annoying.
Update 2: nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 417.71 Driver Version: 417.71 CUDA Version: 9.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 WDDM | 00000000:01:00.0 On | N/A |
| 27% 35C P8 8W / 180W | 498MiB / 8192MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1264 C+G Insufficient Permissions N/A |
| 0 2148 C+G ...0108.0_x64__8wekyb3d8bbwe\HxOutlook.exe N/A |
| 0 4360 C+G ...mmersiveControlPanel\SystemSettings.exe N/A |
| 0 7332 C+G C:\Windows\explorer.exe N/A |
| 0 7384 C+G ...t_cw5n1h2txyewy\ShellExperienceHost.exe N/A |
| 0 8488 C+G ...dows.Cortana_cw5n1h2txyewy\SearchUI.exe N/A |
| 0 9704 C+G ...osoft.LockApp_cw5n1h2txyewy\LockApp.exe N/A |
| 0 10588 C+G ...al\Google\Chrome\Application\chrome.exe N/A |
| 0 10904 C+G ...x64__8wekyb3d8bbwe\Microsoft.Photos.exe N/A |
| 0 12608 C+G ...DIA GeForce Experience\NVIDIA Share.exe N/A |
| 0 13000 C+G ...241.0_x64__8wekyb3d8bbwe\Calculator.exe N/A |
| 0 14668 C+G ...ng4wbp0\app\DellMobileConnectClient.exe N/A |
| 0 17628 C+G ...2.0_x64__8wekyb3d8bbwe\WinStore.App.exe N/A |
| 0 18060 C+G ...oftEdge_8wekyb3d8bbwe\MicrosoftEdge.exe N/A |
+-----------------------------------------------------------------------------+
As a data scientist, you may have encountered a common issue while working with TensorFlow — your GPU is not being detected. This can be frustrating, especially if you have invested in a powerful GPU to accelerate your deep learning models. In this blog post, we will explore the reasons why TensorFlow may not be detecting your GPU, and provide step-by-step instructions to troubleshoot and resolve this issue.
Why is TensorFlow not detecting my GPU?
There could be several reasons why TensorFlow is not detecting your GPU. Here are a few common issues:
- CUDA toolkit not installed: TensorFlow requires the NVIDIA CUDA toolkit to be installed on your system in order to use the GPU. If the CUDA toolkit is not installed, TensorFlow will default to using the CPU for computations.
- Incompatible GPU: TensorFlow requires a GPU with a minimum compute capability of 3.0. If your GPU does not meet this requirement, TensorFlow will not be able to use it.
- TensorFlow not compiled with GPU support: If you installed TensorFlow from pip or conda, it may not have been compiled with GPU support. In this case, you will need to build TensorFlow from source with GPU support enabled.
- GPU driver issues: If the GPU driver is not installed or is outdated, TensorFlow may not be able to detect the GPU.
How to troubleshoot TensorFlow not detecting GPU
Now that we know the common reasons why TensorFlow may not be detecting your GPU, let’s dive into the troubleshooting steps.
Step 1: Check your GPU compute capability
The first step is to check if your GPU meets the minimum compute capability required by TensorFlow. You can find the compute capability of your GPU on NVIDIA’s website. If your GPU does not meet the minimum requirement, you will need to upgrade your GPU.
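If you prefer to check this from a script, a rough sketch like the one below asks the driver directly through nvidia-smi (this assumes nvidia-smi is on your PATH; the compute_cap query field only exists in fairly recent drivers, so on older drivers look the card up on NVIDIA's website as described above):
import subprocess

# Ask the NVIDIA driver for each GPU's name and compute capability.
# Note: the "compute_cap" field requires a fairly recent driver.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,compute_cap", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip())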
Step 2: Check if the CUDA toolkit is installed
The next step is to check if the CUDA toolkit is installed on your system. You can do this by running the following command:
nvcc --version
If the command is not found, it means that the CUDA toolkit is not installed. You can download and install the CUDA toolkit from NVIDIA’s website.
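nvcc reports the toolkit installed system-wide; it is also worth checking which CUDA and cuDNN versions your installed TensorFlow build was compiled against, since a mismatch is a common cause of the GPU not being picked up. A minimal sketch, assuming a recent TensorFlow 2.x release that exposes tf.sysconfig.get_build_info():
import tensorflow as tf

# Build metadata for the installed TensorFlow package.
# Older releases may not expose all of these keys, hence .get().
info = tf.sysconfig.get_build_info()
print("CUDA build:", info.get("is_cuda_build"))
print("Built against CUDA:", info.get("cuda_version"))
print("Built against cuDNN:", info.get("cudnn_version"))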
Step 3: Check if TensorFlow is compiled with GPU support
If you installed TensorFlow from pip or conda, it may not have been compiled with GPU support. To check if TensorFlow is compiled with GPU support, you can run the following command:
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
If the output is an empty list, TensorFlow cannot see a GPU. That does not always mean the build lacks GPU support, but if it turns out you do have a CPU-only build, you will need to install a GPU-enabled build or build TensorFlow from source with GPU support enabled.
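To tell a CPU-only build apart from a GPU build that merely cannot find the driver or CUDA libraries, you can also ask TensorFlow how it was compiled. A minimal sketch using the standard tf.test.is_built_with_cuda() check (assuming TensorFlow 2.x):
import tensorflow as tf

# True means the installed package was compiled with CUDA (GPU) support.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# GPUs that are actually usable at runtime (needs driver + CUDA/cuDNN).
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))
If the first line prints False, you have a CPU-only build; if it prints True but the list is empty, the problem is more likely the driver or the CUDA/cuDNN installation.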
Step 4: Check if the GPU driver is installed and up-to-date
If the above steps did not resolve the issue, it may be a GPU driver issue. You can check if the GPU driver is installed and up-to-date by running the following command:
nvidia-smi
If the command is not found, it means that the GPU driver is not installed. You can download and install the GPU driver from NVIDIA’s website. If the GPU driver is installed, you can check if it is up-to-date by comparing the driver version with the latest version available on NVIDIA’s website.
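If you want to automate this check, for example in an environment sanity script, a rough sketch like the following calls nvidia-smi from Python and prints the driver version (driver_version and name are standard nvidia-smi query fields):
import shutil
import subprocess

if shutil.which("nvidia-smi") is None:
    # nvidia-smi ships with the driver, so a missing binary usually means
    # the driver is not installed (or is simply not on PATH on Windows).
    print("nvidia-smi not found - the NVIDIA driver is probably not installed")
else:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version,name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip() or out.stderr.strip())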
Step 5: Verify TensorFlow is using the GPU
Finally, after making sure that all the above steps have been followed, we can verify that TensorFlow is using the GPU by running the following code:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
If the output is greater than 0, TensorFlow can see your GPU and will place supported operations on it.
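Counting visible devices only shows that TensorFlow can see the GPU. To confirm that work is actually being placed on it, you can enable device placement logging and run a small operation; this is a short sketch using the standard tf.debugging.set_log_device_placement API:
import tensorflow as tf

# Log the device that every operation is placed on.
tf.debugging.set_log_device_placement(True)

a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
c = tf.matmul(a, b)

# Expect something like '/job:localhost/replica:0/task:0/device:GPU:0'
print(c.device)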
Conclusion
In this blog post, we explored the common reasons why TensorFlow may not be detecting your GPU and provided step-by-step instructions to troubleshoot and resolve the issue. By following these steps, you can accelerate your deep learning models by using the full potential of your GPU. Happy coding!
If TensorFlow doesn't detect your GPU, it will default to the CPU, which means heavy training jobs will take a very long time to complete. This is most likely because the CUDA and cuDNN libraries are not being correctly detected on your system.
I am assuming that you have already installed TensorFlow with GPU support. If you haven't, check this article:
To check that GPU support is enabled, run the following from a terminal:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
If your GPU is detected you should see something similar to this output:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
But if you are unlucky, then you will instead get the following output:
[]
Or you might get an obscure error like the below:
2022-05-24 20:29:24.352218: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: UNKNOWN ERROR (100)
2022-05-24 20:29:24.352261: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (c37259b3e9a1): /proc/driver/nvidia/version does not exist
In both cases, Tensorflow is not detecting your Nvidia GPU. This can be for a variety of reasons:
- Nvidia Driver not installed
- CUDA not installed, or incompatible version
- CuDNN not installed or incompatible version
- Tensorflow running on Docker but without Nvidia drivers installed in host, or Nvidia Docker not installed
- etc
Now that you are sure that Tensorflow is not detecting your GPU, it’s time to install Tensorflow correctly. Check the article below:
Problem Description:
I’ve tried tensorflow on both cuda 7.5 and 8.0, w/o cudnn (my GPU is old, cudnn doesn’t support it).
When I execute device_lib.list_local_devices(), there is no GPU in the output. Theano sees my GPU and works fine with it, and the examples in /usr/share/cuda/samples work fine as well.
I installed tensorflow through pip install. Is my GPU (a GTX 460) too old for TF to support it?
Solution – 1
When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through https://developer.nvidia.com/cuda-gpus) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0.
https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux
You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU.
Solution – 2
I came across this same issue in jupyter notebooks. This could be an easy fix.
$ pip uninstall tensorflow
$ pip install tensorflow-gpu
You can check if it worked with:
import tensorflow as tf
print(tf.test.gpu_device_name())
Update 2020
It seems that tensorflow 2.0+ comes with GPU support included, therefore
pip install tensorflow
should be enough
Solution – 3
The following worked for me on an HP laptop with an Nvidia card of CUDA compute capability 3.0, running Windows 7.
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe install tensorflow-gpu
Solution – 4
If you are using conda, you might have installed the CPU version of TensorFlow. Check the package list of the environment (conda list) to see if this is the case. If so, remove the package with conda remove tensorflow and install keras-gpu instead (conda install -c anaconda keras-gpu). This will install everything you need to run your machine learning code on the GPU. Cheers!
P.S. You should first check that you have installed the drivers correctly using nvidia-smi. By default, it is not on your PATH, so you may need to add its folder to your PATH. The .exe file can be found at C:\Program Files\NVIDIA Corporation\NVSMI.
Solution – 5
Summary:
- check if tensorflow sees your GPU (optional)
- check if your videocard can work with tensorflow (optional)
- find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version
- install CUDA Toolkit
- install cuDNN SDK
- pip uninstall tensorflow; pip install tensorflow-gpu
- check if tensorflow sees your GPU
* source – https://www.tensorflow.org/install/gpu
Detailed instruction:
- check if tensorflow sees your GPU (optional)
from tensorflow.python.client import device_lib

def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())
# my output was => ['/device:CPU:0']
# good output must be => ['/device:CPU:0', '/device:GPU:0']
- check if your card can work with tensorflow (optional)
  - my PC: GeForce GTX 1060 notebook (driver version – 419.35), windows 10, jupyter notebook
  - tensorflow needs Compute Capability 3.5 or higher. (https://www.tensorflow.org/install/gpu#hardware_requirements)
  - https://developer.nvidia.com/cuda-gpus
  - select “CUDA-Enabled GeForce Products”
  - result – “GeForce GTX 1060 Compute Capability = 6.1”
  - my card can work with tf!
- find versions of CUDA Toolkit and cuDNN SDK that you need
  a) find your tf version
import sys
print(sys.version)
# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
import tensorflow as tf
print(tf.__version__)
# my output was => 1.13.1
  b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version
  https://www.tensorflow.org/install/source#linux
  * it is written for linux, but worked in my case
  * see that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4
- install CUDA Toolkit
  a) install CUDA Toolkit 10.0:
  https://developer.nvidia.com/cuda-toolkit-archive
  select: CUDA Toolkit 10.0 and download base installer (2 GB)
  installation settings: select only CUDA (my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)
  b) add environment variables:
  system variables / path must have:
  D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
  D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
  D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
  D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
- install cuDNN SDK
  a) download cuDNN SDK v7.4:
  https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)
  select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"
  b) add path to ‘bin’ folder into “environment variables / system variables / path”:
  D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin
- pip uninstall tensorflow
  pip install tensorflow-gpu
- check if tensorflow sees your GPU
  - restart your PC
  - print(get_available_devices())
  - # now this code should return => ['/device:CPU:0', '/device:GPU:0']
Solution – 6
In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:
pip uninstall tensorflow-gpu==1.14
pip install tensorflow-gpu==1.14
Solution – 7
I experienced the same problem on my Windows OS. I followed tensorflow’s instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above – with no success.
What solved my issue was to update my GPU drivers. You can update them via:
- Pressing windows-button + r
- Entering devmgmt.msc
- Right-Clicking on «Display adapters» and clicking on the «Properties» option
- Going to the «Driver» tab and selecting «Updating Driver».
- Finally, click on «Search automatically for updated driver software»
- Restart your machine and run the following check again:
from tensorflow.python.client import device_lib
local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
Sample output:
2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']
Solution – 8
So as of 2022-04, the tensorflow package contains both CPU and GPU builds. To install a GPU build, search to see what’s available:
λ conda search tensorflow
Loading channels: done
# Name Version Build Channel
tensorflow 0.12.1 py35_1 conda-forge
tensorflow 0.12.1 py35_2 conda-forge
tensorflow 1.0.0 py35_0 conda-forge
…
tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main
tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main
tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main
tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main
tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main
tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main
You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for MKL (Intel CPU), Eigen, or GPU.
To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:
λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name Version Build Channel
tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main
tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main
tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main
tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main
tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main
tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main
tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main
tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main
tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main
tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main
To install a specific version in an otherwise empty environment, you can use a command like:
λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:
_tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu
…
cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
…
tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
…
As you can see, if you install a GPU build, it will automatically also install compatible cudatoolkit and cudnn packages. You don’t need to manually check versions for compatibility, manually download several gigabytes from Nvidia’s website, or register as a developer, as other answers and the official website suggest.
After installation, confirm that it worked and it sees the GPU by running:
λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Getting conda to install a GPU build together with the other packages you want to use is another story, however, because (at least for me) there are a lot of package incompatibilities. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.
For instance, this tries to install any TF 2.x version that is built for GPU and whose dependencies are compatible with Spyder and matplotlib:
λ conda install tensorflow=2*=gpu* spyder matplotlib
For me, this ended up installing a two-year-old GPU build of tensorflow:
matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0
I had previously been using the tensorflow-gpu package, but that doesn’t work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it succeeds, it doesn’t actually install a GPU build of tensorflow or the CUDA dependencies:
λ conda list
…
cookiecutter 1.7.2 pyhd3eb1b0_0
cryptography 3.4.8 py38h71e12ea_0
cycler 0.11.0 pyhd3eb1b0_0
dataclasses 0.8 pyh6d0b6a4_7
…
tensorflow 2.3.0 mkl_py38h8557ec7_0
tensorflow-base 2.3.0 eigen_py38h75a453f_0
tensorflow-estimator 2.6.0 pyh7b7c402_0
tensorflow-gpu 2.3.0 he13fc11_0
Solution – 9
I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was
conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0
Although I checked that the CUDA toolkit version was compatible with the TensorFlow version, it was still returning an error saying that libcudart.so.11.0 was not found. As a result, no GPUs were visible. The remedy was to set the environment variable LD_LIBRARY_PATH to point to your anaconda3/envs/<your_tensorflow_environment>/lib with this command:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib
Unless you make it permanent, you will need to set this variable every time you start a terminal before a session (e.g. a Jupyter notebook). It can be conveniently automated by following this procedure from conda’s official website.
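As a quick sanity check that the loader can now find the CUDA runtime from the conda environment, a sketch like this can be run in the same terminal (Linux-only; the soname libcudart.so.11.0 matches the error message above):
import ctypes

# Raises OSError if LD_LIBRARY_PATH still does not point at the CUDA runtime.
ctypes.CDLL("libcudart.so.11.0")
print("libcudart.so.11.0 was found")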
Solution – 10
I had a problem because I didn’t specify the version of TensorFlow when installing, so I got version 2.11. After many hours I found that my problem is described in the install guide:
Caution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin
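A quick way to check whether you have hit this exact trap is to compare your platform and TensorFlow version; a small sketch (the version parsing is simplistic but fine for release builds):
import platform
import tensorflow as tf

print(platform.system(), tf.__version__)

# GPU support on native Windows ended with TF 2.10; 2.11+ needs WSL2
# or the TensorFlow-DirectML plugin.
major, minor = (int(x) for x in tf.__version__.split(".")[:2])
if platform.system() == "Windows" and (major, minor) >= (2, 11):
    print("This TensorFlow version cannot use a GPU on native Windows.")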
Before that, I read most of the answers to this and similar questions. I followed @AndrewPt’s answer. I had already installed CUDA, but updated the version just in case, installed cuDNN, and restarted the computer.
The easiest solution for me was to downgrade to 2.10 (you can try different options mentioned in the install guide). I first uninstalled all of these packages (probably it’s not necessary, but I didn’t want to see how pip messed up versions at 2 am):
pip uninstall keras
pip uninstall tensorflow-io-gcs-filesystem
pip uninstall tensorflow-estimator
pip uninstall tensorflow
pip uninstall Keras-Preprocessing
pip uninstall tensorflow-intel
because I wanted only the packages required for the old version, and I didn’t uninstall every package required for the 2.11 version. After that I installed TensorFlow 2.10:
pip install "tensorflow<2.11"
and it worked.
I used this code to check whether the GPU is visible:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))