Your graphics card does not support CUDA 9.0.
Since I’ve seen a lot of questions about issues like this, I’m writing a broad answer on how to check whether your system is compatible with CUDA, specifically targeted at using PyTorch with CUDA support. Various circumstance-dependent options for resolving issues are described in the last section of this answer.
The system requirements to use PyTorch with CUDA are as follows:
- Your graphics card must support the required version of CUDA
- Your graphics card driver must support the required version of CUDA
- The PyTorch binaries must be built with support for the compute capability of your graphics card
Note: If you install pre-built binaries (using either pip or conda) then you do not need to install the CUDA toolkit or runtime on your system before installing PyTorch with CUDA support. This is because PyTorch, unless compiled from source, is always delivered with a copy of the CUDA library.
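Since the binaries bundle their own CUDA runtime, you can check which CUDA version your installed PyTorch build ships with directly from Python. A minimal sketch using PyTorch's built-in attributes:
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version the binary was built with (None for CPU-only builds)
print(torch.cuda.is_available())  # True only if the driver and GPU are also compatible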
1. How to check if your GPU/graphics card supports a particular CUDA version
First, identify the model of your graphics card.
Before moving forward ensure that you’ve got an NVIDIA graphics card. AMD and Intel graphics cards do not support CUDA.
NVIDIA doesn’t do a great job of providing CUDA compatibility information in a single location. The best resource is probably this section on the CUDA Wikipedia page. To determine which versions of CUDA are supported:
- Locate your graphics card model in the big table and take note of the compute capability version. For example, the compute capability of the GeForce 820M is 2.1.
- In the bullet list preceding the table, check whether the required CUDA version is supported by the compute capability of your graphics card. For example, CUDA 9.2 is not supported for compute capability 2.1.
If your card doesn’t support the required CUDA version then see the options in section 4 of this answer.
Note: Compute capability refers to the computational features supported by your graphics card. Newer versions of the CUDA library rely on newer hardware features, which is why we need to determine the compute capability in order to determine the supported versions of CUDA.
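If you already have a working PyTorch install, you can also query the compute capability programmatically rather than looking it up in the table. A minimal sketch (requires a functioning NVIDIA driver):
import torch

if torch.cuda.is_available():
    # Returns a (major, minor) tuple, e.g. (7, 5) for an RTX 2060
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")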
2. How to check if your GPU/graphics driver supports a particular CUDA version
The graphics driver is the software that allows your operating system to communicate with your graphics card. Since CUDA relies on low-level communication with the graphics card, you need an up-to-date driver in order to use the latest versions of CUDA.
First, make sure you have an NVIDIA graphics driver installed on your system. You can acquire the newest driver for your system from NVIDIA’s website.
If you’ve installed the latest driver version, then your graphics driver probably supports every CUDA version compatible with your graphics card (see section 1). To verify, you can check Table 2 in the CUDA release notes. In rare cases I’ve heard of the latest recommended graphics drivers not supporting the latest CUDA releases. You should be able to get around this by installing the CUDA toolkit for the required CUDA version and selecting the option to install compatible drivers, though this usually isn’t required.
If you can’t, or don’t want to, upgrade the graphics driver, then you can check whether your current driver supports the specific CUDA version as follows:
On Windows
- Determine your current graphics driver version (source: https://www.nvidia.com/en-gb/drivers/drivers-faq/)
Right-click on your desktop and select NVIDIA Control Panel. From the NVIDIA Control Panel menu, select Help > System Information. The driver version is listed at the top of the Details window. For more advanced users, you can also get the driver version number from the Windows Device Manager. Right-click on your graphics device under display adapters and then select Properties. Select the Driver tab and read the Driver version. The last 5 digits are the NVIDIA driver version number.
- Visit the CUDA release notes and scroll down to Table 2. Use this table to verify your graphics driver is new enough to support the required version of CUDA.
On Linux/OS X
Run the following command in a terminal window:
nvidia-smi
This should result in something like the following
Sat Apr  4 15:31:57 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21       Driver Version: 435.21       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 206...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   35C    P8    16W / 175W |    502MiB /  7974MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1138      G   /usr/lib/xorg/Xorg                           300MiB |
|    0      2550      G   /usr/bin/compiz                              189MiB |
|    0      5735      G   /usr/lib/firefox/firefox                       5MiB |
|    0      7073      G   /usr/lib/firefox/firefox                       5MiB |
+-----------------------------------------------------------------------------+
Driver Version: ###.## is your graphics driver version. In the example above, the driver version is 435.21.
CUDA Version: ##.# is the latest version of CUDA supported by your graphics driver. In the example above, the driver supports CUDA 10.1 as well as all compatible CUDA versions before 10.1.
Note: The CUDA Version displayed in this output does not indicate that the CUDA toolkit or runtime is actually installed on your system. It just indicates the latest version of CUDA your graphics driver is compatible with.
To be extra sure that your driver supports the desired CUDA version, you can visit Table 2 on the CUDA release notes page.
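If you'd rather script this check than read the full table, nvidia-smi also has a query mode. A minimal Python sketch, assuming nvidia-smi is on your PATH:
import subprocess

# Ask nvidia-smi for just the driver version (one line per GPU)
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print("Driver version:", result.stdout.strip())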
3. How to check if a particular version of PyTorch is compatible with your GPU/graphics card compute capability
Even if your graphics card supports the required version of CUDA, it’s possible that the pre-compiled PyTorch binaries were not compiled with support for your compute capability. For example, in PyTorch 0.3.1, support for compute capability <= 5.0 was dropped.
First, verify that your graphics card and driver both support the required CUDA version (see sections 1 and 2 above); the rest of this section assumes that this is the case.
The easiest way to check if PyTorch supports your compute capability is to install the desired version of PyTorch with CUDA support and run the following from a Python interpreter:
>>> import torch
>>> torch.zeros(1).cuda()
If you get an error message that reads
Found GPU0 XXXXX which is of cuda capability #.#.
PyTorch no longer supports this GPU because it is too old.
then that means PyTorch was not compiled with support for your compute capability. If this runs without issue then you should be good to go.
Update: If you’re installing an old version of PyTorch on a system with a newer GPU, it’s possible that the old PyTorch release wasn’t compiled with support for your compute capability. Assuming your GPU supports the version of CUDA used by PyTorch, you should be able to rebuild PyTorch from source with the desired CUDA version, or upgrade to a more recent version of PyTorch that was compiled with support for the newer compute capabilities.
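Recent PyTorch releases can also list the compute capabilities the installed binary was compiled for, which you can compare against your card. A short sketch (torch.cuda.get_arch_list() is only available in newer versions):
import torch

# Compute capabilities compiled into this binary, e.g. ['sm_37', 'sm_50', ...]
print(torch.cuda.get_arch_list())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"This GPU is sm_{major}{minor}")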
4. Conclusion
If your graphics card and driver support the required version of CUDA (sections 1 and 2) but the PyTorch binaries don’t support your compute capability (section 3), then your options are:
- Compile PyTorch from source with support for your compute capability (see here)
- Install PyTorch without CUDA support (CPU-only)
- Install an older version of the PyTorch binaries that supports your compute capability (not recommended, as PyTorch 0.3.1 is very outdated at this point). AFAIK, compute capabilities older than 3.x have never been supported in the pre-built binaries
- Upgrade your graphics card
If your graphics card doesn’t support the required version of CUDA (section 1), then your options are:
- Install PyTorch without CUDA support (CPU-only)
- Install an older version of PyTorch that supports a CUDA version supported by your graphics card (still may require compiling from source if the binaries don’t support your compute capability)
- Upgrade your graphics card
@SimplyLucKey Please run
python -m torch.utils.collect_env
and post its output here.
Oops, sorry. I think I uninstalled PyTorch, so I reinstalled it and it worked this time.
However, I have been running into an error with memory allocation, and the failing allocations are only a few MB big (I have 8 GB RAM).
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<timed exec> in <module>
<ipython-input-19-334bcbb678d7> in train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples)
10 targets = i['targets'].to(device)
11
---> 12 outputs = model(input_ids=input_ids, attention_mask=attention_mask)
13 _, preds = torch.max(outputs, dim=1) # process with the highest probability
14 loss = loss_fn(outputs, targets)
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-16-80e63a2794b9> in forward(self, input_ids, attention_mask)
7
8 def forward(self, input_ids, attention_mask):
----> 9 _, pooled_output = self.bert(input_ids=input_ids, attention_mask=attention_mask, return_dict=False)
10 output = self.drop(pooled_output)
11 return self.out(output)
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
956 past_key_values_length=past_key_values_length,
957 )
--> 958 encoder_outputs = self.encoder(
959 embedding_output,
960 attention_mask=extended_attention_mask,
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
557 )
558 else:
--> 559 layer_outputs = layer_module(
560 hidden_states,
561 attention_mask,
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions)
493 present_key_value = present_key_value + cross_attn_present_key_value
494
--> 495 layer_output = apply_chunking_to_forward(
496 self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
497 )
~\anaconda3\lib\site-packages\transformers\modeling_utils.py in apply_chunking_to_forward(forward_fn, chunk_size, chunk_dim, *input_tensors)
1785 return torch.cat(output_chunks, dim=chunk_dim)
1786
-> 1787 return forward_fn(*input_tensors)
~\anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py in feed_forward_chunk(self, attention_output)
505
506 def feed_forward_chunk(self, attention_output):
--> 507 intermediate_output = self.intermediate(attention_output)
508 layer_output = self.output(intermediate_output, attention_output)
509 return layer_output
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py in forward(self, hidden_states)
408
409 def forward(self, hidden_states):
--> 410 hidden_states = self.dense(hidden_states)
411 hidden_states = self.intermediate_act_fn(hidden_states)
412 return hidden_states
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\anaconda3\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~\anaconda3\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1690 ret = torch.addmm(bias, input, weight.t())
1691 else:
-> 1692 output = input.matmul(weight.t())
1693 if bias is not None:
1694 output += bias
RuntimeError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 4.00 GiB total capacity; 2.66 GiB already allocated; 27.74 MiB free; 2.88 GiB reserved in total by PyTorch)
How to Troubleshoot PyTorch’s torch.cuda.is_available() Returning False in Windows 10
If you’re a data scientist or software engineer working with deep learning frameworks, you’re likely familiar with PyTorch. PyTorch is a popular open-source machine learning library that provides a flexible and efficient platform for building and training deep neural networks. It’s known for its ease of use, dynamic computation graphs, and support for both CPU and GPU acceleration.
One of the key benefits of using PyTorch is its ability to leverage GPU acceleration to speed up training and inference. However, if you’re running PyTorch on Windows 10 and you’ve installed a compatible CUDA driver and GPU, you may encounter an issue where torch.cuda.is_available() returns False. This can be frustrating, as it means that PyTorch is not able to use your GPU for acceleration.
In this article, we’ll explore some common causes of this issue and provide some troubleshooting steps to help you get PyTorch running on your GPU.
Check Your CUDA Version
The first thing to check is whether your version of CUDA is compatible with your GPU and PyTorch. PyTorch has specific requirements for the version of CUDA that it supports, and using an incompatible version can cause torch.cuda.is_available() to return False.
To check your CUDA version, you can run the following command in a command prompt or PowerShell window:
nvcc --version
This will display the version of the CUDA toolkit that is installed on your system. You can then check the PyTorch documentation to see which versions of CUDA are supported by your version of PyTorch. If your version of CUDA is not supported, you will need to install a compatible version.
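Note that nvcc reports the locally installed toolkit, while the pip/conda PyTorch binaries bundle their own CUDA runtime. A quick sketch to see the CUDA version your PyTorch build was actually compiled against:
import torch

# CUDA version bundled with the PyTorch binary (None indicates a CPU-only build)
print("PyTorch built with CUDA:", torch.version.cuda)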
Check Your GPU Drivers
Another possible cause of torch.cuda.is_available() returning False is outdated or incompatible GPU drivers. PyTorch requires a compatible NVIDIA GPU driver to be installed in order to use GPU acceleration. If your driver is outdated or incompatible, PyTorch may not be able to detect your GPU.
To check your GPU driver version, you can run the following command in a command prompt or PowerShell window:
nvidia-smi
This will display information about your NVIDIA GPU, including the driver version. You can then check the NVIDIA website to see if there is a newer version of the driver available for your GPU. If there is, you should download and install it.
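As a cross-check from the PyTorch side, a CUDA-enabled build that cannot see any device often points at a driver problem. A minimal sketch:
import torch

# With a CUDA build installed, 0 here usually indicates a driver or detection issue
print("Visible CUDA devices:", torch.cuda.device_count())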
Check Your Environment Variables
PyTorch relies on several environment variables to locate the CUDA libraries and other dependencies. If these environment variables are not set correctly, PyTorch may not be able to detect your GPU.
To check your environment variables, you can open the Start menu and search for “Environment Variables”. This will open the System Properties window, where you can click the “Environment Variables” button.
In the Environment Variables window, you should see a list of system variables and user variables. Look for the following variables:
- CUDA_HOME: This should be set to the path where CUDA is installed, such as C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0.
- PATH: This should include the path to the CUDA libraries, such as %CUDA_HOME%\bin.
- CUDNN_HOME: This should be set to the path where the cuDNN library is installed, such as C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\cuDNN\bin.
- PATH: This should also include the path to the cuDNN library, such as %CUDNN_HOME%\bin.
If any of these variables are missing or set incorrectly, you should update them to reflect the correct paths.
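You can also inspect these variables quickly from Python. A minimal sketch that prints the variables listed above (adjust the names if your setup differs):
import os

for var in ("CUDA_HOME", "CUDNN_HOME"):
    print(var, "=", os.environ.get(var, "<not set>"))

# List any CUDA-related entries on PATH
cuda_paths = [p for p in os.environ.get("PATH", "").split(os.pathsep) if "cuda" in p.lower()]
print("CUDA-related PATH entries:", cuda_paths or "<none>")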
Check Your PyTorch Installation
Finally, it’s possible that there is an issue with your PyTorch installation itself. If PyTorch is not installed correctly or is missing dependencies, it may not be able to detect your GPU.
To check your PyTorch installation, you can run the following command in a Python shell:
import torch
print(torch.__version__)
This will display the version of PyTorch that is installed on your system. You can then check the PyTorch documentation to see if there are any known issues with your version of PyTorch. If there are, you may need to update or reinstall PyTorch.
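A common cause of torch.cuda.is_available() returning False is having accidentally installed the CPU-only build, which cannot use a GPU regardless of your drivers. A quick sketch to detect that case:
import torch

if torch.version.cuda is None:
    print("CPU-only build of PyTorch; reinstall a CUDA-enabled build.")
else:
    print(f"Built with CUDA {torch.version.cuda}; is_available() -> {torch.cuda.is_available()}")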
Conclusion
If you’re experiencing issues with torch.cuda.is_available() returning False in Windows 10, there are several possible causes that you should investigate. By checking your CUDA version, GPU drivers, environment variables, and PyTorch installation, you can identify and resolve the issue so that you can take advantage of GPU acceleration in PyTorch.
Remember to always check the documentation for PyTorch and your GPU drivers to ensure compatibility and avoid any potential issues. With the right setup and troubleshooting steps, you can unlock the full potential of PyTorch for your deep learning projects.