How to Install cuDNN on Windows


Once again, after reinstalling Windows, I realized I had to set up the drivers, CUDA, cuDNN, and Tensorflow/Keras to train neural networks.

Each time this turns out to be a simple but time-consuming task: finding a working combination of Tensorflow/Keras, CUDA, cuDNN, and Python is not hard, but I only remember about these dependencies at the moment when, importing Tensorflow, I see that the GPU is not detected, and then start hunting for the right page in the Tensorflow documentation.

This time the situation got a bit more complicated. Besides Tensorflow, I also needed to install PyTorch, with its own dependencies and supported versions of Python, CUDA, and cuDNN.

After a few hours of experimenting, I decided to collect all the useful links in one post for my future self.

A short recipe for installing Tensorflow and PyTorch

Note: Tensorflow and PyTorch can be installed in a single virtual environment, but that variant is not covered in this article.

Preparing for installation

  1. Determine which Python version is supported by both Tensorflow and PyTorch (at the time of writing I could not install PyTorch in a virtual environment with Python 3.9.5)
  2. For the chosen Python version, find compatible versions of Tensorflow and PyTorch
  3. Determine which CUDA versions are supported by the chosen Tensorflow and PyTorch versions
  4. Determine which cuDNN version Tensorflow supports: not every cuDNN version compatible with your CUDA version is supported by Tensorflow. I did not notice this limitation with PyTorch

Installing CUDA and cuDNN

  1. Download the appropriate CUDA version and install it. You can accept all the default settings
  2. Download the cuDNN build that matches the chosen Tensorflow version (step 1.2). Downloading cuDNN requires registering on the NVidia site. "Installing" cuDNN amounts to unpacking the archive and copying its files over the existing CUDA installation files

Installing Tensorflow

  1. Create a virtual environment for Tensorflow with the chosen Python version. Let's call it, for example, py38tf
  2. Switch to the py38tf environment and install the supported Tensorflow version: pip install tensorflow==x.x.x (see the example after this list)
  3. Check GPU support with the command
    python -c "import tensorflow as tf; print('CUDA available' if tf.config.list_physical_devices('GPU') else 'CUDA not available')"
    

Installing PyTorch

  1. Create a virtual environment for PyTorch with the chosen Python version. Let's call it, for example, py38torch
  2. Switch to the py38torch environment and install the supported PyTorch version (see the example after this list)
  3. Check GPU support with the command
python -c "import torch; print('CUDA available' if torch.cuda.is_available() else 'CUDA not available')"

In my case, the following combination worked:

  • Python 3.8.8
  • NVidia driver 441.22
  • CUDA 10.1
  • cuDNN 7.6
  • Tensorflow 2.3.0
  • PyTorch 1.7.1+cu101

Tensorflow and PyTorch are installed in separate virtual environments.

Summary

The value of this article will not become obvious any time soon: I do not reinstall my system very often.

If you follow this recipe and find any mistakes, please let me know in the comments.

If you liked the article, you can check out my telegram channel, where I post short notes about Python, .NET, and Go.

CUDA Install Guide

This is a must-read guide if you want to set up a new Deep Learning PC. It covers the installation of the following:

  • NVIDIA Driver
  • CUDA Toolkit
  • cuDNN
  • TensorRT

Recommendation

The Debian installation method is recommended for all CUDA Toolkit, cuDNN, and TensorRT installations.

For PyTorch, CUDA 11.0 and CUDA 10.2 are recommended.

For TensorFlow, versions up to CUDA 10.2 are supported.

TensorRT is not yet supported on Ubuntu 20.04, so Ubuntu 18.04 is recommended.

Install NVIDIA Driver

Windows

Windows Update automatically installs and updates the NVIDIA driver.

Linux

Update first:

sudo apt update
sudo apt upgrade

Check latest and recommended drivers:

sudo ubuntu-drivers devices

Install recommended driver automatically:

sudo ubuntu-drivers install

Or install a specific driver version using:

sudo apt install nvidia-driver-xxx

Then reboot:
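sudo reboot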

Verify the Installation

After reboot, verify using:
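nvidia-smi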

Install CUDA Toolkit

Installation Steps

  1. Go to https://developer.nvidia.com/cuda-toolkit-archive and choose the CUDA Toolkit version that is compatible with the framework you want to use.
  2. Select your OS.
  3. Select your system architecture.
  4. Select your OS version.
  5. Select the installer type and follow the steps provided (.exe on Windows; .run or .deb on Linux).

Post-Installation Actions

The Windows .exe CUDA Toolkit installer automatically adds the CUDA Toolkit-specific environment variables, so you can skip the following section.

Before the CUDA Toolkit can be used on a Linux system, you need to add the CUDA Toolkit path to the PATH variable.

Open a terminal and run the following command.

export PATH=/usr/local/cuda-11.1/bin${PATH:+:${PATH}}

or add this line to your .bashrc file.
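For example, one way to persist it (assuming the default CUDA 11.1 install location used above):

echo 'export PATH=/usr/local/cuda-11.1/bin${PATH:+:${PATH}}' >> ~/.bashrc
source ~/.bashrc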

In addition, when using the runfile installation method, you also need to add LD_LIBRARY_PATH variable.

For a 64-bit system:

export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

For a 32-bit system:

export LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Note: The above paths change when using a custom install path with the runfile installation method.

Verify the Installation

Check the CUDA Toolkit version with:
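nvcc --version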

Install cuDNN

The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.

  1. Go to https://developer.nvidia.com/cudnn and click «Download cuDNN».
  2. You need to sign in to proceed.
  3. Then, check «I Agree to the Terms…».
  4. Click on the cuDNN version compatible with your installed CUDA version. (If you don’t find the desired cuDNN version, click on «Archived cuDNN Releases» and find your version. If you don’t know which version to install, the latest cuDNN version is recommended).

Windows

  1. Choose «cuDNN Library for Windows (x86)» and download. (That is the only one available for Windows).

  2. Extract the downloaded zip file to a directory of your choice.

  3. Copy the following files into the CUDA Toolkit directory.

    a. Copy <extractpath>\cuda\bin\cudnn*.dll to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\bin.

    b. Copy <extractpath>\cuda\include\cudnn*.h to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\include.

    c. Copy <extractpath>\cuda\lib\x64\cudnn*.lib to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\lib\x64.

Linux

Download the two files named:

  1. cuDNN Runtime Library for …
  2. cuDNN Developer Library for …

for your installed OS version.

Then, install the downloaded files with the following command:

sudo dpkg -i libcudnn8_x.x.x...deb
sudo dpkg -i libcudnn8-dev_x.x.x...deb
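If you want to confirm that both packages were registered, one way is to query dpkg:

dpkg -l | grep libcudnn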

Install TensorRT

TensorRT is meant for high-performance inference on NVIDIA GPUs. TensorRT takes a trained network, which consists of a network definition and a set of trained parameters, and produces a highly optimized runtime engine that performs inference for that network.

  1. Go to https://developer.nvidia.com/tensorrt and click «Download Now».
  2. You need to sign in to proceed.
  3. Click on the desired TensorRT version. (If you don’t know which version to install, the latest TensorRT version is recommended).
  4. Then, check «I Agree to the Terms…».
  5. Click on the desired TensorRT sub-version. (If you don’t know which version to install, the latest version is recommended).

Windows

  1. Download the «TensorRT 7.x.x for Windows10 and CUDA xx.x ZIP package» that matches your CUDA version.
  2. Unzip the downloaded archive.
  3. Copy the DLL files from <extractpath>/lib to your CUDA installation directory C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vx.x\bin

Then install the uff, graphsurgeon and onnx_graphsurgeon wheel packages.

pip install <extractpath>\graphsurgeon\graphsurgeon-x.x.x-py2.py3-none-any.whl
pip install <extractpath>\uff\uff-x.x.x-py2.py3-none-any.whl
pip install <extractpath>\onnx_graphsurgeon\onnx_graphsurgeon-x.x.x-py2.py3-none-any.whl

Linux

Download «TensorRT 7.x.x for Ubuntu xx.04 and CUDA xx.x DEB local repo package» that matches your OS version, CUDA version and CPU architecture.

Then install with:

os="ubuntuxx04"
tag="cudax.x-trt7.x.x.x-ga-yyyymmdd"

sudo dpkg -i nv-tensorrt-repo-${os}-${tag}_1-1_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub

sudo apt update
sudo apt install -y tensorrt

If you plan to use TensorRT with TensorFlow, install this also:

sudo apt install uff-converter-tf

Verify the Installation

For Linux,
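dpkg -l | grep TensorRT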

You should see packages related to TensorRT.

Upgrading TensorRT

Download and install the new version as if you had not installed it before. You do not need to uninstall the previous version.

Uninstalling TensorRT

sudo apt purge "libnvinfer*"
sudo apt purge graphsurgeon-tf onnx-graphsurgeon
sudo apt autoremove
sudo pip3 uninstall tensorrt
sudo pip3 uninstall uff
sudo pip3 uninstall graphsurgeon
sudo pip3 uninstall onnx-graphsurgeon

PyCUDA

PyCUDA is used within Python wrappers to access NVIDIA’s CUDA APIs.

Install PyCUDA with:
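pip3 install pycuda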

If you want to upgrade PyCUDA for a newer CUDA version, or if you change the CUDA version, you need to uninstall and reinstall PyCUDA.

For that purpose, do the following:

  1. Uninstall the existing PyCUDA.
  2. Upgrade CUDA.
  3. Install PyCUDA again.

References

  • Official CUDA Toolkit Installation
  • Official cuDNN Installation
  • Official TensorRT Installation

PROGRAMMING:

How to Install the NVIDIA CUDA Driver, CUDA Toolkit, cuDNN, and TensorRT on Windows

Good, simple guides with step-by-step instructions

Summary:

This article installs the drivers and programs required to use NVIDIA GPUs for training models and running batch inference. It downloads and installs the CUDA driver, the CUDA Toolkits, and the CUDA Toolkit updates. It downloads, unzips, and moves the cuDNN and TensorRT files into the CUDA directory. It also configures, builds, and runs the BlackScholes sample to test the GPU.

Table of contents:

  1. Install the requirements
  2. Install the CUDA driver
  3. Install CUDA Toolkit 10
  4. Install CUDA Toolkit 11
  5. Install the cuDNN library
  6. Install the TensorRT library
  7. Test the GPU on a CUDA sample

Appendix:

  1. Tutorials: artificial intelligence setup
  2. Tutorials: artificial intelligence course
  3. Tutorials: artificial intelligence repositories

Install the requirements:

This section downloads and installs Visual Studio with C and C++ support.

# open the powershell shell
1. press “⊞ windows”
2. enter “powershell” into the search bar
3. right-click "windows powershell"
4. click “run as administrator”
# download the visual studio 2019 installer
invoke-webrequest -outfile "$home\downloads\vsc.exe" -uri https://download.visualstudio.microsoft.com/download/pr/45dfa82b-c1f8-4c27-a5a0-1fa7a864ae21/9dd77a8d1121fd4382494e40840faeba0d7339a594a1603f0573d0013b0f0fa5/vs_Community.exe
# open the visual studio 2019 installer
invoke-item "$home\downloads\vsc.exe"
# install visual studio 2019
1. check “desktop development with c++”
2. click "install"

Install the CUDA driver:

This section downloads and installs the latest CUDA driver available at the time of writing.

# download the cuda driver installer
invoke-webrequest -outfile "$home\downloads\cuda_driver.exe" -uri https://us.download.nvidia.com/Windows/471.68/471.68-desktop-win10-win11-64bit-international-nsd-dch-whql.exe
# open the cuda driver installer
invoke-item "$home\downloads\cuda_driver.exe"
# install the cuda driver
1. select “nvidia graphics driver”
2. click "agree & continue"
3. click "next"

Install CUDA Toolkit 10:

This section downloads and installs CUDA Toolkit 10 and its updates.

# download the cuda toolkit 10 installer
invoke-webrequest -outfile "$home\downloads\cuda_toolkit_10.exe" https://developer.download.nvidia.com/compute/cuda/10.2/Prod/network_installers/cuda_10.2.89_win10_network.exe
# open the cuda toolkit 10 installer
invoke-item "$home\downloads\cuda_toolkit_10.exe"
# install cuda toolkit 10
1. click "agree & continue"
2. click "next"
3. select custom (advanced)
4. click "next"
5. uncheck “nvidia geforce experience components”
6. uncheck “driver components”
7. uncheck “other components”
8. click "next"
# download the cuda 10 update 1 installer
invoke-webrequest -outfile "$home\downloads\cuda_10_update_1.exe" https://developer.download.nvidia.com/compute/cuda/10.2/Prod/patches/1/cuda_10.2.1_win10.exe
# open the cuda 10 update 1 installer
invoke-item "$home\downloads\cuda_10_update_1.exe"
# install the cuda 10 update 1
1. click "agree & continue"
2. click "next"
# download the cuda 10 update 2 installer
invoke-webrequest -outfile "$home\downloads\cuda_10_update_2.exe" https://developer.download.nvidia.com/compute/cuda/10.2/Prod/patches/2/cuda_10.2.2_win10.exe
# open the cuda 10 update 2 installer
invoke-item "$home\downloads\cuda_10_update_2.exe"
# install the cuda 10 update 2
1. click "agree & continue"
2. click "next"

Install CUDA Toolkit 11:

This section downloads and installs CUDA Toolkit 11.

# download the cuda toolkit 11 installer
invoke-webrequest -outfile "$home\downloads\cuda_toolkit_11.exe" https://developer.download.nvidia.com/compute/cuda/11.4.1/network_installers/cuda_11.4.1_win10_network.exe
# open the cuda toolkit 11 installer
invoke-item "$home\downloads\cuda_toolkit_11.exe"
# install cuda toolkit 11
1. click "agree & continue"
2. click "next"
3. select custom (advanced)
4. click "next"
5. uncheck “nvidia geforce experience components”
6. uncheck “driver components”
7. uncheck “other components”
8. click "next"

Install the CuDNN library:

This section joins the NVIDIA Developer Program, downloads the CuDNN library, and unzips and moves the files into the CUDA directory.

# join the nvidia developer program
start-process iexplore "https://developer.nvidia.com/developer-program"
# download the cudnn library for cuda toolkit 10 
start-process iexplore https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.2/10.2_07062021/cudnn-10.2-windows10-x64-v8.2.2.26.zip
# unzip the cudnn library for cuda toolkit 10
expand-archive "$home\downloads\cudnn-10.2-windows10-x64-v8.2.2.26.zip" -destinationpath "$home\downloads\cudnn_cuda_toolkit_10\"
# move the dll files
move-item "$home\downloads\cudnn_cuda_toolkit_10\cuda\bin\cudnn*.dll" "c:\program files\nvidia gpu computing toolkit\cuda\v10.2\bin\"
# move the h files
move-item "$home\downloads\cudnn_cuda_toolkit_10\cuda\include\cudnn*.h" "c:\program files\nvidia gpu computing toolkit\cuda\v10.2\include\"
# move the lib files
move-item "$home\downloads\cudnn_cuda_toolkit_10\cuda\lib\x64\cudnn*.lib" "c:\program files\nvidia gpu computing toolkit\cuda\v10.2\lib\x64"
# download the cudnn library for cuda toolkit 11
start-process iexplore https://developer.nvidia.com/compute/machine-learning/cudnn/secure/8.2.2/11.4_07062021/cudnn-11.4-windows-x64-v8.2.2.26.zip
# unzip the cudnn library for cuda toolkit 11
expand-archive "$home\downloads\cudnn-11.4-windows-x64-v8.2.2.26.zip" -destinationpath "$home\downloads\cudnn_cuda_toolkit_11\"
# move the dll files
move-item "$home\downloads\cudnn_cuda_toolkit_11\cuda\bin\cudnn*.dll" "c:\program files\nvidia gpu computing toolkit\cuda\v11.4\bin\"
# move the h files
move-item "$home\downloads\cudnn_cuda_toolkit_11\cuda\include\cudnn*.h" "c:\program files\nvidia gpu computing toolkit\cuda\v11.4\include\"
# move the lib files
move-item "$home\downloads\cudnn_cuda_toolkit_11\cuda\lib\x64\cudnn*.lib" "c:\program files\nvidia gpu computing toolkit\cuda\v11.4\lib\x64"

Install the TensorRT library:

This section downloads the TensorRT library, unzips and moves the files into the CUDA directory, and installs a few required Python packages.

# download the tensorrt library for cuda toolkit 10
start-process iexplore https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.0.1/zip/tensorrt-8.0.1.6.windows10.x86_64.cuda-10.2.cudnn8.2.zip
# unzip the tensorrt library for cuda 10
expand-archive "$home\downloads\tensorrt-8.0.1.6.windows10.x86_64.cuda-10.2.cudnn8.2.zip" "$home\downloads\tensorrt_cuda_toolkit_10\"
# move the dll files
move-item "$home\downloads\tensorrt_cuda_toolkit_10\tensorrt-8.0.1.6\lib\*.dll" "c:\program files\nvidia gpu computing toolkit\cuda\v10.2\bin\"
# download the tensorrt library for cuda toolkit 11
start-process iexplore https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.0.1/zip/tensorrt-8.0.1.6.windows10.x86_64.cuda-11.3.cudnn8.2.zip
# unzip the tensorrt library for cuda 11
expand-archive "$home\downloads\tensorrt-8.0.1.6.windows10.x86_64.cuda-11.3.cudnn8.2.zip" "$home\downloads\tensorrt_cuda_toolkit_11\"
# move the dll files
move-item "$home\downloads\tensorrt_cuda_toolkit_11\tensorrt-8.0.1.6\lib\*.dll" "c:\program files\nvidia gpu computing toolkit\cuda\v11.4\bin\"
# install graph surgeon
python -m pip install "$home\downloads\tensorrt_cuda_toolkit_11\tensorrt-8.0.1.6\graphsurgeon\graphsurgeon-0.4.5-py2.py3-none-any.whl"
# install onnx graph surgeon
python -m pip install "$home\downloads\tensorrt_cuda_toolkit_11\tensorrt-8.0.1.6\onnx_graphsurgeon\onnx_graphsurgeon-0.3.10-py2.py3-none-any.whl"
# install universal framework format
python -m pip install "$home\downloads\tensorrt_cuda_toolkit_11\tensorrt-8.0.1.6\uff\uff-0.6.9-py2.py3-none-any.whl"

Test the GPU on a CUDA sample:

This section configures, builds, and runs the BlackScholes sample.

# open the visual studio file
start-process "c:\programdata\nvidia corporation\cuda samples\v11.4\4_finance\blackscholes\blackscholes_vs2019.sln"
# edit the linker input properties
1. click the "project" menu
2. click "properties"
3. double-click "linker"
4. click "input"
5. click "additional dependencies"
6. click the "down arrow" button
7. click "edit"
# add the cudnn library
1. type "cudnn.lib" at the bottom of the additional dependencies
2. click "ok"
# add the cuda toolkit 11 directory
1. click "cuda c/c++"
2. double-click "cuda toolkit custom dir"
3. enter "c:\program files\nvidia gpu computing toolkit\cuda\v11.4"
4. click "ok"
# build the sample
1. click the “build” menu
2. click “build solution”
# run the sample
cmd /k "c:\programdata\nvidia corporation\cuda samples\v11.4\bin\win64\debug\blackscholes.exe"


Appendix:

This blog exists to provide end-to-end solutions, answer your questions, and speed up your progress in artificial intelligence. It has everything you need to set up your computer and get through the first half of the fastai course. It will introduce you to the state-of-the-art repositories in the subfields of artificial intelligence. It will also cover the second half of the fastai course.

Tutorials: artificial intelligence setup

This section has everything needed to set up your computer.

# linux
01. install and manage multiple python versions
02. install the nvidia cuda driver, toolkit, cudnn, and tensorrt
03. install the jupyter notebook server
04. install virtual environments in jupyter notebook
05. install the python environment for ai and machine learning
06. install the fastai course requirements
# wsl 2
01. install windows subsystem for linux 2
02. install and manage multiple python versions
03. install the nvidia cuda driver, toolkit, cudnn, and tensorrt 
04. install the jupyter notebook home and public server
05. install virtual environments in jupyter notebook
06. install the python environment for ai and machine learning
07. install ubuntu desktop with a graphical user interface
08. install the fastai course requirements
# windows 10
01. install and manage multiple python versions
02. install the nvidia cuda driver, toolkit, cudnn, and tensorrt
03. install the jupyter notebook home and public server
04. install virtual environments in jupyter notebook
05. install the programming environment for ai and machine learning
# mac
01. install and manage multiple python versions
02. install the jupyter notebook server
03. install virtual environments in jupyter notebook
04. install the python environment for ai and machine learning
05. install the fastai course requirements

Tutorials: artificial intelligence course

This section contains answers to the questionnaire at the end of each lesson.

# fastai course
01. chapter 1: your deep learning journey q&a
02. chapter 2: from model to production q&a
03. chapter 3: data ethics q&a
04. chapter 4: under the hood: training a digit classifier q&a
05. chapter 5: image classification q&a
06. chapter 6: other computer vision problems q&a
07. chapter 7: training a state-of-the-art model q&a
08. chapter 8: collaborative filtering deep dive q&a

Tutorials: artificial intelligence repositories

This section contains state-of-the-art repositories in the various subfields.

# repositories related to audio
01. raise audio quality using nu-wave
02. change voices using maskcyclegan-vc
03. clone voices using real-time-voice-cloning toolbox
# repositories related to images
01. achieve 90% accuracy using facedetection-dsfd

Installing CUDA, cuDNN on Windows 10

This covers the installation of CUDA and cuDNN on Windows 10. The article assumes that you already have a CUDA-compatible GPU installed in your PC.

Install the NVIDIA Driver (Required)

Visual Studio is a Prerequisite for CUDA Toolkit

Visual Studio Community is required for the installation of the Nvidia CUDA Toolkit. If you attempt to download and install the CUDA Toolkit for Windows without having installed Visual Studio first, you will get a message prompting you to install it.

Step 1: Check Whether the Graphics Driver Is Installed

To access the GPU from cudatoolkit, a graphics card driver of at least a certain minimum version must be installed.

First, to check your current driver version, open a cmd window or Anaconda Prompt and enter the following:
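nvidia-smi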

The output shows your graphics card driver version.

If nvidia-smi shows no output at all, the driver is not installed.

Step 2: Install the Graphics Driver for Your PC

Check the following on your PC before downloading the driver:

  • Graphics card info: Win key → Device Manager (screenshot)

  • OS info: Win key → System (or right-click My PC → Properties) (screenshot)

  1. Go to the download site: https://www.nvidia.co.kr/Download/index.aspx?lang=kr

  • Select the GPU product and operating system that match your PC (or laptop). Any download type is fine.

  • Drivers are listed by download type (GRD or SD); either one works for this course, so download whichever product the search returns.

  • Install the graphics driver. GeForce Experience is not really relevant to this course, so you may uncheck it.

  • Leave the other options at their defaults and complete the installation.

Step 3: Check the Installed Graphics Driver Version

Once the installation is complete, open Anaconda Prompt in administrator mode and enter the command below. You will see the installed driver version:
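nvidia-smi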

Install CUDA & CuDNN using Conda

Install CUDA and cuDNN with conda in the Anaconda Prompt.

CUDA=10.2.89 (as of the spring 2022 semester)

Here, it is assumed you have already installed Anaconda. If you do not have Anaconda installed, follow

How to Install Anaconda

Install in a Specific Virtual Environment

It is recommended to install the specific CUDA version in a selected Python environment.

Run Anaconda Prompt as administrator and activate the conda virtual environment.

[$ENV_NAME] is your environment name. e.g. conda activate py39

conda activate [$ENV_NAME]

conda install -c anaconda cudatoolkit==10.2.89 cudnn
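As an optional check that both packages landed in the active environment, you can list them:

conda list cudatoolkit
conda list cudnn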
