
How to make custom Python code utilize the GPU when using PyTorch tensors and matrix functions

I've created a CNN from scratch using only PyTorch tensors and matrix-operation functions, in the hope of utilizing the GPU. To my surprise, the GPU stays at 0% utilization and my training doesn't seem to be any faster than running on my CPU.

Before Training:

[screenshots]

While Training:

[screenshots]

I've double-checked that CUDA is available and that it is installed.
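For reference, a check along these lines (a minimal sketch) confirms that PyTorch can see the GPU:

import torch

print(torch.cuda.is_available())      # expect True if CUDA is set up correctly
print(torch.version.cuda)             # CUDA version PyTorch was built against
print(torch.cuda.get_device_name(0))  # name of the detected GPU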

Graphics card: NVIDIA GeForce 2070 SUPER

Processor: Intel i5 10400

Coding Environment: Jupyter Notebook

CUDA & cuDNN version: 11.0

PyTorch version: 1.6.0

You have to move your model and data to the GPU, using

model.cuda()   # move all model parameters to the GPU
x = x.cuda()   # move the input batch to the GPU
y = y.cuda()   # move the target batch to the GPU

You seem to be doing this within the forward and backward calls. To make sure the model is actually running on the GPU, monitor GPU usage continuously with the shell command

watch -n 5 nvidia-smi
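Since the model here is built from raw tensors rather than an nn.Module, the same idea applies to every weight tensor: create (or move) them on the GPU once, before training, and move each batch there too. A minimal sketch with hypothetical names (W, b, and a dummy batch stand in for the real model and data):

import torch
import torch.nn.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical from-scratch conv layer: weight tensors created directly on the GPU
W = torch.randn(16, 3, 3, 3, device=device, requires_grad=True)  # conv kernels
b = torch.zeros(16, device=device, requires_grad=True)           # biases

# Dummy batch standing in for the real training data
x = torch.randn(8, 3, 32, 32).to(device)   # inputs moved to the GPU
y = torch.randint(0, 10, (8,)).to(device)  # targets moved to the GPU

out = F.conv2d(x, W, b)        # runs on the GPU because all operands live there
print(out.device, W.is_cuda)   # reports cuda:0 / True when a GPU is available

If any operand of an operation is still on the CPU, PyTorch raises a device-mismatch error rather than silently running on the CPU, so a steady 0% GPU utilization usually means the tensors were never moved, or each individual operation is too small for the GPU to register meaningful load.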
