
How to activate the use of a GPU on AWS EC2 instance?

I am using AWS to train a CNN on a custom dataset. I launched a p2.xlarge instance, uploaded my (Python) scripts to the virtual machine, and I am running my code via the CLI.

I activated a virtual environment for TensorFlow (+Keras 2) with Python 3 (CUDA 10.0 and Intel MKL-DNN), which is one of the default environments AWS provides.

I am now running my code to train the network, but the GPU does not appear to be active: training runs just as slowly as it does locally on a CPU.

This is the script that I am running:

https://github.com/AntonMu/TrainYourOwnYOLO/blob/master/2_Training/Train_YOLO.py

I also tried altering it by putting with tf.device('/device:GPU:0'): after the parser (line 142) and indenting everything below it. However, this doesn't seem to have changed anything.
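(For reference, a minimal tf.device block looks like the sketch below; the ops are illustrative and not taken from the training script. Note that tf.device only pins ops to a device; it cannot enable GPU support that the installed package lacks.)

import tensorflow as tf

# Pin the ops in this block to the first GPU. The device string must be
# exactly '/device:GPU:0', with no space before the index.
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)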

Any tips on how to activate the GPU (or check if the GPU is activated)?

Check out this answer for listing available GPUs.

from tensorflow.python.client import device_lib

def get_available_gpus():
    # Query every device TensorFlow can see and keep only the GPUs.
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']
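
On a correctly configured p2.xlarge this should return the instance's K80; the call below is a hypothetical usage of the helper above:

print(get_available_gpus())
# e.g. ['/device:GPU:0'] -- an empty list [] means TensorFlow cannot see
# the GPU, which usually points at the CPU-only tensorflow package.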

You can also use PyTorch's CUDA bindings to check availability, list the current device, and, if necessary, set the device.

import torch

# True if this PyTorch build has CUDA support and a GPU is visible.
print(torch.cuda.is_available())
# Index of the currently selected GPU (0 on a single-GPU p2.xlarge).
print(torch.cuda.current_device())
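
If you need to pin work to a specific GPU, a minimal sketch (the tensor here is purely illustrative) is:

import torch

# Fall back to the CPU when no GPU is visible; on a single-GPU instance
# such as p2.xlarge the GPU is cuda:0.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Move tensors (or a model) onto the chosen device explicitly.
x = torch.randn(2, 3).to(device)
print(x.device)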

In the end the problem was my TensorFlow package: the CPU-only tensorflow package was installed. I had to uninstall tensorflow and install tensorflow-gpu. After that, the GPU was used automatically.

For documentation see: https://www.tensorflow.org/install/gpu
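
A quick way to confirm the switch worked (assuming TF 1.x, which matches the CUDA 10.0 environment above):

import tensorflow as tf

# Both calls are TF 1.x APIs. is_gpu_available() returns True once
# tensorflow-gpu is installed and can reach the CUDA driver;
# gpu_device_name() returns e.g. '/device:GPU:0', or '' if none is found.
print(tf.test.is_gpu_available())
print(tf.test.gpu_device_name())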
