
How to use the GPU to run Keras model.predict()

Using the TensorFlow CIFAR CNN demonstration, I verified that TensorFlow was properly using my GPU. It used the GPU to run model.fit(), and HWiNFO64 showed about 50% GPU usage. However, if I then add this cell to the notebook, which uses the model to predict the label of each image in the test set:

import numpy as np
for img in test_images:
    prediction = model.predict(np.expand_dims(img, axis=0))  # Here: one predict() call per image
    print(class_names[np.argmax(prediction)])

I see only about 1% GPU usage (and that is mostly Chrome and other processes). Is there a way for me to run model.predict() on the GPU, or is there an alternative that gives me the model's output for a single input?

Your code is already running on the GPU. It is a misconception that GPU utilization tells you whether code is running on the GPU or not.

The problem is that doing one predict call per image is very inefficient, because almost no parallelism can be exploited on the GPU. If you pass a whole array of images instead, the work is sent to the GPU in batches, the images in each batch are processed in parallel, and GPU utilization goes up.

GPUs only accelerate sufficiently parallel workloads, so your best option is to pass more images per call to predict, as in the sketch below.
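A minimal sketch of batched inference, assuming model, test_images, and class_names come from the CIFAR CNN tutorial (test_images being a NumPy array of shape (num_images, 32, 32, 3)); the batch_size of 64 is an arbitrary example value:

import numpy as np

# One predict() call over the whole test set; Keras splits it into batches
# internally, so the GPU processes many images per kernel launch.
predictions = model.predict(test_images, batch_size=64)

# argmax over the class axis yields one label index per image.
for label_idx in np.argmax(predictions, axis=1):
    print(class_names[label_idx])

Larger batch sizes generally give higher GPU utilization, up to the limit of available GPU memory.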
