Tensorflow 2.0 train model on single GPU

I want to train a sequential TensorFlow (version 2.3.0) model on a single NVIDIA graphics card (RTX 2080 Super). I am using the following code snippet to build and train the model. However, every time I run this code I do not see any GPU utilization. Any suggestions on how to modify my code so it runs on one GPU?

strategy = tf.distribute.OneDeviceStrategy(device="/GPU:0")
with strategy.scope():
    num_classes = len(pd.unique(cats.No))
    model = BuildModel((image_height, image_width, 3), num_classes)
    model.summary()
    model = train_model(model, valid_generator, train_generator, EPOCHS, BATCH_SIZE)

Run the code below to see if TensorFlow detects your GPU.

import sys
import tensorflow as tf
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
# Deprecated in TF 2.x; kept here for comparison, prefer the line above.
print(tf.test.is_gpu_available())
print(sys.version)  # replaces the notebook-only `!python --version`
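If no GPU shows up in the list above, the usual causes are a CPU-only TensorFlow build or a CUDA/driver mismatch. As a minimal diagnostic sketch (using only the public TF 2.x API; the matrix sizes are arbitrary), you can check whether the installed wheel was built with CUDA support and then log where each op is actually placed:

```python
import tensorflow as tf

# A CPU-only wheel will never see the RTX 2080, no matter what
# strategy scope the model is built under.
print("Built with CUDA:", tf.test.is_built_with_cuda())

gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

if gpus:
    # Log the device each op runs on, to confirm kernels land on /GPU:0.
    tf.debugging.set_log_device_placement(True)
    a = tf.random.uniform((1000, 1000))
    b = tf.matmul(a, a)  # the placement log for this op is printed to stderr
```

If `Built with CUDA` is `False`, reinstall a GPU-enabled TensorFlow wheel matching your CUDA/cuDNN versions; if it is `True` but the GPU list is empty, check the NVIDIA driver and CUDA toolkit installation.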
