
How can I use a GPU with Keras?

My problem is that I am trying to train a convolutional neural network with Keras in Google Colab that can distinguish between dogs and cats, but when it reaches the training stage my model takes a very long time to train. I want to know how to use the GPU correctly so as to shorten the training time.


from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')
with tf.device('/device:GPU:0'):  # I added this block after finding it in an example online
  
  classifier = Sequential()
  classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
  classifier.add(MaxPooling2D(pool_size = (2, 2)))
  classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
  classifier.add(MaxPooling2D(pool_size = (2, 2)))
  classifier.add(Flatten())
  classifier.add(Dense(units = 128, activation = 'relu'))
  classifier.add(Dense(units = 1, activation = 'sigmoid'))
  classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
  classifier.fit(training_set,
                         steps_per_epoch = 8000,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000)

Previously I did not put this part of the code inside "with tf.device('/device:GPU:0')"; I saw it in an example on the internet, but training is still slow. I have already checked the available GPU using:


device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')


To select a GPU in Google Colab:
Select Edit - Notebook settings - Hardware accelerator - GPU - Save
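
Besides tf.test.gpu_device_name(), you can also list the devices TensorFlow sees with tf.config.list_physical_devices, a stable TensorFlow 2 API; a minimal sketch:


import tensorflow as tf

# An empty list means no GPU runtime is attached to the notebook.
gpus = tf.config.list_physical_devices('GPU')
print('GPUs available:', gpus)
if not gpus:
  raise SystemError('GPU device not found')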

ImageDataGenerator is not recommended for new code. Instead, you can apply these augmentations directly through layers inside the model, as shown below:

from tensorflow.keras import layers

IMG_SIZE = 64  # matches the 64x64 target_size used in the question

classifier = tf.keras.Sequential([
# data augmentation layers
  layers.Resizing(IMG_SIZE, IMG_SIZE),
  layers.Rescaling(1./255),
  layers.RandomFlip("horizontal"),
  layers.RandomZoom(0.1),
# model building layers
  layers.Conv2D(32, (3, 3), activation = 'relu'),
  layers.MaxPooling2D(pool_size = (2, 2)),
  layers.Conv2D(32, (3, 3), activation = 'relu'),
  layers.MaxPooling2D(pool_size = (2, 2)),
  layers.Flatten(),
  layers.Dense(units = 128, activation = 'relu'),
  layers.Dense(units = 1, activation = 'sigmoid')
])
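
Note that the RandomFlip and RandomZoom layers are only active during training; at inference time they pass inputs through unchanged, so the same model can be evaluated and served as-is. The answer omits the compile step; a minimal sketch mirroring the settings from the question:


classifier.compile(optimizer = 'adam',
                   loss = 'binary_crossentropy',
                   metrics = ['accuracy'])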

Additionally, you should import the image dataset with image_dataset_from_directory, which generates a tf.data.Dataset from the image files in a directory. Please refer to the gist for the reproduced code.
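
A minimal sketch of loading the same directories with image_dataset_from_directory (exposed as tf.keras.utils.image_dataset_from_directory in recent TensorFlow releases) and training on the resulting tf.data.Dataset; the paths, image size, and batch size are taken from the question, and label_mode = 'binary' corresponds to the original class_mode:


training_set = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/training_set',
    label_mode = 'binary',
    image_size = (64, 64),
    batch_size = 32)

test_set = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/test_set',
    label_mode = 'binary',
    image_size = (64, 64),
    batch_size = 32)

# No manual rescaling is needed: the model above already contains a Rescaling layer.
# steps_per_epoch is omitted on purpose; fit() infers it from the dataset (see the note below).
classifier.fit(training_set,
               epochs = 25,
               validation_data = test_set)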

Note: fit() computes steps_per_epoch from the training dataset automatically, based on the batch size:
steps_per_epoch = number_of_samples // batch_size
For example, with 8,000 training images and a batch size of 32 that is 250 steps per epoch; passing steps_per_epoch = 8000 as in the question makes each epoch process 8,000 batches, roughly 32 times more work than one pass over the data, which is a likely cause of the slow training.
