
How can I use a GPU with Keras?

My problem is that I am trying to train a convolutional neural network with Keras in Google Colab that can distinguish between dogs and cats, but when it reaches the training phase my model takes a very long time to train. I would like to know how to use the GPU correctly so as to shorten the training time.


from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/training_set',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/test_set',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')
with tf.device('/device:GPU:0'):  # I added this block after finding it while researching on the internet
  
  classifier = Sequential()
  classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
  classifier.add(MaxPooling2D(pool_size = (2, 2)))
  classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
  classifier.add(MaxPooling2D(pool_size = (2, 2)))
  classifier.add(Flatten())
  classifier.add(Dense(units = 128, activation = 'relu'))
  classifier.add(Dense(units = 1, activation = 'sigmoid'))
  classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
  classifier.fit(training_set,
                         steps_per_epoch = 8000,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000)

Previously I did not have this part of the code wrapped in "with tf.device('/device:GPU:0')"; I found it in an example on the internet, but the training is still slow. I have already checked for an available GPU with:


device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')


To select the GPU in Google Colab -
Select Edit - Notebook settings - Hardware accelerator - GPU - Save
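
After switching the runtime, you can quickly confirm that TensorFlow actually sees the GPU. This is a small check using TensorFlow's standard device-listing API:

import tensorflow as tf

# Lists the GPU devices visible to TensorFlow; an empty list means
# the Colab runtime has no GPU attached
print(tf.config.list_physical_devices('GPU'))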

ImageDataGenerator is not recommended for new code. Instead, you can apply the same augmentations directly as layers inside the model, as shown below:

import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = 64  # matches the (64, 64) target size used in the question

classifier = tf.keras.Sequential([
  # data augmentation layers
  layers.Resizing(IMG_SIZE, IMG_SIZE),
  layers.Rescaling(1./255),
  layers.RandomFlip("horizontal"),
  layers.RandomZoom(0.1),
  # model building layers (no input_shape needed: Resizing fixes the spatial size)
  layers.Conv2D(32, (3, 3), activation = 'relu'),
  layers.MaxPooling2D(pool_size = (2, 2)),
  layers.Conv2D(32, (3, 3), activation = 'relu'),
  layers.MaxPooling2D(pool_size = (2, 2)),
  layers.Flatten(),
  layers.Dense(units = 128, activation = 'relu'),
  layers.Dense(units = 1, activation = 'sigmoid')
])

Also, you should use image_dataset_from_directory to load the image dataset; it builds a tf.data.Dataset from the image files in a directory. Please refer to the gist to reproduce the code.
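
As a minimal sketch of that approach (the directory paths, image size, and batch size are taken from the question; label_mode = 'binary' matches the two-class dog/cat setup):

import tensorflow as tf

# Builds batched tf.data.Dataset objects of (image, label) pairs directly
# from the directory trees used in the question
training_set = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/training_set',
    image_size = (64, 64),
    batch_size = 32,
    label_mode = 'binary')

test_set = tf.keras.utils.image_dataset_from_directory(
    '/content/drive/MyDrive/Colab Notebooks/files/dataset_CNN/test_set',
    image_size = (64, 64),
    batch_size = 32,
    label_mode = 'binary')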

Note: fit() automatically computes steps_per_epoch from the training set based on batch_size:
steps_per_epoch = len(training_dataset) // batch_size
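
Putting it together, a sketch of compiling and fitting with the classifier and datasets defined above; steps_per_epoch is simply omitted so that fit() runs one full pass over the dataset per epoch:

# steps_per_epoch is omitted on purpose: fit() infers it from the dataset
classifier.compile(optimizer = 'adam',
                   loss = 'binary_crossentropy',
                   metrics = ['accuracy'])

classifier.fit(training_set,
               epochs = 25,
               validation_data = test_set)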

