I am using tf.keras.preprocessing.image_dataset_from_directory to load my large dataset. The problem is that training is very slow when I pass this dataset to fit_generator(), even though I am using the Google Colab GPU. The code is:
image_size = (224, 224)
batch_size = 32
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    '/content/drive/My Drive/dataScience/september exam/data/trainImg',
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
for the training:
model.fit_generator(train_dataset,
                    epochs=50,
                    verbose=1)
You can try shrinking the image size to 128x128 and reducing the batch_size. Also, fit_generator() is deprecated; to make use of the GPU you should call model.fit() instead.
Hope this helps with your time optimization.
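A minimal sketch of the suggested change: model.fit() on a cached, prefetched tf.data pipeline. The synthetic random images, the num_classes value, and the tiny model are placeholder assumptions, not the asker's actual setup; in practice you would keep the image_dataset_from_directory call and just append .cache().prefetch(...) to its result.

```python
import tensorflow as tf

image_size = (128, 128)   # smaller images, as the answer suggests
batch_size = 32
num_classes = 2           # assumption: replace with your real class count

# Stand-in for image_dataset_from_directory: 64 random images with labels.
images = tf.random.uniform((64, *image_size, 3))
labels = tf.random.uniform((64,), maxval=num_classes, dtype=tf.int32)

train_dataset = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .batch(batch_size)
    .cache()                     # keep decoded batches in memory after epoch 1
    .prefetch(tf.data.AUTOTUNE)  # overlap input preparation with GPU training
)

# Toy model, just to make the example runnable end to end.
model = tf.keras.Sequential([
    tf.keras.layers.Input((*image_size, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# model.fit() accepts a tf.data.Dataset directly; no fit_generator() needed.
history = model.fit(train_dataset, epochs=2, verbose=0)
print(len(history.history["loss"]))
```

The .cache() call avoids re-decoding images every epoch and .prefetch() keeps the GPU fed while the CPU prepares the next batch; these two lines are usually where most of the speedup comes from.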