“WARNING:tensorflow:Your input ran out of data” error appearing when training Keras model
The full warning is:

WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least steps_per_epoch * epochs batches (in this case, 3400 batches). You may need to use the repeat() function when building your dataset.
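For context, the 3400 figure in the warning is simply steps_per_epoch * epochs as passed to fit(); a quick sketch of that arithmetic using the numbers from the code below:

```python
# Values taken from the question's setup below.
nb_train_samples = 2584
batch_size_train = 76
epochs = 100

steps_per_epoch = nb_train_samples // batch_size_train  # 2584 // 76 = 34
batches_needed = steps_per_epoch * epochs               # 34 * 100 = 3400
print(batches_needed)  # 3400 -- the number quoted in the warning
```

So the input pipeline must be able to yield 3400 batches over the whole run, or training is interrupted with this warning.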
# importing libraries
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K
import tensorflow as tf

train_data_dir = 'marvel/train'
validation_data_dir = 'marvel/valid'
nb_train_samples = 2584
nb_validation_samples = 451
epochs = 100
batch_size_train = 76
batch_size_val = 41

if K.image_data_format() == 'channels_first':
    input_shape = (3, 200, 200)
else:
    input_shape = (200, 200, 3)

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=input_shape),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(8, activation='softmax')
])

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(200, 200),
    batch_size=batch_size_train,
    classes=['black widow', 'captain america', 'doctor strange', 'hulk',
             'iron man', 'loki', 'spiderman', 'thanos'],
    class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(200, 200),
    batch_size=batch_size_val,
    class_mode='categorical')

model.fit(train_generator,
          steps_per_epoch=nb_train_samples // batch_size_train,
          epochs=epochs,
          validation_data=validation_generator,
          validation_steps=nb_validation_samples // batch_size_val)

model.save_weights('characterImg.h5')
print("Saved model characterImg.h5")
The above is my code. Can anyone help me understand what the error actually means? I'm having a lot of trouble with it. Thank you! (Let me know if you need more info.)
Okay, I'm not sure if this will work for everyone, but to fix this I simply deleted the line

steps_per_epoch=nb_train_samples // batch_size_train,

and it worked. I realise it's not ideal, but for those looking for a desperate fix, this might do it for you.
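For anyone wondering why dropping that line helps: when steps_per_epoch is omitted, Keras infers the step count from len(generator), which for a flow_from_directory iterator is ceil(samples / batch_size), so it never requests more batches than the directory actually holds. A minimal sketch of that arithmetic (variable names match the question):

```python
import math

# Numbers from the question; when steps_per_epoch is omitted, Keras
# falls back to len(generator) == ceil(samples / batch_size).
nb_train_samples = 2584
batch_size_train = 76

inferred_steps = math.ceil(nb_train_samples / batch_size_train)
print(inferred_steps)  # 34
```

If the directory contains fewer images than nb_train_samples claims, the inferred count shrinks accordingly, which is why removing the explicit value avoids the warning.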
Looks like your dataset length is less than your nb_train_samples / nb_validation_samples. Add a repeat() call before fitting:

train_generator = train_generator.repeat()
validation_generator = validation_generator.repeat()

(Note: repeat() is a method on tf.data.Dataset objects, so this applies when the input pipeline is built with tf.data; the ImageDataGenerator iterators in the question don't have a repeat() method.)
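To see why repeat() prevents the warning: a finite input pipeline is exhausted after one pass, while a repeated one cycles forever. A toy illustration using itertools.cycle as a stand-in for tf.data's repeat() (hypothetical data, not the poster's generators):

```python
import itertools

def batches():
    # toy stand-in for a finite dataset of 3 batches
    yield from [1, 2, 3]

finite = list(batches())
print(len(finite))  # 3 -- asking for a 4th batch would "run out of data"

repeated = itertools.cycle(batches())  # analogous to dataset.repeat()
first_seven = list(itertools.islice(repeated, 7))
print(first_seven)  # [1, 2, 3, 1, 2, 3, 1]
```

With the cycled version, any steps_per_epoch * epochs demand can be satisfied, which is exactly what the warning message is asking for.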