No val_loss or val_accuracy keys even though I've passed validation_data to model.fit()
Here's the image augmentation code:
batch_size = 16
train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)
# test_datagen = ImageDataGenerator(rescale=1./255)
# Use flow from dataframe
train_generator = train_datagen.flow_from_dataframe(
dataframe=train,
directory="train_images",
x_col="id",
y_col=["not_ready", "ready"],
target_size=(300, 300),
batch_size=batch_size,
class_mode="raw",
color_mode="grayscale",
subset="training")
validation_generator = train_datagen.flow_from_dataframe(
dataframe=train,
directory="train_images",
x_col="id",
y_col=["not_ready", "ready"],
target_size=(300, 300),
batch_size=batch_size,
class_mode="raw",
color_mode="grayscale",
subset="validation")
Set up the model:
early_stopping = EarlyStopping(monitor='loss',mode='min',verbose=1,patience=7, restore_best_weights=True)
opt = Adam(learning_rate=0.0002)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator,
steps_per_epoch=train_generator.n // batch_size,
epochs=100,
validation_data=validation_generator,
validation_steps=validation_generator.n // batch_size,
callbacks=[early_stopping])
And print the history keys:
print(history.history.keys())
But the result is:
dict_keys(['loss', 'accuracy'])
There's no val_loss or val_accuracy even though I've passed validation_data. Why is that, and how can I make them appear?
First: make sure your model runs without the validation_generator. Second: make sure your validation_generator really does have data by iterating through a few samples. Here is a working example:
import tensorflow as tf
flowers = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
img_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255, validation_split=0.2)
BATCH_SIZE = 32
train_generator = img_gen.flow_from_directory(flowers, class_mode='sparse', batch_size=BATCH_SIZE, target_size=(300, 300), shuffle=True, subset="training", color_mode="grayscale")
validation_generator = img_gen.flow_from_directory(flowers, class_mode='sparse', batch_size=BATCH_SIZE, target_size=(300, 300), shuffle=True, subset="validation", color_mode="grayscale")
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu', input_shape=(300, 300, 1)),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(5)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
epochs=10
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='loss',mode='min',verbose=1,patience=7, restore_best_weights=True)
history = model.fit(train_generator,
steps_per_epoch=train_generator.n // BATCH_SIZE,
epochs=1,
validation_data=validation_generator,
validation_steps=validation_generator.n // BATCH_SIZE,
callbacks=[early_stopping])
print(history.history.keys())
Found 2939 images belonging to 5 classes.
Found 731 images belonging to 5 classes.
91/91 [==============================] - 44s 462ms/step - loss: 1.8690 - accuracy: 0.2298 - val_loss: 1.6060 - val_accuracy: 0.2443
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
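The "iterate through a few samples" check above can be sketched in plain Python. This is a minimal, framework-free illustration: check_generator and MockGen are hypothetical names invented here, and MockGen only mimics the two things the check relies on (an n attribute counting samples, and batch iteration), which both Keras generators provide.

```python
def check_generator(gen, num_batches=3):
    """Pull a few batches and report their shapes; raise if the generator is empty."""
    # Keras generators expose .n, the number of samples found on disk.
    # If it is 0, flow_from_directory/flow_from_dataframe matched nothing.
    if getattr(gen, "n", 0) == 0:
        raise ValueError("generator found 0 samples - check directory/subset/split")
    for i, (x, y) in zip(range(num_batches), gen):
        # Real batches are NumPy arrays with .shape; fall back to len() for the mock.
        print(f"batch {i}: x={getattr(x, 'shape', len(x))}, y={getattr(y, 'shape', len(y))}")

# Minimal mock standing in for validation_generator (illustrative only):
class MockGen:
    n = 8  # pretend 8 samples were found

    def __iter__(self):
        while True:
            yield ([[0.0] * 4] * 2, [0, 1])  # fake (images, labels) batch

check_generator(MockGen())  # prints a few batch shapes instead of hanging or raising
```

If this raises for your real validation_generator, the problem is the data split, not model.fit.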
Also check the validation_steps parameter in model.fit: if it is 0, for example, you will not see the validation loss and accuracy in history.history.keys(). If that is the case, try not setting the parameter at all:
history = model.fit(train_generator,
steps_per_epoch=train_generator.n // BATCH_SIZE,
epochs=1,
validation_data=validation_generator,
callbacks=[early_stopping])