Weird behaviour for my CNN validation accuracy and loss function during training phase

Here is the architecture of my network:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

cnn3 = Sequential()
cnn3.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
cnn3.add(MaxPooling2D((2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
cnn3.add(MaxPooling2D(pool_size=(2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
cnn3.add(Dropout(0.2))
cnn3.add(Flatten())
cnn3.add(Dense(128, activation='relu'))
cnn3.add(Dropout(0.4)) # 0.3
cnn3.add(Dense(4, activation='softmax'))
cnn3.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

When I plotted the training and validation accuracy and loss, I got the following two figures:

I cannot understand why the validation accuracy and loss do not follow the training accuracy and loss.

[Figure 1: training vs. validation loss]

[Figure 2: training vs. validation accuracy]
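For reference, curves like the two figures above are typically drawn from the `History` object that `model.fit` returns (its `history` attribute is a dict of per-epoch metrics). The sketch below uses a stand-in `history_dict` with the keys Keras records, since the actual training run is not shown:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, safe on servers
import matplotlib.pyplot as plt

# Stand-in for history.history from a real model.fit(...) call with
# validation_data; the key names match what Keras records.
history_dict = {
    "loss": [1.2, 0.8, 0.6, 0.5],
    "val_loss": [1.1, 0.9, 0.8, 0.85],
    "accuracy": [0.45, 0.62, 0.71, 0.78],
    "val_accuracy": [0.48, 0.58, 0.66, 0.64],
}

def plot_curves(hist, metric):
    """Plot training vs. validation curves for one metric and save to PNG."""
    epochs = range(1, len(hist[metric]) + 1)
    plt.figure()
    plt.plot(epochs, hist[metric], label="train " + metric)
    plt.plot(epochs, hist["val_" + metric], label="val " + metric)
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.legend()
    plt.savefig(metric + ".png")
    plt.close()

plot_curves(history_dict, "loss")
plot_curves(history_dict, "accuracy")
```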

Your validation curves are following the training loss and accuracy. There is simply more jitter in the validation curves because the validation set is smaller. The offset between training and validation may be due to some degree of overfitting.
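One common response to that train/validation offset is to stop training once the validation loss stops improving. Keras offers this as the `EarlyStopping` callback; the stand-alone sketch below shows the underlying logic on a list of per-epoch validation losses, so you can see when it would have halted the run above:

```python
def best_stop_epoch(val_losses, patience=3):
    """Return the 0-based epoch with the best validation loss, scanning
    until the loss fails to improve for `patience` consecutive epochs."""
    best = float("inf")
    best_epoch = 0
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # validation loss has plateaued or risen
    return best_epoch

# Validation loss improves, then plateaus and climbs back up:
print(best_stop_epoch([1.0, 0.8, 0.7, 0.72, 0.75, 0.8, 0.9]))  # -> 2
```

In Keras this corresponds to `EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)` passed via the `callbacks` argument of `fit`.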
