
Weird behaviour of my CNN's validation accuracy and loss during the training phase

Here is the architecture of my network:

# Imports needed to run this snippet (standalone Keras)
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

cnn3 = Sequential()
# Convolutional feature extractor
cnn3.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
cnn3.add(MaxPooling2D(pool_size=(2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
cnn3.add(MaxPooling2D(pool_size=(2, 2)))
cnn3.add(Dropout(0.25))
cnn3.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
cnn3.add(Dropout(0.2))
# Classifier head: 4-way softmax output
cnn3.add(Flatten())
cnn3.add(Dense(128, activation='relu'))
cnn3.add(Dropout(0.4)) # 0.3
cnn3.add(Dense(4, activation='softmax'))
cnn3.compile(loss=keras.losses.categorical_crossentropy,
             optimizer=keras.optimizers.Adam(),
             metrics=['accuracy'])
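
To sanity-check how the layers are wired, the output shape and parameter count of each layer can be printed once the model is built:

cnn3.summary()  # prints each layer's output shape and parameter count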

When I plotted the training and validation accuracy and loss, I got the following two figures:

I cannot understand why the validation accuracy and loss do not follow the training accuracy and loss.

[Figure 1: Loss]

[Figure 2: Accuracy]
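
For reference, curves like the two above are typically produced from the History object returned by fit(); a minimal sketch, assuming x_train, y_train, x_val and y_val are already prepared (they are not shown in the question):

import matplotlib.pyplot as plt

history = cnn3.fit(x_train, y_train,
                   batch_size=64,                    # illustrative value, not from the question
                   epochs=50,                        # illustrative value, not from the question
                   validation_data=(x_val, y_val))

# Depending on the Keras version, the accuracy keys are 'accuracy'/'val_accuracy' or 'acc'/'val_acc'.
plt.figure()
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.legend()

plt.figure()
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.legend()
plt.show()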

Your validation loss and accuracy are following the training loss and accuracy. There is just more jitter in the validation curves because the validation set is smaller. The offset between training and validation might be due to some degree of overfitting.
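
If the offset really is overfitting, one common mitigation is to stop training when the validation loss stops improving. A minimal sketch using the Keras EarlyStopping callback (the patience value and the data variables are illustrative, not taken from the post):

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss',        # watch the validation loss
                           patience=5,                # illustrative value
                           restore_best_weights=True) # roll back to the best epoch

cnn3.fit(x_train, y_train,
         epochs=50,
         validation_data=(x_val, y_val),
         callbacks=[early_stop])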
