
I have trained a CNN model for 10 classes. The model does well on training accuracy but is stuck on validation accuracy.

The 10 classes are:

classes = ['Begin', 'Choose', 'Connection', 'Navigation', 'Next', 'Previous', 'Start', 'Stop', 'Hello', 'Web'] 

Initially, I had 1,100 images for training, 300 for validation, and 100 for testing. I then performed augmentation on the training dataset, bringing it up to 3,300 images; the rest stayed the same.

The model I am using is:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Convolution3D, BatchNormalization,
                                     MaxPooling3D, Dropout, Flatten, Dense)

def get_model(model_name, dropout_rate):
    # model_name is only used for the log / checkpoint paths further below
    model = Sequential()
    model.add(Convolution3D(64, (5, 5, 5), padding='same', activation='relu',
                            input_shape=(22, 64, 64, 1)))
    model.add(BatchNormalization())
    model.add(MaxPooling3D(pool_size=(3, 3, 3)))
    model.add(Dropout(dropout_rate))
    model.add(Convolution3D(128, (5, 5, 5), padding='same', activation='relu'))
    model.add(Convolution3D(128, (5, 5, 5), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling3D(pool_size=(3, 3, 3)))
    model.add(Dropout(dropout_rate))
    model.add(Flatten())
    model.add(Dense(10, activation='softmax'))
    return model

with a dropout rate of 0.4.

import time
import tensorflow as tf
from tensorflow.keras.callbacks import (TensorBoard, ModelCheckpoint, EarlyStopping,
                                        CSVLogger, ReduceLROnPlateau)

opt = tf.keras.optimizers.Adam(learning_rate=1e-4)
model = get_model(model_name, 0.4)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

# Initialize Keras callbacks
log_dir = "/content/gdrive/MyDrive/Lip Reading/logs/{}".format(model_name)

tensorboard = TensorBoard(log_dir=log_dir,
                          write_graph=True, write_images=True, histogram_freq=1)
filepath = models_dir + model_name + ".h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1,
                             save_best_only=True, save_weights_only=True, mode='max')

earlyStopping = EarlyStopping(monitor='val_accuracy', patience=10, verbose=1, mode='max')

csv_logger = CSVLogger('/content/gdrive/MyDrive/Lip Reading/outputs/log_{}.csv'.format(model_name), append=True, separator=';')

learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience=3, verbose=1, factor=0.5, min_lr=0.0001)
nb_epoch = 30
batch_size = 32

t1 = time.time()
history = model.fit(
          X_train, y_train,
          epochs=nb_epoch,
          validation_data=(X_val, y_val),
          verbose=True,
          callbacks=[tensorboard, checkpoint, earlyStopping, csv_logger]  # learning_rate_reduction not used
)
t2 = time.time()
print()
print(f"Training time : {t2 - t1} secs.")

Model history:

Epoch 1/30
104/104 [==============================] - ETA: 0s - loss: 3.5245 - accuracy: 0.1379
Epoch 1: val_accuracy improved from -inf to 0.10000, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 58s 425ms/step - loss: 3.5245 - accuracy: 0.1379 - val_loss: 2.5218 - val_accuracy: 0.1000
Epoch 2/30
104/104 [==============================] - ETA: 0s - loss: 2.8112 - accuracy: 0.1942
Epoch 2: val_accuracy did not improve from 0.10000
104/104 [==============================] - 42s 404ms/step - loss: 2.8112 - accuracy: 0.1942 - val_loss: 2.8191 - val_accuracy: 0.1000
Epoch 3/30
104/104 [==============================] - ETA: 0s - loss: 2.3391 - accuracy: 0.3130
Epoch 3: val_accuracy improved from 0.10000 to 0.13333, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 43s 416ms/step - loss: 2.3391 - accuracy: 0.3130 - val_loss: 2.7617 - val_accuracy: 0.1333
Epoch 4/30
104/104 [==============================] - ETA: 0s - loss: 1.7170 - accuracy: 0.4715
Epoch 4: val_accuracy did not improve from 0.13333
104/104 [==============================] - 43s 415ms/step - loss: 1.7170 - accuracy: 0.4715 - val_loss: 2.5163 - val_accuracy: 0.1100
Epoch 5/30
104/104 [==============================] - ETA: 0s - loss: 1.2597 - accuracy: 0.5833
Epoch 5: val_accuracy improved from 0.13333 to 0.29000, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 421ms/step - loss: 1.2597 - accuracy: 0.5833 - val_loss: 1.9042 - val_accuracy: 0.2900
Epoch 6/30
104/104 [==============================] - ETA: 0s - loss: 0.9902 - accuracy: 0.6609
Epoch 6: val_accuracy improved from 0.29000 to 0.44333, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 425ms/step - loss: 0.9902 - accuracy: 0.6609 - val_loss: 1.6684 - val_accuracy: 0.4433
Epoch 7/30
104/104 [==============================] - ETA: 0s - loss: 0.7206 - accuracy: 0.7515
Epoch 7: val_accuracy improved from 0.44333 to 0.48333, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 427ms/step - loss: 0.7206 - accuracy: 0.7515 - val_loss: 1.8783 - val_accuracy: 0.4833
Epoch 8/30
104/104 [==============================] - ETA: 0s - loss: 0.5619 - accuracy: 0.8097
Epoch 8: val_accuracy improved from 0.48333 to 0.50333, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 428ms/step - loss: 0.5619 - accuracy: 0.8097 - val_loss: 1.7080 - val_accuracy: 0.5033
Epoch 9/30
104/104 [==============================] - ETA: 0s - loss: 0.4290 - accuracy: 0.8482
Epoch 9: val_accuracy improved from 0.50333 to 0.58667, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 425ms/step - loss: 0.4290 - accuracy: 0.8482 - val_loss: 1.6477 - val_accuracy: 0.5867
Epoch 10/30
104/104 [==============================] - ETA: 0s - loss: 0.4348 - accuracy: 0.8485
Epoch 10: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 421ms/step - loss: 0.4348 - accuracy: 0.8485 - val_loss: 2.0925 - val_accuracy: 0.5200
Epoch 11/30
104/104 [==============================] - ETA: 0s - loss: 0.3185 - accuracy: 0.8900
Epoch 11: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 421ms/step - loss: 0.3185 - accuracy: 0.8900 - val_loss: 1.9539 - val_accuracy: 0.5767
Epoch 12/30
104/104 [==============================] - ETA: 0s - loss: 0.2538 - accuracy: 0.9042
Epoch 12: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 419ms/step - loss: 0.2538 - accuracy: 0.9042 - val_loss: 1.8285 - val_accuracy: 0.5833
Epoch 13/30
104/104 [==============================] - ETA: 0s - loss: 0.1804 - accuracy: 0.9324
Epoch 13: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 420ms/step - loss: 0.1804 - accuracy: 0.9324 - val_loss: 1.7743 - val_accuracy: 0.5667
Epoch 14/30
104/104 [==============================] - ETA: 0s - loss: 0.1746 - accuracy: 0.9427
Epoch 14: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 420ms/step - loss: 0.1746 - accuracy: 0.9427 - val_loss: 1.7986 - val_accuracy: 0.5633
Epoch 15/30
104/104 [==============================] - ETA: 0s - loss: 0.1396 - accuracy: 0.9491
Epoch 15: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 419ms/step - loss: 0.1396 - accuracy: 0.9491 - val_loss: 2.1228 - val_accuracy: 0.5867
Epoch 16/30
104/104 [==============================] - ETA: 0s - loss: 0.1172 - accuracy: 0.9582
Epoch 16: val_accuracy did not improve from 0.58667
104/104 [==============================] - 44s 419ms/step - loss: 0.1172 - accuracy: 0.9582 - val_loss: 2.5140 - val_accuracy: 0.5567
Epoch 17/30
104/104 [==============================] - ETA: 0s - loss: 0.1153 - accuracy: 0.9570
Epoch 17: val_accuracy did not improve from 0.58667
104/104 [==============================] - 43s 418ms/step - loss: 0.1153 - accuracy: 0.9570 - val_loss: 2.3834 - val_accuracy: 0.5733
Epoch 18/30
104/104 [==============================] - ETA: 0s - loss: 0.1320 - accuracy: 0.9545
Epoch 18: val_accuracy improved from 0.58667 to 0.59000, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 421ms/step - loss: 0.1320 - accuracy: 0.9545 - val_loss: 2.4330 - val_accuracy: 0.5900
Epoch 19/30
104/104 [==============================] - ETA: 0s - loss: 0.0919 - accuracy: 0.9694
Epoch 19: val_accuracy did not improve from 0.59000
104/104 [==============================] - 43s 417ms/step - loss: 0.0919 - accuracy: 0.9694 - val_loss: 2.6975 - val_accuracy: 0.5333
Epoch 20/30
104/104 [==============================] - ETA: 0s - loss: 0.0731 - accuracy: 0.9755
Epoch 20: val_accuracy did not improve from 0.59000
104/104 [==============================] - 44s 419ms/step - loss: 0.0731 - accuracy: 0.9755 - val_loss: 2.8664 - val_accuracy: 0.5900
Epoch 21/30
104/104 [==============================] - ETA: 0s - loss: 0.0647 - accuracy: 0.9773
Epoch 21: val_accuracy improved from 0.59000 to 0.61333, saving model to /content/gdrive/MyDrive/Lip Reading/models/model_C.h5
104/104 [==============================] - 44s 422ms/step - loss: 0.0647 - accuracy: 0.9773 - val_loss: 2.3977 - val_accuracy: 0.6133
Epoch 22/30
104/104 [==============================] - ETA: 0s - loss: 0.0539 - accuracy: 0.9821
Epoch 22: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 417ms/step - loss: 0.0539 - accuracy: 0.9821 - val_loss: 2.7019 - val_accuracy: 0.5867
Epoch 23/30
104/104 [==============================] - ETA: 0s - loss: 0.1519 - accuracy: 0.9494
Epoch 23: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 417ms/step - loss: 0.1519 - accuracy: 0.9494 - val_loss: 2.5488 - val_accuracy: 0.5900
Epoch 24/30
104/104 [==============================] - ETA: 0s - loss: 0.1323 - accuracy: 0.9545
Epoch 24: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 417ms/step - loss: 0.1323 - accuracy: 0.9545 - val_loss: 2.2173 - val_accuracy: 0.5800
Epoch 25/30
104/104 [==============================] - ETA: 0s - loss: 0.1000 - accuracy: 0.9648
Epoch 25: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 416ms/step - loss: 0.1000 - accuracy: 0.9648 - val_loss: 2.6860 - val_accuracy: 0.5633
Epoch 26/30
104/104 [==============================] - ETA: 0s - loss: 0.0395 - accuracy: 0.9888
Epoch 26: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 417ms/step - loss: 0.0395 - accuracy: 0.9888 - val_loss: 2.7232 - val_accuracy: 0.5833
Epoch 27/30
104/104 [==============================] - ETA: 0s - loss: 0.0371 - accuracy: 0.9894
Epoch 27: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 416ms/step - loss: 0.0371 - accuracy: 0.9894 - val_loss: 2.6924 - val_accuracy: 0.5900
Epoch 28/30
104/104 [==============================] - ETA: 0s - loss: 0.0365 - accuracy: 0.9879
Epoch 28: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 416ms/step - loss: 0.0365 - accuracy: 0.9879 - val_loss: 3.0569 - val_accuracy: 0.5833
Epoch 29/30
104/104 [==============================] - ETA: 0s - loss: 0.0448 - accuracy: 0.9845
Epoch 29: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 417ms/step - loss: 0.0448 - accuracy: 0.9845 - val_loss: 3.1639 - val_accuracy: 0.5733
Epoch 30/30
104/104 [==============================] - ETA: 0s - loss: 0.0411 - accuracy: 0.9873
Epoch 30: val_accuracy did not improve from 0.61333
104/104 [==============================] - 43s 416ms/step - loss: 0.0411 - accuracy: 0.9873 - val_loss: 2.8516 - val_accuracy: 0.6033

Training time : 1343.9280362129211 secs.

To me, this looks like overfitting. Is there any way to counter this problem?

You are right, that is a sign of overfitting. It can be caused by many things; one of them is using a very complex architecture for a small dataset. If the model is overkill for a small dataset, it will overfit the training data and may not generalize.

Generally, some overfitting is a good sign: it means the model is able to learn something. But the first thing I want to point out in your architecture is the Dropout between the Conv layers. That is not a very good practice; mostly you should use Dropout between Dense layers.
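For example, a minimal sketch of what I mean could look like the following. This is not your exact setup: the reduced filter counts and the 128-unit Dense layer are just illustrative choices on my side, not tuned values.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Convolution3D, BatchNormalization,
                                     MaxPooling3D, Flatten, Dense, Dropout)

def get_smaller_model(dropout_rate):
    # Illustrative variant: fewer filters, and Dropout moved out of the
    # convolutional blocks into the dense head.
    model = Sequential()
    model.add(Convolution3D(32, (3, 3, 3), padding='same', activation='relu',
                            input_shape=(22, 64, 64, 1)))
    model.add(BatchNormalization())
    model.add(MaxPooling3D(pool_size=(2, 2, 2)))
    model.add(Convolution3D(64, (3, 3, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling3D(pool_size=(2, 2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(dropout_rate))          # dropout between Dense layers
    model.add(Dense(10, activation='softmax'))
    return model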

Also, it would help to know the dataset and what kind of augmentation you are doing, because there are many augmentations you could apply. I have generated 10k+ images from just 400 with different augmentations and it worked, so it would be worth increasing the size of the dataset with more augmentation and seeing how the validation accuracy changes.
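Since I don't know your pipeline, here is only a rough sketch of one possible augmentation for frame sequences, where the same random flip and brightness jitter are applied to every frame of a clip. The helper name and the jitter range are placeholders I made up; the shapes follow your input_shape of (22, 64, 64, 1).

import numpy as np

def augment_sequence(frames, rng=None):
    # frames: one clip of shape (22, 64, 64, 1), values scaled to [0, 1]
    rng = rng if rng is not None else np.random.default_rng()
    out = frames.copy()
    if rng.random() < 0.5:
        out = out[:, :, ::-1, :]                            # flip the whole clip horizontally
    out = np.clip(out + rng.uniform(-0.1, 0.1), 0.0, 1.0)   # small brightness shift
    return out

# Hypothetical usage: add one augmented copy of every training clip
# X_extra = np.stack([augment_sequence(x) for x in X_train])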

It also depends on the preprocessing of the dataset; there are some rules of thumb you should follow before feeding the data into the network. There are a lot of things to pay attention to, which is why this is hard.
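One very common rule of thumb is simply to scale all pixel values into the same small range before training. A minimal sketch, assuming your frames are stored as 8-bit grayscale arrays:

# Scale 8-bit pixel values to [0, 1] so every input lies in the same range
X_train = X_train.astype('float32') / 255.0
X_val = X_val.astype('float32') / 255.0
# (a test split, if kept as an array, would be scaled the same way)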

Hope my answer helps in some way.
