Why is the training accuracy fluctuating?
I'm working on a video classification problem with 5 classes, using a TimeDistributed CNN model on the Google Colab platform. The training dataset contains 80 videos with 5 frames each, and the validation dataset contains 20 videos with 5 frames each, so in total I'm working with 100 videos. The batch size I used is 64. I compiled the model with the Adam optimizer and categorical cross-entropy loss.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                     BatchNormalization, Flatten, GRU, Dense)
from tensorflow.keras.optimizers import Adam

# Each sample is a video clip: 5 frames of 128x128 RGB images.
input_shape = (5, 128, 128, 3)

model = Sequential()
model.add(TimeDistributed(Conv2D(32, (3, 3), strides=(1, 1),
                                 activation='relu', padding='same'),
                          input_shape=input_shape))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Conv2D(64, (3, 3), strides=(1, 1),
                                 activation='relu', padding='same')))
model.add(TimeDistributed(Conv2D(128, (3, 3), strides=(1, 1),
                                 activation='relu', padding='same')))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(GRU(64, return_sequences=False))
model.add(BatchNormalization())
model.add(Dense(128, activation='relu'))
model.add(Dense(5, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=0.0001),  # `lr` is deprecated in TF 2.x
              metrics=['accuracy'])
But after fitting this model on the dataset, the training accuracy curve fluctuates like this:

[plot of the fluctuating training accuracy curve]

Can anyone help me understand the reason behind this fluctuation?
You can try one or two things to stabilize the training:

You can try different batch sizes of 4, 8, 16, 32, and 64, and generate a plot for each one. Have a look at this link; it generates mini-plots for each batch size.
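A minimal sketch of that batch-size sweep (assuming TensorFlow 2.x), using small random stand-in data in place of the actual video frames so the loop itself is easy to see:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data: 100 samples, 10 features, 5 classes.
x = np.random.rand(100, 10).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 5, 100), 5)

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(10,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy",
                  optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  metrics=["accuracy"])
    return model

histories = {}
for bs in [4, 8, 16, 32, 64]:
    tf.keras.utils.set_random_seed(0)      # same init for a fair comparison
    h = build_model().fit(x, y, epochs=3, batch_size=bs, verbose=0)
    histories[bs] = h.history["accuracy"]  # one accuracy curve per batch size

print(sorted(histories))
```

Plotting each curve in `histories` side by side shows how smaller batches tend to produce noisier accuracy curves, which is often what a fluctuating training curve comes down to.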
You can also alter the learning rate. You can apply a learning-rate scheduler or ReduceLROnPlateau by directly calling the Keras callbacks. Alternatively, there is Cyclical LR, which tries to find an optimal learning rate (paper, GitHub).
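A minimal sketch of the ReduceLROnPlateau callback mentioned above (assuming TensorFlow 2.x), again on toy data so it runs on its own; it halves the learning rate whenever the monitored loss stops improving:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Toy stand-in data: 100 samples, 10 features, 5 classes.
x = np.random.rand(100, 10).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 5, 100), 5)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              metrics=["accuracy"])

# Halve the LR when the training loss plateaus for 2 epochs
# (monitor "val_loss" instead when validation data is passed to fit()).
reduce_lr = ReduceLROnPlateau(monitor="loss", factor=0.5,
                              patience=2, min_lr=1e-6, verbose=1)

history = model.fit(x, y, epochs=5, batch_size=16,
                    callbacks=[reduce_lr], verbose=0)
print(len(history.history["loss"]))
```

Lowering the learning rate when progress stalls is one of the simplest ways to damp the kind of accuracy fluctuation shown in the question, since large steps on a small dataset can bounce the weights around the minimum.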