
Looking for your valuable suggestions on my Loss/accuracy vs epoch curve

My Keras-TensorFlow model is behaving as shown in the image below. The training and validation loss are behaving well, but the training and validation accuracy look quite abnormal. I suspect the validation dataset might be much easier than the training set, which would explain the high validation accuracy. I am looking forward to your kind suggestions.

[Figure: loss and accuracy vs. epoch curves]

The plot that you displayed here looks normal from the viewpoint of metrics and loss during training.

It is common to see small spikes, since we are using batch training. Also, when you see those spikes in loss (the loss increases), the accuracy also decreases. If you want to inspect this noise directly, you can log the loss per batch as in the sketch below.
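A minimal sketch of how you could record batch-level loss to see those spikes within an epoch. `BatchLossLogger` is a hypothetical helper written for illustration, not something from the original post:

```python
import tensorflow as tf

class BatchLossLogger(tf.keras.callbacks.Callback):
    """Records the training loss after every batch, so you can plot
    the per-batch noise that shows up as small spikes in the curves."""

    def on_train_begin(self, logs=None):
        self.batch_losses = []

    def on_train_batch_end(self, batch, logs=None):
        self.batch_losses.append(logs["loss"])

# Usage: logger = BatchLossLogger()
#        model.fit(x, y, epochs=..., callbacks=[logger])
#        then plot logger.batch_losses to inspect per-batch noise.
```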

Therefore, there is nothing to worry about in the plot itself.

However, your observation regarding the validation accuracy is indeed sensible: most of the time, this happens because the validation dataset is easier.

One way to deal with this issue is to use cross-validation, in order to see whether this phenomenon still persists.

Cross-validation is a model-validation technique in which, at each iteration/fold, a different part of your dataset is reserved for training and validation. The picture below summarizes what I have just written; a minimal code sketch follows the figure.

[Figure: illustration of the cross-validation technique; note that the correct nomenclature is validation set, not test set]
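A minimal k-fold sketch with Keras and scikit-learn's `KFold`. The data and the architecture inside `build_model()` are synthetic placeholders; substitute your own dataset and model:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# Synthetic stand-in data; replace with your own arrays.
x = np.random.rand(500, 20).astype("float32")
y = np.random.randint(0, 2, size=(500,)).astype("float32")

def build_model():
    # Illustrative architecture only; substitute your own model.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
val_accuracies = []

for fold, (train_idx, val_idx) in enumerate(kfold.split(x)):
    model = build_model()  # fresh weights for every fold
    history = model.fit(x[train_idx], y[train_idx],
                        validation_data=(x[val_idx], y[val_idx]),
                        epochs=10, batch_size=32, verbose=0)
    val_accuracies.append(history.history["val_accuracy"][-1])
    print(f"Fold {fold}: val_accuracy = {val_accuracies[-1]:.3f}")

# If validation accuracy beats training accuracy on every fold,
# an unusually easy validation split is unlikely to be the cause.
print("Mean val accuracy:", np.mean(val_accuracies))
```

If the gap appears on every fold regardless of how the data is split, the cause is more likely something systematic, such as the Dropout effect discussed next.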

Another reason why this phenomenon takes place is the regularization technique called Dropout. As you might know, during the training phase, dropout applied at a certain layer randomly deactivates a certain percentage of neurons. This penalizes the performance on the training set, but at the same time mitigates the risk of overfitting. Therefore, when Dropout is used during training, the validation accuracy can turn out higher, because Dropout is not enabled when predicting on the validation set.
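A small sketch of this train/inference asymmetry. The layer sizes and the 0.5 dropout rate are assumptions for illustration, not taken from the question:

```python
import tensorflow as tf

# Illustrative architecture only; sizes and rate are assumed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # active only while training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

x = tf.random.normal((4, 20))

# During model.fit(), layers run with training=True, so Dropout
# zeroes ~50% of the activations and the training metrics are
# computed on this weakened network.
train_out = model(x, training=True)

# During model.evaluate()/model.predict(), training=False, so all
# neurons are active; validation metrics can therefore look better.
infer_out = model(x, training=False)
```

This is why a model with heavy dropout can legitimately report higher validation accuracy than training accuracy, with no data leakage involved.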
