Very high training accuracy and low loss during training, but bad classification

I train a neural network on image descriptors belonging to 3 classes (two species of animals, and one group of landscape images). These descriptors have been pre-computed with VGG16 (without the last fully-connected layers), and have given good results with other classifiers (SVM).

This is my model:

import keras

model = keras.models.Sequential()
model.add(keras.layers.Dense(256, input_shape = (25088,), activation = 'relu'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(len(classes), activation = 'softmax'))
model.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])

I train it like this:

model.fit(
    X,
    y,
    epochs = 50,
    batch_size = 32,
    validation_split = 0.3,
    class_weight = class_weights
)

The datasets for the three classes are imbalanced: class 0 has 2135 items, class 1 has 1472, and class 2 has 760. I use class_weights to compensate:

class_weights = {c: len(y) / np.sum(y[:,c] == 1.) for c in range(y.shape[1])}

Its value is {0: 2.045433255269321, 1: 2.9667119565217392, 2: 5.746052631578947}.
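For reference, those weights can be reproduced from the class counts alone. The sketch below (using hypothetical stand-in counts taken from the question, not the real labels) also shows the variant scikit-learn calls "balanced", which divides by the number of classes so the average weight stays near 1 without changing the ratios:

```python
import numpy as np

# Class counts from the question; a stand-in for computing from the real labels.
counts = np.array([2135, 1472, 760])
n_samples, n_classes = counts.sum(), len(counts)

# The formula used above: weight_c = n_samples / n_samples_in_class_c.
raw = {c: n_samples / counts[c] for c in range(n_classes)}

# The 'balanced' heuristic additionally divides by n_classes,
# keeping the mean weight near 1 while preserving the ratios.
balanced = {c: n_samples / (n_classes * counts[c]) for c in range(n_classes)}

print(raw)  # {0: 2.045..., 1: 2.966..., 2: 5.746...}
```

Either scaling is acceptable for Keras's class_weight argument, since only the relative weights matter for the gradient balance between classes.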

The accuracy and loss during training are very good (not so much on the validation set):

Epoch 1/50
3056/3056 [==============================] - 16s 5ms/step - loss: 3.1452 - acc: 0.9107 - val_loss: 54.5996 - val_acc: 0.3997
Epoch 2/50
3056/3056 [==============================] - 2s 523us/step - loss: 1.5053 - acc: 0.9627 - val_loss: 53.9704 - val_acc: 0.4134
Epoch 3/50
3056/3056 [==============================] - 2s 521us/step - loss: 1.3939 - acc: 0.9607 - val_loss: 54.4188 - val_acc: 0.4043
Epoch 4/50
3056/3056 [==============================] - 2s 522us/step - loss: 1.5265 - acc: 0.9545 - val_loss: 53.7266 - val_acc: 0.4195
Epoch 5/50
3056/3056 [==============================] - 2s 522us/step - loss: 1.4650 - acc: 0.9562 - val_loss: 54.0863 - val_acc: 0.4111
Epoch 6/50
3056/3056 [==============================] - 2s 521us/step - loss: 1.3557 - acc: 0.9607 - val_loss: 53.8348 - val_acc: 0.4172
Epoch 7/50
3056/3056 [==============================] - 2s 520us/step - loss: 1.0602 - acc: 0.9699 - val_loss: 54.1266 - val_acc: 0.4104
Epoch 8/50
3056/3056 [==============================] - 2s 526us/step - loss: 0.8097 - acc: 0.9781 - val_loss: 55.3352 - val_acc: 0.3852
Epoch 9/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.8912 - acc: 0.9741 - val_loss: 53.8360 - val_acc: 0.4172
Epoch 10/50
3056/3056 [==============================] - 2s 517us/step - loss: 0.9512 - acc: 0.9732 - val_loss: 54.1430 - val_acc: 0.4096
Epoch 11/50
3056/3056 [==============================] - 2s 519us/step - loss: 0.9200 - acc: 0.9745 - val_loss: 54.4828 - val_acc: 0.4027
Epoch 12/50
3056/3056 [==============================] - 2s 526us/step - loss: 0.7612 - acc: 0.9797 - val_loss: 53.9240 - val_acc: 0.4150
Epoch 13/50
3056/3056 [==============================] - 2s 522us/step - loss: 0.6478 - acc: 0.9820 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 14/50
3056/3056 [==============================] - 2s 525us/step - loss: 0.9011 - acc: 0.9764 - val_loss: 54.3105 - val_acc: 0.4073
Epoch 15/50
3056/3056 [==============================] - 2s 517us/step - loss: 0.8652 - acc: 0.9787 - val_loss: 54.0913 - val_acc: 0.4119
Epoch 16/50
3056/3056 [==============================] - 2s 522us/step - loss: 0.7115 - acc: 0.9800 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 17/50
3056/3056 [==============================] - 2s 518us/step - loss: 0.6954 - acc: 0.9804 - val_loss: 53.8322 - val_acc: 0.4172
Epoch 18/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.7845 - acc: 0.9794 - val_loss: 55.1453 - val_acc: 0.3883
Epoch 19/50
3056/3056 [==============================] - 2s 520us/step - loss: 0.8089 - acc: 0.9777 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 20/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.6779 - acc: 0.9820 - val_loss: 54.0726 - val_acc: 0.4119
Epoch 21/50
3056/3056 [==============================] - 2s 517us/step - loss: 0.5939 - acc: 0.9840 - val_loss: 54.3102 - val_acc: 0.4073
Epoch 22/50
3056/3056 [==============================] - 2s 518us/step - loss: 0.6781 - acc: 0.9810 - val_loss: 54.1643 - val_acc: 0.4104
Epoch 23/50
3056/3056 [==============================] - 2s 514us/step - loss: 0.6912 - acc: 0.9804 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 24/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.6296 - acc: 0.9830 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 25/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.8910 - acc: 0.9748 - val_loss: 55.4755 - val_acc: 0.3814
Epoch 26/50
3056/3056 [==============================] - 2s 522us/step - loss: 0.7642 - acc: 0.9794 - val_loss: 54.3102 - val_acc: 0.4073
Epoch 27/50
3056/3056 [==============================] - 2s 519us/step - loss: 0.6787 - acc: 0.9827 - val_loss: 54.3102 - val_acc: 0.4073
Epoch 28/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.6762 - acc: 0.9804 - val_loss: 53.9819 - val_acc: 0.4142
Epoch 29/50
3056/3056 [==============================] - 2s 519us/step - loss: 0.6418 - acc: 0.9823 - val_loss: 54.1996 - val_acc: 0.4096
Epoch 30/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.6038 - acc: 0.9833 - val_loss: 55.0238 - val_acc: 0.3921
Epoch 31/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.6223 - acc: 0.9836 - val_loss: 53.8964 - val_acc: 0.4150
Epoch 32/50
3056/3056 [==============================] - 2s 523us/step - loss: 0.6354 - acc: 0.9830 - val_loss: 54.3212 - val_acc: 0.4058
Epoch 33/50
3056/3056 [==============================] - 2s 561us/step - loss: 0.6124 - acc: 0.9840 - val_loss: 54.4909 - val_acc: 0.4035
Epoch 34/50
3056/3056 [==============================] - 2s 539us/step - loss: 0.5937 - acc: 0.9846 - val_loss: 53.9819 - val_acc: 0.4142
Epoch 35/50
3056/3056 [==============================] - 2s 524us/step - loss: 0.4993 - acc: 0.9849 - val_loss: 53.9906 - val_acc: 0.4134
Epoch 36/50
3056/3056 [==============================] - 2s 525us/step - loss: 0.5461 - acc: 0.9846 - val_loss: 53.8360 - val_acc: 0.4172
Epoch 37/50
3056/3056 [==============================] - 2s 530us/step - loss: 0.4849 - acc: 0.9859 - val_loss: 54.0580 - val_acc: 0.4119
Epoch 38/50
3056/3056 [==============================] - 2s 527us/step - loss: 0.4078 - acc: 0.9882 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 39/50
3056/3056 [==============================] - 2s 526us/step - loss: 0.5824 - acc: 0.9840 - val_loss: 54.4196 - val_acc: 0.4050
Epoch 40/50
3056/3056 [==============================] - 2s 525us/step - loss: 0.4924 - acc: 0.9863 - val_loss: 54.3267 - val_acc: 0.4058
Epoch 41/50
3056/3056 [==============================] - 2s 515us/step - loss: 0.4689 - acc: 0.9876 - val_loss: 53.8725 - val_acc: 0.4165
Epoch 42/50
3056/3056 [==============================] - 2s 516us/step - loss: 0.5954 - acc: 0.9853 - val_loss: 54.4130 - val_acc: 0.4043
Epoch 43/50
3056/3056 [==============================] - 2s 521us/step - loss: 0.5741 - acc: 0.9849 - val_loss: 53.9755 - val_acc: 0.4142
Epoch 44/50
3056/3056 [==============================] - 2s 535us/step - loss: 0.4941 - acc: 0.9856 - val_loss: 53.7995 - val_acc: 0.4180
Epoch 45/50
3056/3056 [==============================] - 2s 528us/step - loss: 0.5669 - acc: 0.9827 - val_loss: 53.8360 - val_acc: 0.4172
Epoch 46/50
3056/3056 [==============================] - 2s 528us/step - loss: 0.4975 - acc: 0.9856 - val_loss: 54.0184 - val_acc: 0.4134
Epoch 47/50
3056/3056 [==============================] - 2s 533us/step - loss: 0.5870 - acc: 0.9827 - val_loss: 53.9454 - val_acc: 0.4150
Epoch 48/50
3056/3056 [==============================] - 2s 536us/step - loss: 0.4608 - acc: 0.9863 - val_loss: 53.9089 - val_acc: 0.4157
Epoch 49/50
3056/3056 [==============================] - 2s 554us/step - loss: 0.9252 - acc: 0.9777 - val_loss: 54.1243 - val_acc: 0.4104
Epoch 50/50
3056/3056 [==============================] - 2s 576us/step - loss: 0.4731 - acc: 0.9876 - val_loss: 54.2266 - val_acc: 0.4088

But when I test this model on a set of 24 images (12 from class 0 and 12 from class 2), I get unsatisfying results. These are the probabilities the model gives for images of class 0:

[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]

...and for images of class 2:

[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1.0000000e+00 1.2065205e-22 0.0000000e+00]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]
[[1. 0. 0.]]

It seems that the model is very biased towards class 0. This makes me think that I haven't used class_weight correctly.

Where could this bias come from?

Assuming you used some of your data for validation (during training), I would say you are strongly overfitting.

Your val_acc always stays at around 40%, which is even lower than the share of the majority class (class 0) you should have in your validation set.

3056/3056 [==============================] - 2s 576us/step - loss: 0.4731 - acc: 0.9876 - val_loss: 54.2266 - val_acc: 0.4088
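That baseline can be checked directly from the class counts given in the question, assuming the validation split preserves the overall class proportions:

```python
import numpy as np

counts = np.array([2135, 1472, 760])  # class sizes from the question
# A classifier that always predicts the majority class (class 0) would score:
majority_baseline = counts.max() / counts.sum()
print(round(float(majority_baseline), 3))  # 0.489
```

A validation accuracy of ~41% therefore means the network is doing worse than blindly guessing class 0 every time.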

In other words, your network is memorizing your training data. This can happen, among other things, if you don't have enough data or if your network is too complex.

Did you choose your validation and test data randomly? If you didn't, there might be a difference between the training and test data that you are not aware of.
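One detail worth knowing here: Keras's validation_split simply takes the last 30% of the arrays, without shuffling. So if X and y are ordered by class, the validation set can end up dominated by a single class. A stratified, shuffled split done before fit avoids this — a minimal sketch with random stand-in data (the real X would be the (n, 25088) descriptors, and for one-hot labels you would stratify on y.argmax(axis=1)):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Random stand-in data, only to illustrate the call.
rng = np.random.RandomState(0)
X = rng.rand(100, 8)
y_int = rng.randint(0, 3, size=100)  # integer class labels

# stratify keeps the class proportions identical in both splits,
# and the shuffle removes any class ordering in the original arrays.
X_train, X_val, y_train, y_val = train_test_split(
    X, y_int, test_size=0.3, stratify=y_int, random_state=0)
```

You would then pass validation_data=(X_val, y_val) to fit instead of validation_split.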
