
How to fix zero accuracy in Deep Learning while the loss is fine

I have a dataset of 50x22, i.e. 50 samples with 22 features each. The goal is to classify the target, which is scaled from 1 to 5, equivalently 5 classes. I used a random forest with 98% training accuracy, but the validation accuracy is 63%, which is not satisfactory. That's why I decided to create a deep model, and I built one with 3 layers. The loss is satisfactory at around 6.7*10e-4, but the accuracy is fixed at zero. I think there is something wrong in my code. So, what's the problem?

def build_and_compile_model(norm):
    model = keras.Sequential([
        norm,
        layers.Dense(32, activation='relu'),
        layers.Dense(32, activation='relu'),
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='sgd',
                  loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.Accuracy()])
    return model

def plot_acc(history):
    plt.plot(history.history['accuracy'], label='accuracy')
    plt.plot(history.history['val_accuracy'], label='val_accuracy')
    plt.ylim([0, 1])
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy [GSR]')
    plt.legend()
    plt.grid(True)

dnn_qoe_model = build_and_compile_model(feature_normalizer)
dnn_qoe_model.summary()

history = dnn_qoe_model.fit(
          train_features[:22], train_labels,
          validation_split=0.2,
          verbose=0, epochs=100)
plot_acc(history)

[plot: the training loss]

You are using loss='binary_crossentropy' and layers.Dense(1, activation='sigmoid'), which are meant for binary classification problems.

Since you are looking to predict one of 5 classes, you are dealing with a multi-class problem.

If your target is one-hot encoded, which for one class would look like [0, 1, 0, 0, 0], you should use layers.Dense(5, activation='softmax') and loss='categorical_crossentropy'.
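As a minimal sketch of the one-hot case (the label values 1..5 and the layer sizes are taken from the question; the shift to 0-based indices before encoding is an assumption about how the labels are stored):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical integer labels on the 1..5 scale described in the question.
labels = np.array([1, 3, 5, 2, 4])

# Shift to 0..4, then one-hot encode: class 2 (index 1) becomes [0, 1, 0, 0, 0].
one_hot = keras.utils.to_categorical(labels - 1, num_classes=5)

model = keras.Sequential([
    layers.Dense(32, activation='relu'),
    layers.Dense(32, activation='relu'),
    layers.Dense(5, activation='softmax'),  # one output unit per class
])
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              # the string 'accuracy' lets Keras pick the matching
              # categorical accuracy metric automatically
              metrics=['accuracy'])
```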

If your target isn't one-hot encoded, meaning the response is an integer referring to the class number (which would be [1], the position of the positive class, in the previous example), you should still use layers.Dense(5, activation='softmax'), and change the loss function to loss='sparse_categorical_crossentropy', as your target variable is encoded as a sparse label (the index of the item that would contain a 1 in a vector of zeros).
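The sparse-label variant can be sketched like so (the 22-column toy features and the 0-based labels are assumptions for illustration; the question's labels run 1..5 and would need to be shifted down by one):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Integer class labels 0..4 are used directly, no one-hot encoding needed.
y = np.array([0, 2, 4, 1, 3])
X = np.random.rand(5, 22).astype('float32')  # toy features, 22 columns

model = keras.Sequential([
    layers.Dense(32, activation='relu'),
    layers.Dense(32, activation='relu'),
    layers.Dense(5, activation='softmax'),
])
model.compile(optimizer='sgd',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Each prediction is a probability distribution over the 5 classes.
probs = model.predict(X, verbose=0)
```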
