
Conv2d Tensorflow results wrong - accuracy = 0.0000e+00

I am using TensorFlow and Keras to build a classification model. When running the code below, the output does not converge after each epoch: the loss grows unboundedly negative and the accuracy is constantly 0.0000e+00. I am new to machine learning and am not sure why this is happening.

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
import numpy as np

import time
import tensorflow as tf

from google.colab import drive

drive.mount('/content/drive')
import pandas as pd 
data = pd.read_csv("hmnist_28_28_RGB.csv") 
X = data.iloc[:, 0:-1]
y = data.iloc[:, -1]

X = X / 255.0
X = X.values.reshape(-1,28,28,3)
print(X.shape)

model = Sequential()
model.add(Conv2D(256, (3, 3), input_shape=X.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))


model.add(Conv2D(256, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors

model.add(Dense(64))

model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3)

Output

(378, 28, 28, 3)
Epoch 1/10
9/9 [==============================] - 4s 429ms/step - loss: -34.6735 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 2/10
9/9 [==============================] - 4s 400ms/step - loss: -1074.2162 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 3/10
9/9 [==============================] - 4s 399ms/step - loss: -7446.1872 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 4/10
9/9 [==============================] - 4s 396ms/step - loss: -30012.9553 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 5/10
9/9 [==============================] - 4s 406ms/step - loss: -89006.4180 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 6/10
9/9 [==============================] - 4s 400ms/step - loss: -221087.9078 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 7/10
9/9 [==============================] - 4s 399ms/step - loss: -480032.9313 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 8/10
9/9 [==============================] - 4s 403ms/step - loss: -956052.3375 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 9/10
9/9 [==============================] - 4s 396ms/step - loss: -1733128.9000 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 10/10
9/9 [==============================] - 4s 401ms/step - loss: -2953626.5750 - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00

You need to make several changes to your model to make it work.

There are 7 different labels in the dataset, so your last layer needs 7 output neurons.
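A quick way to confirm the required number of output neurons is to count the distinct class ids in the label column. A minimal sketch with NumPy, using a hypothetical label array standing in for the `y` column of `hmnist_28_28_RGB.csv`:

```python
import numpy as np

# Hypothetical label column: integer class ids, as in the CSV's last column
y = np.array([0, 2, 4, 6, 1, 3, 5, 2, 6])

num_classes = len(np.unique(y))  # distinct labels = output neurons needed
print(num_classes)  # 7
```

On the real data you would run the same `np.unique` (or `pandas.Series.nunique`) on the label column loaded from the CSV.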

For your last layer you are currently using the sigmoid activation. This is not suitable for multi-class classification; you should use the softmax activation instead.
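The difference is easy to see numerically: softmax turns the 7 logits into a single probability distribution that sums to 1, while sigmoid squashes each logit independently, so the outputs do not form a distribution over the classes. A minimal sketch (the logit values are made up for illustration):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([2.0, 1.0, 0.1, -1.0, 0.5, 0.0, -0.5])  # 7 class scores

print(softmax(logits).sum())   # 1.0 -- a proper distribution over the 7 classes
print(sigmoid(logits).sum())   # generally != 1.0 -- independent per-unit outputs
```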

As loss function you are using loss='binary_crossentropy', which is only meant for binary classification. In your case, since your labels are integers, loss='sparse_categorical_crossentropy' should be used. You can find more information here.
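For intuition, sparse categorical cross-entropy takes the integer label k directly and computes -log of the predicted probability for class k; no one-hot encoding is needed. A minimal sketch with made-up softmax probabilities:

```python
import numpy as np

# Hypothetical softmax output over the 7 classes for one sample
probs = np.array([0.05, 0.1, 0.6, 0.05, 0.1, 0.05, 0.05])
label = 2  # integer class id, exactly as stored in the CSV

# Sparse categorical cross-entropy for one sample: -log(p[true class])
loss = -np.log(probs[label])
print(round(loss, 4))  # -log(0.6) ~= 0.5108
```

This is why the integer labels in `y` can be passed to `model.fit` unchanged; with `loss='categorical_crossentropy'` you would instead have to one-hot encode them first.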

With the following changes to the last lines of your code:

model.add(Dense(7))
model.add(Activation('softmax'))
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3)

You'll get this training history:

(10015, 28, 28, 3)
Epoch 1/10
220/220 [==============================] - 89s 403ms/step - loss: 1.0345 - accuracy: 0.6193 - val_loss: 1.7980 - val_accuracy: 0.4353
Epoch 2/10
220/220 [==============================] - 88s 398ms/step - loss: 0.8282 - accuracy: 0.6851 - val_loss: 3.3646 - val_accuracy: 0.0676
Epoch 3/10
220/220 [==============================] - 88s 399ms/step - loss: 0.6944 - accuracy: 0.7502 - val_loss: 2.9686 - val_accuracy: 0.1228
Epoch 4/10
220/220 [==============================] - 87s 395ms/step - loss: 0.6630 - accuracy: 0.7611 - val_loss: 3.3777 - val_accuracy: 0.0646
Epoch 5/10
220/220 [==============================] - 87s 396ms/step - loss: 0.5976 - accuracy: 0.7812 - val_loss: 2.3929 - val_accuracy: 0.2532
Epoch 6/10
220/220 [==============================] - 87s 396ms/step - loss: 0.5577 - accuracy: 0.7935 - val_loss: 2.9879 - val_accuracy: 0.2592
Epoch 7/10
220/220 [==============================] - 88s 398ms/step - loss: 0.7644 - accuracy: 0.7215 - val_loss: 2.5258 - val_accuracy: 0.2852
Epoch 8/10
220/220 [==============================] - 87s 395ms/step - loss: 0.5629 - accuracy: 0.7879 - val_loss: 2.6053 - val_accuracy: 0.3055
Epoch 9/10
220/220 [==============================] - 89s 404ms/step - loss: 0.5380 - accuracy: 0.8008 - val_loss: 2.7401 - val_accuracy: 0.1694
Epoch 10/10
220/220 [==============================] - 92s 419ms/step - loss: 0.5296 - accuracy: 0.8065 - val_loss: 3.7208 - val_accuracy: 0.0529

The model still needs to be optimized to achieve better results, but in general it works now.

I was using this file for the training.

