
Poor accuracy with a keras neural network

I've been working with keras lately and have created a neural network. When I train it, I get an accuracy below 10%. I have changed the number of layers and used different optimizers, batch sizes, and epochs. My data is normalized, so I don't know where the problem could be.

What I've tried so far: changing the number of layers, the optimizer, the loss, the number of epochs, and the batch_size.

# Imports
import keras
from keras.models import Sequential
from keras.layers import Dense

# Create Model
model = Sequential()
model.add(Dense(18, input_shape=(22,), activation='relu'))
model.add(Dense(18, activation='relu'))
model.add(Dense(18, activation='relu'))
model.add(Dense(20, activation='softmax'))

X_training, X_test = X[:data_size], X[data_size:]
Y_training, Y_test = Y[:data_size], Y[data_size:]

# Compile Model
optimizer = keras.optimizers.Adam(lr=0.001)
model.compile(optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

# Fit the model
model.fit(X_training, Y_training, epochs=100, batch_size=1000)

# Evaluate the model
scores = model.evaluate(X_test, Y_test)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))


Epoch 80/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7351 - acc: 0.0800
Epoch 81/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7348 - acc: 0.0806
Epoch 82/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7347 - acc: 0.0815
Epoch 83/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7344 - acc: 0.0803
Epoch 84/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7344 - acc: 0.0812
Epoch 85/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7340 - acc: 0.0807
Epoch 86/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7339 - acc: 0.0810
Epoch 87/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7337 - acc: 0.0809
Epoch 88/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7335 - acc: 0.0820
Epoch 89/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7333 - acc: 0.0815
Epoch 90/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7331 - acc: 0.0815
Epoch 91/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7329 - acc: 0.0812
Epoch 92/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7329 - acc: 0.0817
Epoch 93/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7326 - acc: 0.0825
Epoch 94/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7325 - acc: 0.0822
Epoch 95/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7325 - acc: 0.0820
Epoch 96/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7322 - acc: 0.0822
Epoch 97/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7320 - acc: 0.0816
Epoch 98/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7319 - acc: 0.0818
Epoch 99/100
156183/156183 [==============================] - 1s 5us/step - loss: 6.7317 - acc: 0.0829
Epoch 100/100
156183/156183 [==============================] - 1s 4us/step - loss: 6.7316 - acc: 0.0838
39046/39046 [==============================] - 1s 23us/step

acc: 7.84%


Input: [ 9.11310000e+04 -9.36427789e-02  6.47541209e-01  7.56254860e-01
  6.56986599e-01  7.53902254e-01  9.12945251e-01  4.08082062e-01
  1.41120008e-01 -9.89992497e-01  0.00000000e+00  1.00000000e+00
  0.00000000e+00  0.00000000e+00  0.00000000e+00  1.00000000e+00
  0.00000000e+00  0.00000000e+00  0.00000000e+00  0.00000000e+00
  1.00000000e+00  0.00000000e+00]

Expected output / Target: [0 0 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 0 0 0]

How can I achieve a higher accuracy?

It doesn't make sense to use categorical_crossentropy with a sigmoid activation at the output; use softmax in this case.

Also, prefer relu over sigmoid for the hidden layers.
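As a minimal sketch of the suggested configuration (relu in the hidden layers, softmax at the output paired with categorical_crossentropy): the layer sizes and the 22-feature / 20-class shapes mirror the question's code, and the random input below is just a placeholder, not real data.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense

# Hidden layers use relu; the output layer uses softmax so that
# categorical_crossentropy receives a probability distribution over classes.
model = keras.Sequential([
    keras.Input(shape=(22,)),
    Dense(18, activation='relu'),
    Dense(18, activation='relu'),
    Dense(20, activation='softmax'),  # softmax, not sigmoid, at the output
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Softmax guarantees each prediction row sums to 1 across the 20 classes.
probs = model.predict(np.random.rand(4, 22), verbose=0)
print(probs.shape)        # (4, 20)
print(probs.sum(axis=1))  # each row sums to ~1.0
```

Note that this pairing assumes each sample belongs to exactly one class (one-hot targets); if a sample can have several active labels at once, sigmoid outputs with binary_crossentropy would be the usual choice instead.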
