
keras network doesn't train

This is my first attempt at making the simplest possible net. I'm training it on XOR, and it doesn't work at all. I've tried everything: different activation functions, numbers of layers, neurons, epochs, batch sizes, optimizers... Every time the result is 1, 1, 1, 1 (accuracy = 0.5). Please help! What am I doing wrong?

from keras.models import Sequential
from keras.layers import Dense
from tensorflow import keras
import numpy as np

X = np.array([  [0,0],
                [0,1],
                [1,0],
                [1,1] ])
Y = np.array([[1,0,0,1]]).T

model = Sequential()
model.add(Dense(10, input_dim=2, activation='relu'))
model.add(Dense(10, activation='relu'))  
model.add(Dense(1, activation='softmax')) 

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics='accuracy')

# Training the model
model.fit(X, Y, epochs=100, batch_size=len(X))

# Prediction
predictions = model.predict(X)
print(predictions)

I noticed that the left side of the output always shows 1/1, but I'd expect something like 4/4. Could that be the reason? I can't figure out how to fix it...
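For reference, the 1/1 is probably not the problem: it counts batches (steps) per epoch, not samples. With `batch_size=len(X)` the whole 4-sample dataset fits in one batch, so each epoch runs exactly one step. A minimal sketch of that arithmetic:

```python
import math

n_samples = 4   # size of the XOR dataset
batch_size = 4  # batch_size=len(X) in the fit() call above

# Keras reports ceil(n_samples / batch_size) steps per epoch
steps_per_epoch = math.ceil(n_samples / batch_size)
print(steps_per_epoch)  # 1
```

With `batch_size=1` the same formula would give 4 steps per epoch, i.e. the 4/4 display.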

Tail of the output:

...
...
Epoch 97/100
1/1 [==============================] - 0s 1ms/step - loss: 0.0000e+00 - accuracy: 0.5000
Epoch 98/100
1/1 [==============================] - 0s 1ms/step - loss: 0.0000e+00 - accuracy: 0.5000
Epoch 99/100
1/1 [==============================] - 0s 1ms/step - loss: 0.0000e+00 - accuracy: 0.5000
Epoch 100/100
1/1 [==============================] - 0s 1ms/step - loss: 0.0000e+00 - accuracy: 0.5000
1/1 [==============================] - 0s 165ms/step - loss: 0.0000e+00 - accuracy: 0.5000
[0.0, 0.5]
[[1.]
[1.]
[1.]
[1.]]
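A likely culprit, for reference: a softmax over a single output unit always normalizes to exactly 1.0, whatever the logit, so the final `Dense(1, activation='softmax')` layer above can only ever predict 1 (and the loss stays at 0). A NumPy sketch of that behavior:

```python
import numpy as np

def softmax(z):
    # standard softmax, with the usual max-shift for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

# with a single output unit, the one exponential is divided by itself
for logit in (-5.0, 0.0, 3.7):
    print(softmax(np.array([logit])))  # always [1.]
```

This is consistent with the fix in the working net below: a `sigmoid` output with `binary_crossentropy` for a single binary output.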

Thank you very much to all!

Here is the working net below. Strangely, it takes a very long time to train! I remember doing the same task without Keras several years ago, and training was almost instant (without any GPU, of course). But here I use Adam optimization (and with the "fast" relu I only managed a 4-layer net). It seems those techniques have the opposite effect on such simple tasks.

from keras.models import Sequential
from keras import initializers
 
from keras.layers import Dense
from tensorflow import keras
import numpy as np
 
X = np.array([0,0,
              0,1,
              1,0,
              1,1] )
 
X = X.reshape(4,2).astype("float32")
 
Y = np.array([1,
              0,
              0,
              1] )
Y = Y.reshape(4,1).astype("float32")
 
init_2 = initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=12345)
 
model = Sequential()
model.add(Dense(4, input_dim=2, activation='sigmoid', kernel_initializer=init_2, bias_initializer=init_2))
model.add(Dense(1, activation='sigmoid', kernel_initializer=init_2, bias_initializer=init_2)) 
 
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
 
 
# Training the model
model.fit(X, Y, epochs=7000, batch_size=4, verbose=0)
 
scores = model.evaluate(X, Y)
print(scores)
 
# Prediction
predictions = model.predict(X)
print(predictions)
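Note that `predict` returns sigmoid probabilities, not hard 0/1 labels; thresholding at 0.5 recovers the XOR outputs. A sketch using hypothetical prediction values (not actual output from the run above):

```python
import numpy as np

# hypothetical sigmoid outputs from a trained net, shape (4, 1)
predictions = np.array([[0.98], [0.03], [0.02], [0.97]])

# threshold at 0.5 and flatten to a vector of class labels
labels = (predictions > 0.5).astype(int).ravel()
print(labels.tolist())  # [1, 0, 0, 1]
```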
