
Keras model doesn't seem to work

I have the following Keras model, and when I train it, it doesn't seem to learn anything. I asked around and got different suggestions, such as the weights not being initialised properly or back-propagation not happening. The model is:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), kernel_initializer='random_uniform', activation='relu', input_shape=(x1, x2, depth)))
model.add(MaxPool2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(128, activation='relu'))

model.add(Dense(3, activation='softmax'))

I even looked at this solution, but I don't seem to have made that mistake; I do have a softmax at the end. For your reference, here is the output of the training process:

Epoch 1/10
283/283 [==============================] - 1s 2ms/step - loss: 5.1041 - acc: 0.6254 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 2/10
283/283 [==============================] - 0s 696us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 3/10
283/283 [==============================] - 0s 717us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 4/10
283/283 [==============================] - 0s 692us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 5/10
283/283 [==============================] - 0s 701us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 6/10
283/283 [==============================] - 0s 711us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 7/10
283/283 [==============================] - 0s 707us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 8/10
283/283 [==============================] - 0s 708us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 9/10
283/283 [==============================] - 0s 703us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 10/10
283/283 [==============================] - 0s 716us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc

This is how I'm compiling it:

from keras import optimizers

sgd = optimizers.SGD(lr=0.001, decay=1e-4, momentum=0.05, nesterov=True)

model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
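As an aside, `categorical_crossentropy` with a 3-unit softmax output expects one-hot encoded labels rather than integer class indices. A minimal sketch of the encoding (the array `y` below is a hypothetical set of integer labels, not from the question):

```python
import numpy as np

# Hypothetical integer class labels for the 3 output classes.
y = np.array([0, 2, 1, 2])

# One-hot encode by indexing rows of the identity matrix
# (keras.utils.to_categorical does the same thing).
y_onehot = np.eye(3)[y]

print(y_onehot.shape)  # (4, 3)
```

If the labels are kept as integers instead, the loss should be `sparse_categorical_crossentropy`.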

Any suggestions? Is there something I'm missing? I have properly initialised the weights, and Keras should take care of back-propagation. What am I missing?

I found the solution: I had to normalise/scale the images for proper training. It's now training properly. Here's the link that helped me with it.
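The fix above can be sketched as follows. Raw 8-bit pixel values in [0, 255] produce very large activations, which can stall training; scaling them to [0, 1] before calling `model.fit` addresses that. The `x_train` array below is a hypothetical stand-in for the question's image data:

```python
import numpy as np

# Hypothetical training images: 8-bit pixel values in [0, 255].
x_train = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float32")

# Scale pixel values to [0, 1] before feeding them to the network.
x_train /= 255.0

print(x_train.min() >= 0.0 and x_train.max() <= 1.0)  # True
```

Another common variant is standardisation (subtracting the mean and dividing by the standard deviation); either way, the same transform must be applied to the validation and test images.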
