Getting flat error curves while training a deep BP neural network
I always get flat error curves when training conventional BP neural networks for deep learning. I am using a Keras Sequential model with the Adam optimizer. The network reaches about 80% accuracy on both the training and test data. Can anyone explain why the error curves are flat (see attached figure)? Also, is there any way to improve my results?
def build_model():
    model = keras.Sequential()
    # input_shape must be a tuple; only the first layer needs it
    model.add(layers.Dense(128, activation=tf.nn.relu,
                           input_shape=(len(normed_train_data.keys()),)))
    model.add(layers.Dense(128, activation=tf.nn.relu))
    model.add(layers.Dense(4))
    # full metric names so they match the history keys used in plot_history;
    # note that 'accuracy' is not a meaningful metric for a regression loss
    model.compile(loss='mean_squared_error', optimizer='Adam',
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model
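The original snippet fails with a syntax error because input_shape is passed a bare integer with a stray bracket; Keras expects a shape tuple such as (n_features,). A minimal self-contained check that the corrected model builds, using a hypothetical feature count in place of len(normed_train_data.keys()):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 10  # hypothetical stand-in for len(normed_train_data.keys())

model = keras.Sequential()
# the shape tuple (N_FEATURES,) tells Keras each sample is a flat vector
model.add(layers.Dense(128, activation=tf.nn.relu, input_shape=(N_FEATURES,)))
model.add(layers.Dense(128, activation=tf.nn.relu))
model.add(layers.Dense(4))  # 4 continuous outputs, linear activation
model.compile(loss='mean_squared_error', optimizer='Adam',
              metrics=['mean_absolute_error', 'mean_squared_error'])
```

With the tuple in place, model.output_shape is (None, 4) and the model compiles without error.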
def plot_history(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch
    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [per]')
    plt.plot(hist['epoch'], hist['mean_absolute_error'], label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_absolute_error'], label='Val Error')
    plt.legend()
    plt.ylim([0, 200])
    plt.show()
And in the main function:
model = build_model()
model.summary()
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
                    validation_split=0.2, verbose=0, callbacks=[PrintDot()])
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plot_history(history)
Error plots: [attached figure]
It is difficult to assess this without more information about your data; can you share a sample? My guess would be that your model overfits very quickly. Things you can try:
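To make the overfitting suggestion concrete, here is a hedged sketch of the same regression architecture with standard countermeasures added: Dropout layers, L2 weight decay, and an EarlyStopping callback that halts training once validation loss stops improving. The feature count and hyperparameter values (dropout rate, L2 strength, patience) are illustrative assumptions, not tuned values:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_regularized_model(n_features):
    model = keras.Sequential([
        layers.Dense(128, activation='relu', input_shape=(n_features,)),
        layers.Dropout(0.3),  # randomly zero 30% of units to reduce co-adaptation
        layers.Dense(128, activation='relu',
                     kernel_regularizer=keras.regularizers.l2(1e-4)),  # L2 weight decay
        layers.Dropout(0.3),
        layers.Dense(4),  # linear output for 4 regression targets
    ])
    model.compile(loss='mean_squared_error', optimizer='Adam',
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model

# Stop when val_loss has not improved for 10 epochs and keep the best weights;
# pass this to model.fit(..., callbacks=[early_stop])
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                           restore_best_weights=True)

model = build_regularized_model(10)  # 10 is a placeholder feature count
```

You could also try reducing the layer width (e.g. 64 units) or gathering more training data; either tends to narrow the gap between training and validation error.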