Getting flat error curves while training when deep learning with BP neural nets

I always get flat error curves while training conventional BP neural networks for deep learning. I am using a Keras Sequential model with the Adam optimizer. The network reaches about 80% accuracy on both training and testing. Can anyone explain why the error curves are flat (see attached figure)? Also, is there any way to improve my results?

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
  model = keras.Sequential()
  # Input shape is the number of feature columns in the normalized training data.
  model.add(layers.Dense(128, activation=tf.nn.relu, input_shape=[len(normed_train_data.keys())]))
  model.add(layers.Dense(128, activation=tf.nn.relu))
  model.add(layers.Dense(4))
  model.compile(loss='mean_squared_error', optimizer='Adam', metrics=['mae', 'mse', 'accuracy'])
  return model

import pandas as pd
import matplotlib.pyplot as plt

def plot_history(history):
  # Plot training and validation mean absolute error against epoch.
  hist = pd.DataFrame(history.history)
  hist['epoch'] = history.epoch
  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Abs Error [per]')
  plt.plot(hist['epoch'], hist['mean_absolute_error'], label='Train Error')
  plt.plot(hist['epoch'], hist['val_mean_absolute_error'], label='Val Error')
  plt.legend()
  plt.ylim([0, 200])
  plt.show()

And in the main function:

model = build_model()
model.summary()
# EPOCHS and PrintDot (a custom progress callback) are defined elsewhere in the script.
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS, validation_split=0.2, verbose=0, callbacks=[PrintDot()])
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plot_history(history)

Error plots: (figure attached)

Error plot with reduced learning rate: (figure attached)

It is difficult to assess without more information about your data; can you share a sample? But I'd hazard a guess that your model overfits very quickly. Things that you can try:

  • model simplification -- try removing one layer, or using fewer units for starters
  • a different optimizer -- try SGD with different learning rates (see the sketch after this list)
  • different metrics (try removing them one by one)
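
As a minimal sketch of the first two points (assuming the same normed_train_data / train_labels variables and the 4-output regression setup from the question; build_simpler_model is just an illustrative name), a smaller network compiled with plain SGD and an explicit learning rate could look like this:

from tensorflow import keras
from tensorflow.keras import layers

def build_simpler_model(learning_rate=0.01):
  # One hidden layer with fewer units instead of two layers of 128.
  model = keras.Sequential([
      layers.Dense(64, activation='relu',
                   input_shape=[len(normed_train_data.keys())]),
      layers.Dense(4),
  ])
  # learning_rate is a value to sweep, e.g. 0.1, 0.01, 0.001.
  model.compile(loss='mean_squared_error',
                optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
                metrics=['mae'])  # keep only the metric you actually plot
  return model

Fitting it the same way as in the question (model.fit(normed_train_data, train_labels, epochs=EPOCHS, validation_split=0.2)) lets you compare its validation MAE curve against the original model's.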
