
encog - difference between train.getError() and network.calculateError(dataset)

I have used both functions while training a network, and both give approximately the same value for each epoch, but they are not exactly the same. I used network.calculateError(trainingdata).

What is the difference between the getError() function and the calculateError() function? Does getError() give the MSE for one epoch during training?
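
For reference, this is roughly the kind of loop I am running. It is a minimal sketch assuming the standard Encog 3 XOR setup; the class name, layer sizes, and epoch count are only illustrative:

    import org.encog.Encog;
    import org.encog.engine.network.activation.ActivationSigmoid;
    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.layers.BasicLayer;
    import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

    public class ErrorComparison {
        public static void main(String[] args) {
            // XOR truth table as the training data
            double[][] input = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };
            double[][] ideal = { {0}, {1}, {1}, {0} };
            MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

            // Simple 2-3-1 feed-forward network
            BasicNetwork network = new BasicNetwork();
            network.addLayer(new BasicLayer(null, true, 2));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
            network.getStructure().finalizeStructure();
            network.reset();

            ResilientPropagation train = new ResilientPropagation(network, trainingSet);

            for (int epoch = 1; epoch <= 100; epoch++) {
                train.iteration();
                // Error measured during iteration(), i.e. with the weights as they
                // were BEFORE this epoch's update was applied
                double trainError = train.getError();
                // Error measured over the same data set with the weights AFTER the update
                double networkError = network.calculateError(trainingSet);
                System.out.printf("Epoch %d: train.getError()=%.6f, network.calculateError()=%.6f%n",
                        epoch, trainError, networkError);
            }

            Encog.getInstance().shutdown();
        }
    }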

train.getError(): the error is calculated BEFORE the training update is applied by the call to iteration(). See the documentation for BasicTraining.getError().

network.calculateError(), on the other hand, evaluates the network as it currently stands against the supplied data set, using Encog's regression error calculation. This is why the two values differ slightly.
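
To make the ordering concrete, here is a hedged sketch reusing network, trainingSet, and train from the loop in the question. For a batch trainer such as RPROP, the value reported by getError() after an iteration should closely match calculateError() taken just before that iteration, since both are measured with the pre-update weights; calculateError() taken afterwards reflects the newly updated weights:

    // Assumes 'network', 'trainingSet' and 'train' as in the sketch above.
    double before   = network.calculateError(trainingSet); // MSE with the current (pre-update) weights
    train.iteration();                                      // measures the error, then updates the weights
    double reported = train.getError();                     // error from the pre-update pass, ~= 'before'
    double after    = network.calculateError(trainingSet);  // MSE with the new (post-update) weights
    System.out.printf("before=%.6f reported=%.6f after=%.6f%n", before, reported, after);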
