Encog Neural Network Error Never Changing
I'm getting started with neural networks. I have adapted the XOR example provided with Encog for my own purposes, but when I run it the error never changes.
The function I'm trying to approximate takes 4 doubles and outputs 1 double. The 4 inputs are always positive; the output can be negative or positive (the majority are positive). For starters I am using 50 records to train the network.
Working XOR (the error goes down with each iteration):
public static double[][] XORInput = {
new[] {0.0, 0.0},
new[] {1.0, 0.0},
new[] {0.0, 1.0},
new[] {1.0, 1.0}
};
public static double[][] XORIdeal = {
new[] {0.0},
new[] {1.0},
new[] {1.0},
new[] {0.0}
};
BasicNetwork network = new BasicNetwork();
network.AddLayer(new BasicLayer(null, true, 2));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
network.Structure.FinalizeStructure();
network.Reset();
IMLDataSet trainingData = new BasicMLDataSet(XORInput, XORIdeal);
IMLTrain train = new ResilientPropagation(network, trainingData);
Not working (the error never goes down):
BasicNetwork network = new BasicNetwork();
network.AddLayer(new BasicLayer(null, true, 4));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 6));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
network.Structure.FinalizeStructure();
network.Reset();
IMLDataSet trainingData = new BasicMLDataSet(myInput, myExpectedOutput);
IMLTrain train = new ResilientPropagation(network, trainingData);
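For completeness, both networks are trained with the standard Encog loop below (a sketch; the error threshold and epoch cap are arbitrary values, not from my actual run). With the second network, train.Error stays at the same value on every iteration.

```csharp
// Standard Encog training loop: iterate until the error is low
// enough or an epoch cap is reached.
int epoch = 1;
do
{
    train.Iteration();
    Console.WriteLine("Epoch #" + epoch + " Error: " + train.Error);
    epoch++;
} while (train.Error > 0.01 && epoch < 5000);
```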
A few sample records of the training data:
Input:
2.54, 3.15, 3.4, 1.73
5.3, 1.78, 3.9, 2.04
1.71, 5.4, 4.3, 2.26
1.62, 6.4, 4, 1.89
1.45, 8.4, 5.2, 2.14
Output:
5.59
11.05
6.89
10.4
-0.56
I believe the problem is that the activation function isn't firing. I thought it might be because ActivationSigmoid() is inappropriate for this problem, but I tried ActivationTANH() with exactly the same results.
The problem was that my values weren't being normalized.
To work with the activation functions, all of your inputs and outputs must lie between 0 and 1 (ActivationSigmoid) or between -1 and 1 (ActivationTANH). You need a function that normalizes your values into the required range.
This link was of great help to me:
http://www.heatonresearch.com/wiki/Range_Normalization
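The range normalization described there is just a linear rescale: f(x) = (x - dataLow) / (dataHigh - dataLow) * (normHigh - normLow) + normLow. A minimal sketch of it and its inverse (dataLow/dataHigh would come from scanning your own training set; the helper names here are my own, not Encog API):

```csharp
// Scale x from [dataLow, dataHigh] into [normLow, normHigh].
// For ActivationSigmoid use normLow = 0, normHigh = 1;
// for ActivationTANH use normLow = -1, normHigh = 1.
static double Normalize(double x, double dataLow, double dataHigh,
                        double normLow, double normHigh)
{
    return (x - dataLow) / (dataHigh - dataLow)
           * (normHigh - normLow) + normLow;
}

// Invert the mapping to read network predictions back in
// the original units of the training data.
static double Denormalize(double n, double dataLow, double dataHigh,
                          double normLow, double normHigh)
{
    return (n - normLow) / (normHigh - normLow)
           * (dataHigh - dataLow) + dataLow;
}
```

With my sample outputs above, the observed range is roughly [-0.56, 11.05], so for a sigmoid network -0.56 normalizes to 0.0 and 11.05 to 1.0.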