
Neural network in R to predict stock return

I am using the neuralnet package, calling the neuralnet function to train on my data and compute to predict.

x <- neuralnet(X15 ~ X1 + X2 + X3 + X8, data = norm_ind[1:15000, ],
               hidden = 2, act.fct = "tanh", linear.output = TRUE)
pr <- compute(x, testdata)

The problem I am facing is that pr$net.result is almost constant for all data points.

I am predicting the return of a stock and providing the stock's real return one day ahead as the target, i.e. X15 in the formula. The output I am getting is almost constant, as you can see below. Could anyone tell me what needs to be done?

1084 0.00002217204168
1085 0.00002217204168
1086 0.00002217204168
1087 0.00002217204168
1088 0.00002217204168
1089 0.00002217204168
1090 0.00002217204168
1091 0.00002217204168
1092 0.00002217204168
1093 0.00002217204168
1094 0.00002217204168
1095 0.00002217204168
1096 0.00002217204168
1097 0.00002217204168
1098 0.00002217204168
1099 0.00002217204168
1100 0.00002217204168

Before training a neural network via neuralnet, it is strongly advised to scale your data:

learn <- scale(learn)
# be honest and use the mean and scaling inferred from the training set -
# the test set could in principle contain only one element causing an incorrect scaling
test <- scale(test, center = attributes(learn)$`scaled:center`, scale = attributes(learn)$`scaled:scale`)
model <- neuralnet(formula, learn, ...)
compute(model, test)$net.result
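
Applied to the question's setup, a minimal sketch might look like the following (assuming norm_ind is a numeric data frame and everything beyond row 15000 serves as the test set; the column selection and back-transformation are illustrative, not tested against the asker's data):

library(neuralnet)

# split as in the question: first 15000 rows for training, the rest for testing
train_raw <- norm_ind[1:15000, ]
test_raw  <- norm_ind[-(1:15000), ]

# scale the training set, then reuse its centring/scaling for the test set
train <- scale(train_raw)
test  <- scale(test_raw,
               center = attr(train, "scaled:center"),
               scale  = attr(train, "scaled:scale"))

fit <- neuralnet(X15 ~ X1 + X2 + X3 + X8, data = as.data.frame(train),
                 hidden = 2, act.fct = "tanh", linear.output = TRUE)

# compute() expects only the input columns, in the order used in the formula
pred_scaled <- compute(fit, as.data.frame(test)[, c("X1", "X2", "X3", "X8")])$net.result

# predictions come out on the scaled target's scale; map back to raw returns
pred <- pred_scaled * attr(train, "scaled:scale")["X15"] +
        attr(train, "scaled:center")["X15"]

Scaling the target as well means the network sees values of comparable magnitude on both sides; the last two lines simply undo that scaling so the predictions are raw returns again.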

Neural networks are sensitive to shifting and scaling of the data. Additionally, the initial weights are chosen randomly from a distribution similar to a standard normal.

See, for example, chapter 3.2, "Preprocessing" (and much more) in an excellent paper by Yoshua Bengio [1].

Modern update: modern networks usually address this sensitivity by using normalization layers, possibly with trained parameters. The most well-known and popular is Batch Normalization [2].

[1] http://arxiv.org/abs/1206.5533

[2] https://en.wikipedia.org/wiki/Batch_normalization

I am having a similar problem, and I think that it may be due to the problem of local minima in traditional neural networks. You may have to go beyond the neuralnet package to get what you want.

The most likely problem is that you have too many input variables for the amount of training data available.

Here is good information concerning this topic:

https://stats.stackexchange.com/questions/65292/r-neuralnet-compute-give-a-constant-answer

I am not sure whether this is the cause, but having only 2 hidden nodes may lead to this problem.
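
For example, a quick variation on the question's call with a larger architecture (untested and purely illustrative) would be:

# same formula as in the question, but with two hidden layers of 5 and 3 units
x <- neuralnet(X15 ~ X1 + X2 + X3 + X8, data = norm_ind[1:15000, ],
               hidden = c(5, 3), act.fct = "tanh", linear.output = TRUE)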

Try setting the learningrate argument in your neuralnet call to something like learningrate = 0.01.

The default is NULL, and I have found this to cause a similar problem when carrying out similar tests using nnet().
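
A minimal sketch of that suggestion follows; note that in neuralnet the learningrate argument is only used by the traditional backpropagation algorithm, so algorithm = "backprop" is set explicitly here (whether this helps for this particular data set is untested):

# traditional backpropagation with an explicit learning rate
x <- neuralnet(X15 ~ X1 + X2 + X3 + X8, data = norm_ind[1:15000, ],
               hidden = 2, act.fct = "tanh", linear.output = TRUE,
               algorithm = "backprop", learningrate = 0.01)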
