
Neural network in R to predict stock return

I am using the neuralnet package: the neuralnet function to train on my data and compute to make predictions.

x <- neuralnet(X15 ~ X1 + X2 + X3 + X8, norm_ind[1:15000, ],
               hidden = 2, act.fct = "tanh", linear.output = TRUE)
pr <- compute(x, testdata)

The problem I am facing is that pr$net.result is almost constant across all data points.

I am predicting stock returns, providing the stock's actual return one day ahead as the target, i.e. X15 in the formula. The output I am getting is almost constant, as you can see below. Could anyone tell me what needs to be done?

1084 0.00002217204168
1085 0.00002217204168
1086 0.00002217204168
1087 0.00002217204168
1088 0.00002217204168
1089 0.00002217204168
1090 0.00002217204168
1091 0.00002217204168
1092 0.00002217204168
1093 0.00002217204168
1094 0.00002217204168
1095 0.00002217204168
1096 0.00002217204168
1097 0.00002217204168
1098 0.00002217204168
1099 0.00002217204168
1100 0.00002217204168

Before training a neural network via neuralnet, it is strongly advised to scale your data:

library(neuralnet)

# scale the training set; scale() stores the column means and SDs as attributes
learn <- scale(learn)
# be honest and reuse the center and scale inferred from the training set -
# a test set could in principle contain a single element, making its own
# scaling meaningless
test <- scale(test,
              center = attr(learn, "scaled:center"),
              scale  = attr(learn, "scaled:scale"))
model <- neuralnet(formula, learn, ...)
compute(model, test)$net.result
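
Applied to the question's setup, the recipe might look like the following sketch (assuming norm_ind is a data frame holding the predictors X1, X2, X3, X8 and the target X15; the index split and column selection are illustrative, not part of the original code):

train_idx <- 1:15000
learn <- scale(norm_ind[train_idx, ])
test  <- scale(norm_ind[-train_idx, ],
               center = attr(learn, "scaled:center"),
               scale  = attr(learn, "scaled:scale"))
model <- neuralnet(X15 ~ X1 + X2 + X3 + X8, as.data.frame(learn),
                   hidden = 2, act.fct = "tanh", linear.output = TRUE)
# compute() expects only the covariate columns, in the training order
pr <- compute(model, as.data.frame(test)[, c("X1", "X2", "X3", "X8")])
# note: predictions are in scaled units of X15; invert with the stored
# center/scale attributes to recover raw returns
pr$net.result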

Neural networks are sensitive to shifting and scaling of the data. Additionally, the initial weights are drawn randomly from a distribution close to a standard normal. With tanh activations and unscaled inputs, the hidden units saturate, the gradients vanish, and the network collapses to predicting roughly the mean of the target, which is exactly the near-constant output shown in the question.
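
You can see the saturation directly in R (the input values are arbitrary examples):

tanh(c(0.1, 1, 10, 100, 1000))
# [1] 0.09966799 0.76159416 1.00000000 1.00000000 1.00000000
# anything with magnitude beyond a few units is squashed to the same value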

See, for example, Section 3.2, "Preprocessing" (and much more) in an excellent paper by Yoshua Bengio [1].

Modern update: modern networks typically address this sensitivity with normalization layers, often with learned parameters. The best-known and most popular is Batch Normalization [2].
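
For reference, the core batch-norm transform is simple to sketch in R (gamma and beta stand in for the learned scale and shift parameters; this is an illustration of the idea in [2], not part of the neuralnet API):

# normalize a feature over a mini-batch, then apply the learned scale/shift
batch_norm <- function(x, gamma = 1, beta = 0, eps = 1e-5) {
  mu     <- mean(x)
  sigma2 <- mean((x - mu)^2)
  gamma * (x - mu) / sqrt(sigma2 + eps) + beta
}
batch_norm(c(100, 101, 102, 103))  # roughly -1.34 -0.45 0.45 1.34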

[1] Yoshua Bengio, "Practical Recommendations for Gradient-Based Training of Deep Architectures": http://arxiv.org/abs/1206.5533

[2] https://en.wikipedia.org/wiki/Batch_normalization

I am having a similar problem, and I think it may be due to local minima in traditional neural networks. You may have to go beyond the neuralnet package to get what you want.

The most likely problem is that you have too many input variables for the amount of training data available.

Here is good information on this topic:

https://stats.stackexchange.com/questions/65292/r-neuralnet-compute-give-a-constant-answer

I am not sure whether this is the problem, but having only 2 hidden nodes could cause it.
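
If you want to experiment with network size, the hidden argument of neuralnet accepts a vector with one entry per hidden layer (the sizes below are arbitrary examples, not a recommendation):

x <- neuralnet(X15 ~ X1 + X2 + X3 + X8, norm_ind[1:15000, ],
               hidden = c(10, 5),  # two hidden layers of 10 and 5 neurons
               act.fct = "tanh", linear.output = TRUE)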

Try setting the learningrate argument in your neuralnet call to something like learningrate = 0.01.

The default is NULL, and I've found this to cause a similar problem when carrying out similar tests using nnet().
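
Note that, as far as I know, neuralnet only uses learningrate with traditional backpropagation; the default algorithm = "rprop+" ignores it. A sketch of the combination:

x <- neuralnet(X15 ~ X1 + X2 + X3 + X8, norm_ind[1:15000, ],
               hidden = 2,
               algorithm = "backprop",  # learningrate only applies here
               learningrate = 0.01,
               act.fct = "tanh", linear.output = TRUE)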
