Theano ValueError: y_i value out of bounds
I am implementing a neural network using Theano. My input layer has 64 nodes, the hidden layer has 500 nodes, and the output layer has 1 node.
My input is a (1,000,000 × 64) matrix, and my output is a (1,000,000 × 1) matrix.
I am following this tutorial: http://deeplearning.net/tutorial/mlp.html#mlp
I am getting an out-of-bounds error during training; the code below shows where it happens. Please help!
## LOADING THE DATA FROM TXT FILE
train_set_x = numpy.loadtxt('x.txt', delimiter=',')
train_set_y = numpy.loadtxt('y.txt', delimiter=',')
train_set_x = theano.shared(numpy.asarray(train_set_x,
                                          dtype=theano.config.floatX),
                            borrow=True)
train_set_y = theano.shared(numpy.asarray(train_set_y,
                                          dtype=theano.config.floatX),
                            borrow=True)
train_set_y = T.cast(train_set_y, 'int32')
....
## TRAINING FUNCTION
train_model = theano.function(
    inputs=[index],
    outputs=cost,
    updates=updates,
    givens={
        x: train_set_x[index * batch_size: (index + 1) * batch_size],
        y: train_set_y[index * batch_size: (index + 1) * batch_size]
    }
)
....
## TRAINING
while (epoch < n_epochs) and (not done_looping):
    epoch = epoch + 1
    for minibatch_index in range(n_train_batches):
        minibatch_avg_cost = train_model(minibatch_index)
ERROR:
File "NN_main.py", line 276, in test_mlp
minibatch_avg_cost = train_model(minibatch_index)
File "C:\Users\wei\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "C:\Users\wei\Anaconda2\lib\site-packages\theano\gof\link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "C:\Users\wei\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 859, in __call__
outputs = self.fn()
ValueError: y_i value out of bounds
Apply node that caused the error: CrossentropySoftmaxArgmax1HotWithBias(Dot22.0, b, Elemwise{Cast{int32}}.0)
Toposort index: 21
Inputs types: [TensorType(float64, matrix), TensorType(float64, vector), TensorType(int32, vector)]
Inputs shapes: [(20L, 1L), (1L,), (20L,)]
Inputs strides: [(8L, 8L), (8L,), (4L,)]
Inputs values: ['not shown', array([ 0.]), 'not shown']
Outputs clients: [[Sum{acc_dtype=float64}(CrossentropySoftmaxArgmax1HotWithBias.0)], [CrossentropySoftmax1HotWithBiasDx(Elemwise{Inv}[(0, 0)].0, CrossentropySoftmaxArgmax1HotWithBias.1, Elemwise{Cast{int32}}.0)], []]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "NN_main.py", line 332, in <module>
test_mlp()
File "NN_main.py", line 193, in test_mlp
+ L2_reg * classifier.L2_sqr
File "C:\wei\MyChessEngine\MyChessEngine\logistic_sgd.py", line 112, in negative_log_likelihood
return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
In case anyone is still running into the same error: I hit it today, and the cause was that I had labelled my classes starting from 1 instead of 0.
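A quick way to confirm this is to inspect the label range before training. A minimal sketch (the label values here are illustrative; in the question they would come from y.txt) that shifts 1-based labels down to the 0-based range Theano's cross-entropy op expects:

```python
import numpy as np

# Illustrative labels; suppose the classes were labelled 1 and 2 instead of 0 and 1.
y = np.array([1, 2, 1, 1, 2], dtype='int32')

# CrossentropySoftmaxArgmax1HotWithBias requires every label to lie in [0, n_out).
# If the smallest label is above 0, shift the whole array down to start at 0.
if y.min() > 0:
    y = y - y.min()

print(y.min(), y.max())  # now 0 and 1, valid for n_out = 2
```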
When you add a logistic regression layer to your MLP, the number of outputs must equal the number of classes you have. Since you are doing binary classification, your n_out must be 2, not 1.
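To see why n_out matters, here is a minimal NumPy sketch (not Theano's actual implementation) of the indexing that the cross-entropy op performs. With n_out = 1 the softmax output has a single column, so only label 0 is valid, and any label of 1 fails in the same way as Theano's "y_i value out of bounds":

```python
import numpy as np

def softmax_nll(scores, y):
    """Per-sample negative log-likelihood of a softmax classifier.

    scores: (batch, n_out) pre-softmax activations
    y:      (batch,) integer class labels
    Fails with IndexError when a label is >= n_out, mirroring
    Theano's ValueError.
    """
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    # Same indexing as T.log(p_y_given_x)[T.arange(y.shape[0]), y]
    return -np.log(p[np.arange(len(y)), y])

scores = np.zeros((4, 1))       # n_out = 1, as in the question
y = np.array([0, 1, 0, 1])      # label 1 is out of range for n_out = 1
try:
    softmax_nll(scores, y)
except IndexError:
    print("out of bounds")      # same failure mode as the Theano error

scores2 = np.zeros((4, 2))      # n_out = 2 fits binary labels 0/1
print(softmax_nll(scores2, y))  # valid: log(2) for every sample
```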