
classification with LSTM RNN in tensorflow, ValueError: Shape (1, 10, 5) must have rank 2

I am trying to design a simple LSTM in TensorFlow. I want to classify a data sequence into classes 1 through 10.

I have 10 time steps of data X. For now I take only one sequence, so my batch size = 1. A new sequence is generated in every epoch. For example, X is a numpy array like this:

X [[ 2.52413028  2.49449348  2.46520466  2.43625973  2.40765466  2.37938545
     2.35144815  2.32383888  2.29655379  2.26958905]]

To make it fit the LSTM input, I first convert it to a tensor and then reshape it to (batch_size, sequence_length, input_dimension):

X= np.array([amplitude * np.exp(-t / tau)])
print 'X', X

#Sorting out the input
train_input = X
train_input = tf.convert_to_tensor(train_input)
train_input = tf.reshape(train_input,[1,10,1])
print 'ti', train_input
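As a quick sanity check of that reshape, here is the same step with NumPy stand-ins (the amplitude and tau values are illustrative, not the question's actual parameters):

```python
import numpy as np

# Stand-in for the decaying sequence generated in the question.
t = np.linspace(0, 9, 10)
X = np.array([2.5 * np.exp(-t / 20.0)])
print(X.shape)            # (1, 10)

# Reshape to (batch_size, sequence_length, input_dimension), as the LSTM expects.
train_input = X.reshape(1, 10, 1)
print(train_input.shape)  # (1, 10, 1)
```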

For the output, I generate a one-hot encoded label in the class range 1 to 10.

#------------sorting out the output
train_output= [int(math.ceil(tau/resolution))]
train_output= one_hot(train_output, num_labels=10)
print 'label', train_output

train_output = tf.convert_to_tensor(train_output)

>>label [[ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]]
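The `one_hot` helper used above is not shown in the question; a minimal NumPy version that produces the printed label (assuming labels are 0-indexed integers) might look like:

```python
import numpy as np

def one_hot(labels, num_labels):
    """Return a (len(labels), num_labels) array with a 1.0 at each label index."""
    out = np.zeros((len(labels), num_labels))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([1], num_labels=10))
# [[0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]
```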

Then I create placeholders for the TensorFlow graph, make the LSTM cell, and define the weights and bias:

data = tf.placeholder(tf.float32, shape= [batch_size,len(t),1])
target = tf.placeholder(tf.float32, shape = [batch_size, num_classes])

cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
output, state = rnn.dynamic_rnn(cell, data, dtype=tf.float32)

weight = tf.Variable(tf.random_normal([batch_size, num_classes, 1])),
bias = tf.Variable(tf.random_normal([num_classes]))

#training
prediction = tf.nn.softmax(tf.matmul(output,weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(prediction))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)

This is the code I have written so far, and I get an error at the training step. Is it related to the input shape? Here is the traceback:

Traceback (most recent call last):

  File "/home/raisa/PycharmProjects/RNN_test1/test3.py", line 66, in <module>
prediction = tf.nn.softmax(tf.matmul(output,weight) + bias)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1036, in matmul
name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 911, in _mat_mul
transpose_b=transpose_b, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2156, in create_op
set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1612, in set_shapes_for_outputs
shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/common_shapes.py", line 81, in matmul_shape
a_shape = op.inputs[0].get_shape().with_rank(2)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 625, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (1, 10, 5) must have rank 2

Looking at your code, the output of your rnn should have dimension batch_size x 1 x num_hidden, while your w has dimension batch_size x num_classes x 1, but you want the product of the two to be batch_size x num_classes.

Can you try output = tf.reshape(output, [batch_size, num_hidden]) and weight = tf.Variable(tf.random_normal([num_hidden, num_classes])), and let me know how that goes?
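As a shape check of that suggestion, using NumPy arrays as stand-ins for the tensors (batch_size=1, num_hidden=5, num_classes=10, matching the question's error message):

```python
import numpy as np

batch_size, num_hidden, num_classes = 1, 5, 10

output = np.random.randn(batch_size, 1, num_hidden)  # rnn output, rank 3 -> matmul fails
weight = np.random.randn(num_hidden, num_classes)    # suggested weight shape
bias = np.random.randn(num_classes)

output = output.reshape(batch_size, num_hidden)      # rank 2, as matmul requires
logits = output @ weight + bias
print(logits.shape)  # (1, 10), i.e. (batch_size, num_classes)
```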

If you are using TF >= 1.0, you can take advantage of the tf.contrib.rnn library and the OutputProjectionWrapper to add a fully connected layer to the output of your RNN. Something like:

# Network definition.
cell = tf.contrib.rnn.LSTMCell(num_hidden)
cell = tf.contrib.rnn.OutputProjectionWrapper(cell, num_classes)  # adds an output FC layer for you
output, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)

# Training.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=targets)
cross_entropy = tf.reduce_sum(cross_entropy)
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)

Note that I am using softmax_cross_entropy_with_logits instead of taking your prediction op and computing the cross entropy manually. It should be more efficient and numerically robust.

OutputProjectionWrapper basically does the same thing, but it may help alleviate some headaches.
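To see why the fused op is preferable, here is a NumPy sketch (not the TF implementation itself) comparing the manual softmax-then-log route from the question with the single log-sum-exp form that a fused cross-entropy can use, which avoids taking log of a possibly-underflowed softmax:

```python
import numpy as np

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[0.0, 1.0, 0.0]])  # one-hot target

# Manual route (what the question's code does): softmax, then -sum(labels * log(probs)).
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
manual = -(labels * np.log(probs)).sum()

# Fused route: cross-entropy rewritten as logsumexp(logits) - sum(labels * logits),
# computed in one step instead of two lossy ones.
fused = (np.log(np.exp(logits).sum(axis=1)) - (labels * logits).sum(axis=1)).sum()

print(np.isclose(manual, fused))  # True: both compute the same loss
```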
