
classification with LSTM RNN in tensorflow, ValueError: Shape (1, 10, 5) must have rank 2

I am trying to design a simple LSTM in tensorflow. I want to classify a sequence of data into classes from 1 to 10.

I have 10 time steps and data X. For now I am taking only one sequence, so my batch size = 1. At every epoch a new sequence is generated. For example, X is a numpy array like this -

X [[ 2.52413028  2.49449348  2.46520466  2.43625973  2.40765466  2.37938545
     2.35144815  2.32383888  2.29655379  2.26958905]]

To make it fit the LSTM input, I first convert it to a tensor and then reshape it to (batch_size, sequence_length, input_dimension) -

X= np.array([amplitude * np.exp(-t / tau)])
print 'X', X

#Sorting out the input
train_input = X
train_input = tf.convert_to_tensor(train_input)
train_input = tf.reshape(train_input,[1,10,1])
print 'ti', train_input

For the output, I am generating a one-hot encoded label within the class range of 1 to 10.

#------------sorting out the output
train_output= [int(math.ceil(tau/resolution))]
train_output= one_hot(train_output, num_labels=10)
print 'label', train_output

train_output = tf.convert_to_tensor(train_output)

>>label [[ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]]
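The one_hot helper is not shown in the question; a minimal numpy sketch of what such a helper might look like (assuming integer class labels indexed from 0, which matches the printed label above) would be:

import numpy as np

def one_hot(labels, num_labels):
    # Build one row per label and set the column matching the label to 1.
    encoded = np.zeros((len(labels), num_labels))
    for i, label in enumerate(labels):
        encoded[i, label] = 1.0
    return encoded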

Then I created the placeholders for the tensorflow graph, made the LSTM cell, and gave the weights and bias -

data = tf.placeholder(tf.float32, shape= [batch_size,len(t),1])
target = tf.placeholder(tf.float32, shape = [batch_size, num_classes])

cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
output, state = rnn.dynamic_rnn(cell, data, dtype=tf.float32)

weight = tf.Variable(tf.random_normal([batch_size, num_classes, 1]))
bias = tf.Variable(tf.random_normal([num_classes]))

#training
prediction = tf.nn.softmax(tf.matmul(output,weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(prediction))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)

I have written the code up to this point and I get an error at the training step. Is it related to the input shape? Here is the traceback -

Traceback (most recent call last):
  File "/home/raisa/PycharmProjects/RNN_test1/test3.py", line 66, in <module>
prediction = tf.nn.softmax(tf.matmul(output,weight) + bias)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1036, in matmul
name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 911, in _mat_mul
transpose_b=transpose_b, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2156, in create_op
set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1612, in set_shapes_for_outputs
shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/common_shapes.py", line 81, in matmul_shape
a_shape = op.inputs[0].get_shape().with_rank(2)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_shape.py", line 625, in with_rank
raise ValueError("Shape %s must have rank %d" % (self, rank))
ValueError: Shape (1, 10, 5) must have rank 2

Looking at your code, the output of your rnn would be of dimension batch_size x 1 x num_hidden, whereas your w is of dimension batch_size x num_classes x 1, but you want the product of the two to be batch_size x num_classes.

Can you try output = tf.reshape(output, [batch_size, num_hidden]) together with weight = tf.Variable(tf.random_normal([num_hidden, num_classes])) and let me know how it goes?
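A sketch of that suggestion in context. Note that dynamic_rnn actually returns output with shape batch_size x seq_len x num_hidden (here (1, 10, 5)), so a direct reshape to [batch_size, num_hidden] would fail; slicing out the last time step first is my assumption about the intent:

# Take the last of the len(t) time steps, giving a rank-2 tensor of
# shape (batch_size, num_hidden) that tf.matmul can accept.
last = tf.slice(output, [0, len(t) - 1, 0], [batch_size, 1, num_hidden])
output = tf.reshape(last, [batch_size, num_hidden])
weight = tf.Variable(tf.random_normal([num_hidden, num_classes]))
bias = tf.Variable(tf.random_normal([num_classes]))
prediction = tf.nn.softmax(tf.matmul(output, weight) + bias)  # (batch_size, num_classes)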

If you are using TF >= 1.0, you can take advantage of the tf.contrib.rnn library and the OutputProjectionWrapper to add a fully connected layer to the output of your RNN. Something like:

# Network definition.
cell = tf.contrib.rnn.LSTMCell(num_hidden)
cell = tf.contrib.rnn.OutputProjectionWrapper(cell, num_classes)  # adds an output FC layer for you
output, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)

# Training.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=targets)
cross_entropy = tf.reduce_sum(cross_entropy)
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)

Note that I am using softmax_cross_entropy_with_logits instead of using your prediction op and computing the cross entropy manually. It should be more efficient and robust.

The OutputProjectionWrapper basically does the same thing, but it may help ease some headaches.
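For completeness, a rough training loop around that graph. This is a sketch, not part of the answer: it assumes the data placeholder from the question, a targets placeholder of shape [batch_size, num_classes] defined before the loss, and a hypothetical generate_sequence() standing in for the asker's per-epoch data generation:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
        # generate_sequence() is hypothetical: it should return one input
        # array of shape (1, 10, 1) and its (1, num_classes) one-hot label.
        seq, label = generate_sequence()
        _, loss = sess.run([minimize, cross_entropy],
                           feed_dict={data: seq, targets: label})
        print 'epoch', epoch, 'loss', loss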
