
Tensorflow LSTM throws ValueError: Shape () must have rank at least 2

When trying to run, the following exception (ValueError) is thrown:

ValueError: Shape () must have rank at least 2

It is thrown for the following line:

states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)

This is where cell is defined:

cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=True)

Looking at the rules for RNN tensor shapes, I can see this is some kind of tensor dimension/shape problem. As far as I can tell, it isn't treating the BasicLSTMCell as a rank-2 matrix?

The full error:

/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 /Users/glennhealy/PycharmProjects/firstRNNTest/LSTM-RNN.py
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
Traceback (most recent call last):
  File "/Users/glennhealy/PycharmProjects/firstRNNTest/LSTM-RNN.py", line 42, in <module>
    states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py", line 1181, in static_rnn
    input_shape = first_input.get_shape().with_rank_at_least(2)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py", line 670, in with_rank_at_least
    raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape () must have rank at least 2

Process finished with exit code 1

The code:

state_size = 4
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=True)
states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)

TensorFlow 1.2.1, Python 3.6, NumPy
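One quick way to see where the rank check fails is to print the static shape of each element that static_rnn receives. A small debugging sketch (it assumes inputs_series is the list passed in the call above):

for i, t in enumerate(inputs_series):
    print(i, t.get_shape())  # shape () means rank 0; static_rnn needs each input to have rank >= 2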

Update with more information:

Taking into account the advice given by @Maxim, I can see that the problem is my inputs_series, which is causing the shape problem; however, I can't seem to get his suggestion working for my inputs_series.

Here is some more information that might help in understanding how to fix this:

Is the following a replacement for my BatchY and BatchX placeholders?

X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs, dtype=tf.float32)

So, do I have to change the following to reflect that syntax?

batchX_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
batchY_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])

#unpacking the columns:
labels_series = tf.unstack(batchY_placeholder, axis=1)
inputs_series = tf.split(1, truncated_backprop_length, batchX_placeholder)

#Forward pass
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=True)
states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)

losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels) for logits, labels in zip(logits_series,labels_series)]
total_loss = tf.reduce_mean(losses)

Yes, the problem is with inputs_series. According to the error, it is a tensor with shape (), i.e. just a number.

From the tf.nn.static_rnn documentation:

inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements.

In most cases, you want your inputs to be [seq_length, None, input_size], where:

  • seq_length is the sequence length, or the number of LSTM cells.
  • None stands for the batch size (any).
  • input_size is the number of features per cell.

So make sure your placeholders (and the inputs_series derived from them) have the appropriate shape. Example:

X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs, dtype=tf.float32)
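
As a sanity check, here is a minimal runnable version of that pattern; n_steps, n_inputs and n_neurons are made-up illustrative values, not ones from the question. Each element of X_seqs comes out as a rank-2 [batch_size, input_size] tensor, which is exactly what static_rnn expects:

import numpy as np
import tensorflow as tf

n_steps, n_inputs, n_neurons = 5, 3, 4  # illustrative sizes only

X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
# transpose to [n_steps, None, n_inputs] and unstack into a length-n_steps list
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
print(len(X_seqs), X_seqs[0].get_shape())  # 5 (?, 3) -> rank 2

basic_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(2, n_steps, n_inputs)  # a batch of 2 random sequences
    outs = sess.run(output_seqs, feed_dict={X: batch})
    print(len(outs), outs[0].shape)  # 5 outputs, one per step, each (2, 4)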

Update:

This is the wrong way to split the tensor:

# WRONG!
inputs_series = tf.split(1, truncated_backprop_length, batchX_placeholder)

You should do it this way instead (note the order of the arguments):

inputs_series = tf.split(batchX_placeholder, truncated_backprop_length, axis=1)
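
With that change, each element of inputs_series becomes a rank-2 tensor of shape [batch_size, 1], which satisfies the rank check in static_rnn. A quick shape check (batch_size and truncated_backprop_length here are example values, not the ones from the question):

import tensorflow as tf

batch_size, truncated_backprop_length = 5, 15  # example values only

batchX_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
inputs_series = tf.split(batchX_placeholder, truncated_backprop_length, axis=1)
print(len(inputs_series), inputs_series[0].get_shape())  # 15 tensors, each (5, 1) -> rank 2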
