
Tensorflow LSTM throws ValueError: Shape () must have rank at least 2

When trying to run my code, the following exception (ValueError) is thrown:

ValueError: Shape () must have rank at least 2

It is thrown for the following line:

states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)

Here is where cell is defined:

cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=True)

Looking at the rules for RNN tensor shapes, I can see that this is some kind of tensor dimension/shape problem. From what I can tell, it's not seeing the input to the BasicLSTMCell as a rank-2 matrix?

Full error:

/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 /Users/glennhealy/PycharmProjects/firstRNNTest/LSTM-RNN.py
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
Traceback (most recent call last):
  File "/Users/glennhealy/PycharmProjects/firstRNNTest/LSTM-RNN.py", line 42, in <module>
    states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py", line 1181, in static_rnn
    input_shape = first_input.get_shape().with_rank_at_least(2)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py", line 670, in with_rank_at_least
    raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape () must have rank at least 2

Process finished with exit code 1

Code:

state_size = 4
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=True)
states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)

Tensorflow 1.2.1, Python 3.6, NumPy

Update with more information:

Taking into account the advice given by @Maxim, I can see that the problem is my inputs_series, which is what causes the shape issue; however, I can't seem to apply his advice to my code.

Some more information that might help me understand how to fix this:

Would the following be a replacement for my BatchY and BatchX placeholders?

X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs, dtype=tf.float32)

So, would I have to make changes to the following to reflect that syntax?

batchX_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])
batchY_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length])

#unpacking the columns:
labels_series = tf.unstack(batchY_placeholder, axis=1)
inputs_series = tf.split(1, truncated_backprop_length, batchX_placeholder)

#Forward pass
cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=True)
states_series, current_state = tf.contrib.rnn.static_rnn(cell, inputs_series, init_state)

losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels) for logits, labels in zip(logits_series,labels_series)]
total_loss = tf.reduce_mean(losses)

Yes, the problem is with inputs_series. According to the error, it's a tensor with shape (), i.e., just a number.
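To make the error concrete, here is a minimal NumPy sketch (NumPy rather than TensorFlow, purely for illustration) of the difference between a shape-() scalar, which is what the error reports, and the rank-2 [batch_size, input_size] array that static_rnn expects for each time step:

```python
import numpy as np

# A scalar has shape () and rank 0 -- this is what the error message reports.
scalar = np.array(3.0)
print(scalar.shape, scalar.ndim)          # () 0

# static_rnn expects each time-step input to be rank 2: [batch_size, input_size].
step_input = np.zeros((5, 4))             # e.g. batch_size=5, input_size=4
print(step_input.shape, step_input.ndim)  # (5, 4) 2
```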

From the tf.nn.static_rnn documentation:

inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements.

In most cases, you want your inputs to be [seq_length, None, input_size], where:

  • seq_length is the sequence length, i.e., the number of LSTM cells (time steps).
  • None stands for the batch size (any).
  • input_size is the number of input features per cell.

So, make sure your placeholders (and the inputs_series derived from them) have the appropriate shape. Example:

X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_neurons)
output_seqs, states = tf.nn.static_rnn(basic_cell, X_seqs, dtype=tf.float32)
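The transpose-then-unstack step above is what turns the placeholder into the list of rank-2 tensors that static_rnn wants. A NumPy sketch of the same shape transformation (the sizes are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical sizes, for illustration only.
batch_size, n_steps, n_inputs = 5, 3, 4

# X mirrors the placeholder shape [batch_size, n_steps, n_inputs].
X = np.zeros((batch_size, n_steps, n_inputs))

# tf.transpose(X, perm=[1, 0, 2]) swaps the batch and time axes.
X_t = np.transpose(X, (1, 0, 2))     # shape (n_steps, batch_size, n_inputs)

# tf.unstack along axis 0 then yields one [batch_size, n_inputs] array per
# time step -- exactly the rank-2 inputs static_rnn expects.
X_seqs = [X_t[t] for t in range(n_steps)]
print(len(X_seqs), X_seqs[0].shape)  # 3 (5, 4)
```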

Update:

This is the wrong way to split the tensor:

# WRONG!
inputs_series = tf.split(1, truncated_backprop_length, batchX_placeholder)

You should do it like this instead (note the order of the arguments):

inputs_series = tf.split(batchX_placeholder, truncated_backprop_length, axis=1)
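np.split takes its arguments in the same order as the corrected tf.split call, so a NumPy sketch can show what the split produces (sizes are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical sizes, for illustration only.
batch_size, truncated_backprop_length = 5, 6

# Stand-in for batchX_placeholder: shape [batch_size, truncated_backprop_length].
batchX = np.zeros((batch_size, truncated_backprop_length))

# Mirrors tf.split(value, num_or_size_splits, axis=1): split along the time
# axis into one piece per step.
inputs_series = np.split(batchX, truncated_backprop_length, axis=1)
print(len(inputs_series), inputs_series[0].shape)  # 6 (5, 1)
```

Each piece has shape [batch_size, 1] -- rank 2, which satisfies static_rnn's requirement.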
