Tensorflow: ValueError: Shape must be rank 2 but is rank 3

I'm new to tensorflow and I'm trying to update some code for a bidirectional LSTM from an old version of tensorflow to the newest (1.0), but I get this error:

Shape must be rank 2 but is rank 3 for 'MatMul_3' (op: 'MatMul') with input shapes: [100,?,400], [400,2].

The error happens on pred_mod.

    _weights = {
        # Hidden layer weights => 2*n_hidden because of forward + backward cells
        'w_emb' : tf.Variable(0.2 * tf.random_uniform([max_features,FLAGS.embedding_dim], minval=-1.0, maxval=1.0, dtype=tf.float32),name='w_emb',trainable=False),
        'c_emb' : tf.Variable(0.2 * tf.random_uniform([3,FLAGS.embedding_dim],minval=-1.0, maxval=1.0, dtype=tf.float32),name='c_emb',trainable=True),
        't_emb' : tf.Variable(0.2 * tf.random_uniform([tag_voc_size,FLAGS.embedding_dim], minval=-1.0, maxval=1.0, dtype=tf.float32),name='t_emb',trainable=False),
        'hidden_w': tf.Variable(tf.random_normal([FLAGS.embedding_dim, 2*FLAGS.num_hidden])),
        'hidden_c': tf.Variable(tf.random_normal([FLAGS.embedding_dim, 2*FLAGS.num_hidden])),
        'hidden_t': tf.Variable(tf.random_normal([FLAGS.embedding_dim, 2*FLAGS.num_hidden])),
        'out_w': tf.Variable(tf.random_normal([2*FLAGS.num_hidden, FLAGS.num_classes]))}

    _biases = {
         'hidden_b': tf.Variable(tf.random_normal([2*FLAGS.num_hidden])),
         'out_b': tf.Variable(tf.random_normal([FLAGS.num_classes]))}


    #~ input placeholders
    seq_len = tf.placeholder(tf.int64,name="input_lr")
    _W = tf.placeholder(tf.int32,name="input_w")
    _C = tf.placeholder(tf.int32,name="input_c")
    _T = tf.placeholder(tf.int32,name="input_t")
    mask = tf.placeholder("float",name="input_mask")

    # Tensorflow LSTM cell requires 2x n_hidden length (state & cell)
    istate_fw = tf.placeholder("float", shape=[None, 2*FLAGS.num_hidden])
    istate_bw = tf.placeholder("float", shape=[None, 2*FLAGS.num_hidden])
    _Y = tf.placeholder("float", [None, FLAGS.num_classes])

    #~ transform into embeddings
    emb_x = tf.nn.embedding_lookup(_weights['w_emb'],_W)
    emb_c = tf.nn.embedding_lookup(_weights['c_emb'],_C)
    emb_t = tf.nn.embedding_lookup(_weights['t_emb'],_T)

    _X = tf.matmul(emb_x, _weights['hidden_w']) + tf.matmul(emb_c, _weights['hidden_c']) + tf.matmul(emb_t, _weights['hidden_t']) + _biases['hidden_b']

    inputs = tf.split(_X, FLAGS.max_sent_length, axis=0, num=None, name='split')

    lstmcell = tf.contrib.rnn.BasicLSTMCell(FLAGS.num_hidden, forget_bias=1.0, 
    state_is_tuple=False)

    bilstm = tf.contrib.rnn.static_bidirectional_rnn(lstmcell, lstmcell, inputs, 
    sequence_length=seq_len, initial_state_fw=istate_fw, initial_state_bw=istate_bw)


    pred_mod = [tf.matmul(item, _weights['out_w']) + _biases['out_b'] for item in bilstm]

Any help appreciated.

For anyone encountering this issue in the future, the snippet above should not be used.

From the tf.contrib.rnn.static_bidirectional_rnn v1.1 documentation:

Returns:

A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length T list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.

The list comprehension above expects LSTM outputs, and the correct way to get those is this:

outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(lstmcell, lstmcell, ...)
pred_mod = [tf.matmul(item, _weights['out_w']) + _biases['out_b'] 
            for item in outputs]

This will work, because each item in outputs has the shape [batch_size, 2 * num_hidden] and can be multiplied with the weights by tf.matmul().
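The rank requirement behind the error can be checked by hand. Below is a small pure-Python sketch (not TF code; `matmul_shape` is a hypothetical helper) of the rank-2 rule that TF 1.x `tf.matmul` enforces for plain, non-batched tensors:

```python
def matmul_shape(a_shape, b_shape):
    """Result shape of a strict rank-2 matrix multiply, mimicking how
    TF 1.x tf.matmul validates plain (non-batched) operands."""
    if len(a_shape) != 2 or len(b_shape) != 2:
        raise ValueError(
            "Shape must be rank 2 but is rank %d"
            % max(len(a_shape), len(b_shape)))
    if a_shape[1] != b_shape[0]:
        raise ValueError("inner dimensions must agree")
    return (a_shape[0], b_shape[1])

# The shapes from the error message: [100, ?, 400] is rank 3, so it fails.
try:
    matmul_shape((100, None, 400), (400, 2))
except ValueError as e:
    print(e)  # Shape must be rank 2 but is rank 3

# A single timestep output of shape [batch_size, 2 * num_hidden] is fine:
print(matmul_shape((100, 400), (400, 2)))  # (100, 2)
```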


Add-on: from tensorflow v1.2+, the recommended function to use lives in another package: tf.nn.static_bidirectional_rnn. The returned tensors are the same, so the code doesn't change much:

outputs, _, _ = tf.nn.static_bidirectional_rnn(lstmcell, lstmcell, ...)
pred_mod = [tf.matmul(item, _weights['out_w']) + _biases['out_b'] 
            for item in outputs]
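As a shape sanity check, the corrected comprehension can be mimicked with NumPy, using made-up sizes (T=5, batch_size=100, num_hidden=200, num_classes=2 standing in for the FLAGS values):

```python
import numpy as np

T, batch_size, num_hidden, num_classes = 5, 100, 200, 2

# Mimic the bidirectional RNN outputs: a length-T list of
# [batch_size, 2 * num_hidden] arrays (forward + backward concatenated).
outputs = [np.zeros((batch_size, 2 * num_hidden)) for _ in range(T)]

out_w = np.zeros((2 * num_hidden, num_classes))
out_b = np.zeros(num_classes)

# Each per-timestep product is a plain rank-2 matmul, so it is well defined.
pred_mod = [item @ out_w + out_b for item in outputs]

print([p.shape for p in pred_mod])  # T entries of (batch_size, num_classes)
```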
