
How to pass 3d Tensor to tensorflow RNN embedding_rnn_seq2seq

I'm trying to feed sentences in which each word has a word2vec representation. How can I do this in TensorFlow seq2seq models?

Suppose the variable

enc_inp = [tf.placeholder(tf.int32, shape=(None, 10), name="inp%i" % t)
           for t in range(seq_length)]

which is meant to have dimensions [num_of_observations or batch_size x word_vec_representation x sentence_length].
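Note that `embedding_rnn_seq2seq` expects something different: a list with one 1-D batch of integer token IDs per time step, with no feature axis at all. A minimal sketch of the expected feed shape (the toy sizes and random IDs here are assumptions for illustration):

```python
import numpy as np

seq_length, batch_size = 5, 4

# What embedding_rnn_seq2seq actually expects: one 1-D int32 batch of
# token IDs per time step (shape [batch_size]), not word2vec vectors.
# Vocabulary size 100 is an arbitrary toy value.
enc_inp_ids = [np.random.randint(0, 100, size=(batch_size,)).astype(np.int32)
               for _ in range(seq_length)]

for step in enc_inp_ids:
    assert step.shape == (batch_size,)  # no feature axis: the embedding adds it
```

Feeding 2-D (or 3-D) tensors per time step is what triggers the `Linear is expecting 2D arguments` error below.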

When I pass it to embedding_rnn_seq2seq

decode_outputs, decode_state = seq2seq.embedding_rnn_seq2seq(
    enc_inp, dec_inp, stacked_lstm, 
    seq_length, seq_length, embedding_dim)

this error occurs:

ValueError: Linear is expecting 2D arguments: [[None, 10, 50], [None, 50]]

There is also a more complex problem: how can I pass a vector, not a scalar, as input to the first cell of my RNN?

At the moment the processing looks like this (for any sequence):

  1. get the first value of the sequence (a scalar)
  2. compute the first-layer RNN's first embedding cell output
  3. compute the first-layer RNN's second embedding cell output
  4. etc.

But this is what is needed:

  1. get the first value of the sequence (a vector)
  2. compute the first-layer RNN's first cell output (an ordinary perceptron-style computation whose input is a vector)
  3. compute the first-layer RNN's second cell output (likewise, with a vector input)
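If the inputs are already vectors (e.g. precomputed word2vec), each RNN cell step is exactly that perceptron-style computation. A minimal numpy sketch of one vanilla RNN step, with hypothetical weight names (a real cell such as `BasicRNNCell` owns these weights internally):

```python
import numpy as np

batch_size, input_dim, hidden_dim = 4, 50, 32
rng = np.random.default_rng(0)

x = rng.standard_normal((batch_size, input_dim)).astype(np.float32)  # vector input
h = np.zeros((batch_size, hidden_dim), dtype=np.float32)             # previous state

# Hypothetical weights for illustration only.
W_x = rng.standard_normal((input_dim, hidden_dim)).astype(np.float32)
W_h = rng.standard_normal((hidden_dim, hidden_dim)).astype(np.float32)
b = np.zeros(hidden_dim, dtype=np.float32)

# One step: an affine map of the input and previous state, then a nonlinearity.
h_next = np.tanh(x @ W_x + h @ W_h + b)
assert h_next.shape == (batch_size, hidden_dim)
```

This is why feeding vectors requires a model that skips the embedding lookup (e.g. `basic_rnn_seq2seq`) rather than `embedding_rnn_seq2seq`.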

The main point is that the embedding seq2seq models perform word embedding internally. Here is a reddit question and answer.
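In other words, `embedding_rnn_seq2seq` owns an embedding matrix and converts the integer IDs you feed it into dense vectors itself; the lookup is equivalent to row indexing. A toy numpy illustration (all sizes are assumptions):

```python
import numpy as np

vocab_size, embedding_dim = 100, 50
rng = np.random.default_rng(0)

# Stand-in for the embedding matrix the model creates internally;
# tf.nn.embedding_lookup(embedding, ids) is equivalent to embedding[ids].
embedding = rng.standard_normal((vocab_size, embedding_dim)).astype(np.float32)

token_ids = np.array([3, 17, 42], dtype=np.int32)  # one time step, batch of 3
vectors = embedding[token_ids]                      # IDs -> dense vectors
assert vectors.shape == (3, embedding_dim)
```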

Also, if somebody wants to use a pretrained Word2Vec model, there are ways to do it; see:
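One common approach (an assumption here, since the original links are not preserved): build the seq2seq model as usual, then overwrite its embedding variable with the pretrained word2vec matrix after variable initialization. In TF1 terms that would be running an assign op such as `sess.run(embedding_var.assign(pretrained))`; sketched with numpy stand-ins:

```python
import numpy as np

# Hypothetical pretrained word2vec matrix: one row per vocabulary word.
pretrained = np.arange(12, dtype=np.float32).reshape(4, 3)  # vocab 4, dim 3

# Stand-in for the model's randomly initialized embedding variable.
embedding = np.zeros_like(pretrained)

# Equivalent of the assign op: copy the pretrained weights in.
embedding[:] = pretrained

assert np.array_equal(embedding[2], pretrained[2])  # lookups now return word2vec rows
```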

So this can be used not only for word embedding.
