
RNN layer with unequal input and output lengths in TF/Keras

Is it possible to get a variable output length from an RNN, i.e. input_seq_length != output_seq_length?

Here is an example showing the LSTM output shape. test_rnn_output_v1 uses the default settings and returns only the output for the last step; test_rnn_output_v2 returns the output for all steps. I need something like test_rnn_output_v2, but with an output shape of (None, variable_seq_length, rnn_dim), or at least (None, max_output_seq_length, rnn_dim).

from keras.layers import Input
from keras.layers import LSTM
from keras.models import Model


def test_rnn_output_v1():
    max_seq_length = 10
    n_features = 4
    rnn_dim = 64

    input = Input(shape=(max_seq_length, n_features))
    out = LSTM(rnn_dim)(input)

    model = Model(inputs=[input], outputs=out)

    print(model.summary())

    # input shape:  (None, max_seq_length, n_features)
    # output shape: (None, rnn_dim)


def test_rnn_output_v2():
    max_seq_length = 10
    n_features = 4
    rnn_dim = 64

    input = Input(shape=(max_seq_length, n_features))
    out = LSTM(rnn_dim, return_sequences=True)(input)

    model = Model(inputs=[input], outputs=out)

    print(model.summary())

    # input shape:  (None, max_seq_length, n_features)
    # output shape: (None, max_seq_length, rnn_dim)


test_rnn_output_v1()
test_rnn_output_v2()

An RNN layer, by definition, cannot produce an output length different from its input length. However, there is a trick to achieve an unequal, but fixed, output length using two RNN layers with a RepeatVector layer in between. Here is a minimal example model which accepts input sequences of variable length and produces output sequences of a fixed, arbitrary length:

import tensorflow as tf

max_output_length = 35

inp = tf.keras.layers.Input(shape=(None, 10))
x = tf.keras.layers.LSTM(20)(inp)
x = tf.keras.layers.RepeatVector(max_output_length)(x)
out = tf.keras.layers.LSTM(30, return_sequences=True)(x)

model = tf.keras.Model(inp, out)
model.summary()

Here is the model summary:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, None, 10)]        0         
_________________________________________________________________
lstm (LSTM)                  (None, 20)                2480      
_________________________________________________________________
repeat_vector (RepeatVector) (None, 35, 20)            0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 35, 30)            6120      
=================================================================
Total params: 8,600
Trainable params: 8,600
Non-trainable params: 0
_________________________________________________________________

This structure can be used in sequence-to-sequence models in which the length of the input sequences is not necessarily the same as that of the output sequences.
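
To confirm that the output length is fixed and independent of the input length, here is a minimal sketch that runs the model defined above on random batches with different sequence lengths (the batch size and input lengths below are arbitrary choices for illustration):

import numpy as np

# Reuses the `model` defined above; the data is random and only illustrates shapes.
batch_a = np.random.rand(4, 17, 10).astype("float32")  # 4 sequences of length 17, 10 features
batch_b = np.random.rand(4, 5, 10).astype("float32")   # same batch size, but length 5

print(model(batch_a).shape)  # (4, 35, 30)
print(model(batch_b).shape)  # (4, 35, 30) -- output length stays at max_output_length=35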

