
Incompatible dimensions in LSTM Keras layer

I'm coding a sequence to sequence model with Keras and I'm getting this error:

ValueError: Input 0 of layer lstm_59 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 20]

This is my data:

print(database['sentence1'][0], database['sentence2'][0])

>>> 'It does not matter how you win it , just as long as you do .',
>>> 'It does not matter how you win , only as long as you do it .'

I create an ordinal encoding for my data (each word is a category), so I build a dictionary for the input and target sentences. These are some of the variable shapes:

number of samples = 2500
unique_input_words = 12738
unique_output_words = 12230
input_length = 20
output_length = 20
encoding_input.shape = (2500, 20)
decoding_input.shape = (2500, 20)
decoding_output.shape = (2500, 20)

Basically, the encoding/decoding arrays are lists of 2500 samples, each 20 elements long (decoding one gives back a sentence):

print(encoding_input[0])
[12049  5684  3021 11494  8362  8598  8968  8371  3622  5583  8362  840  4061  8917 11710  4860  4491  4860  6411  4166]
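The encoding step described above can be sketched as follows. The vocabulary dictionary, sentences, and padding index here are hypothetical stand-ins for the real dictionaries built from the 2500-sample dataset:

```python
# Minimal sketch of the ordinal (word -> integer) encoding described above.
# The sentences and vocabulary are illustrative, not the real dataset.
sentences = [
    'It does not matter how you win it , just as long as you do .',
    'It does not matter how you win , only as long as you do it .',
]

# Build a word -> index dictionary over the corpus (index 0 reserved for padding).
vocab = {}
for sentence in sentences:
    for word in sentence.split():
        vocab.setdefault(word, len(vocab) + 1)

def encode(sentence, length=20):
    """Map each word to its integer index, then pad/truncate to a fixed length."""
    ids = [vocab[w] for w in sentence.split()]
    return (ids + [0] * length)[:length]

encoded = [encode(s) for s in sentences]
# Each encoded sample is a fixed-length list of 20 integers,
# matching the (2500, 20) shapes shown above.
```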

This is my RNN model using LSTM layers (using the functional Keras API):

import tensorflow as tf

def create_model(
        input_length=20,
        output_length=20):

    encoder_input = tf.keras.Input(shape=(None, input_length,))
    decoder_input = tf.keras.Input(shape=(None, output_length,))

    encoder, state_h, state_c = tf.keras.layers.LSTM(64, return_state=True, return_sequences=False)(encoder_input)

    decoder = tf.keras.layers.LSTM(64, return_sequences=True)(decoder_input, initial_state=[state_h, state_c])
    decoder = tf.keras.layers.Dense(20, activation="softmax")(decoder)

    model = tf.keras.Model(inputs=[encoder_input, decoder_input], outputs=[decoder])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    return model

model = create_model() 

If I fit the model with my data:

model.fit([encoding_input, decoding_input],
      decoding_output,
      batch_size=64,
      epochs=5)

First I get this warning:

WARNING:tensorflow:Model was constructed with shape (None, None, 20) for input Tensor("input_67:0", shape=(None, None, 20), dtype=float32), but it was called on an input with incompatible shape (None, 20).
WARNING:tensorflow:Model was constructed with shape (None, None, 20) for input Tensor("input_68:0", shape=(None, None, 20), dtype=float32), but it was called on an input with incompatible shape (None, 20).

And then the whole traceback:

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
        outputs = model.train_step(data)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:747 train_step
        y_pred = self(x, training=True)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:985 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:386 call
        inputs, training=training, mask=mask)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:508 _run_internal_graph
        outputs = node.layer(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/recurrent.py:663 __call__
        return super(RNN, self).__call__(inputs, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:976 __call__
        self.name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:180 assert_input_compatibility
        str(x.shape.as_list()))

    ValueError: Input 0 of layer lstm_59 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 20]

Model.summary() :

Model: "functional_45"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_67 (InputLayer)           [(None, None, 20)]   0                                            
__________________________________________________________________________________________________
input_68 (InputLayer)           [(None, None, 20)]   0                                            
__________________________________________________________________________________________________
lstm_59 (LSTM)                  [(None, 64), (None,  21760       input_67[0][0]                   
__________________________________________________________________________________________________
lstm_60 (LSTM)                  (None, None, 64)     21760       input_68[0][0]                   
                                                                 lstm_59[0][1]                    
                                                                 lstm_59[0][2]                    
__________________________________________________________________________________________________
dense_22 (Dense)                (None, None, 20)     1300        lstm_60[0][0]                    
==================================================================================================
Total params: 44,820
Trainable params: 44,820
Non-trainable params: 0
__________________________________________________________________________________________________

I suspect the error happens because of the dimensions of my data, but I have already tried a lot of solutions and none of them has worked.

As the error says, the layer expects a 3-dimensional input, while your input is 2-dimensional.

LSTM requires the input to be in the following shape.

LSTM Call Arguments :

  • inputs : A 3D tensor with shape [batch, timesteps, feature] .

The feature dimension usually holds the embedding vector, but your code has no Embedding layer, so your third dimension should be 1. You can use tf.expand_dims to add a dimension at the end of your input, giving it shape (2500, 20, 1). The Input layers must then declare the matching per-sample shape, e.g. shape=(input_length, 1) rather than shape=(None, input_length), so the model is constructed for the data it actually receives.
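A minimal sketch of the reshaping step, shown with NumPy so it runs standalone (np.expand_dims behaves like tf.expand_dims here). The array contents are dummy data standing in for the real encoded sentences:

```python
import numpy as np

# Dummy stand-in for encoding_input, with the same (2500, 20) shape as above.
encoding_input = np.zeros((2500, 20), dtype=np.int32)

# Add a trailing feature dimension of size 1: (2500, 20) -> (2500, 20, 1).
# The TensorFlow equivalent is tf.expand_dims(encoding_input, axis=-1).
encoding_input = np.expand_dims(encoding_input, axis=-1)

# The model's Input layers should then be built to match, e.g.
#   encoder_input = tf.keras.Input(shape=(input_length, 1))
# so the LSTM receives the expected [batch, timesteps, feature] tensor.
```

The same reshaping applies to decoding_input (and, depending on the loss setup, decoding_output).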
