Evaluation of loss function returns error in LSTM model

I am trying to fit an LSTM model for text generation with pre-trained embeddings, using tf.keras.Sequential. I am getting the following evaluation error:

tensorflow.python.framework.errors_impl.InvalidArgumentError:  assertion failed: [Condition x == y did not hold element-wise:] [x (sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/Shape_1:0) = ] [5 199] [y (sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/strided_slice:0) = ] [200 199]
     [[node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert (defined at <input>:161) ]] [Op:__inference_train_function_4885]

My model is as follows:

def build_model(vocab_size, embedding_dim, rnn_units, batch_size, embedding_matrix):
    model = tf.keras.Sequential([
        #vocab_size = 30000, embedding_dim = 300, batch_size=64, embedding_matrix.shape = (30000, 300) 
        tf.keras.layers.Embedding(vocab_size, embedding_dim, weights=[embedding_matrix], trainable=False, batch_input_shape=[max_len, None]),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(rnn_units,
                        return_sequences=True,
                        stateful=True,
                        recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(rnn_units,
                        return_sequences=True,
                        stateful=True,
                        recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model


model = build_model(
    vocab_size=len(vocab),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=batch_size,
    embedding_matrix=embedding_matrix
)

optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy')
patience = 10
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
checkpoint_dir = './checkpoints'+ datetime.datetime.now().strftime("_%Y.%m.%d-%H:%M:%S")
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True
)

history = model.fit(text_ds, epochs=epochs, callbacks=[checkpoint_callback, early_stop], validation_data=text_ds)

After looking at other similar questions, the problem seems to be related to the input and output shapes, but I still can't see what is wrong.

The model summary is:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (200, None, 300)          9000000   
_________________________________________________________________
dropout (Dropout)            (200, None, 300)          0         
_________________________________________________________________
lstm (LSTM)                  (200, None, 1024)         5427200   
_________________________________________________________________
dropout_1 (Dropout)          (200, None, 1024)         0         
_________________________________________________________________
lstm_1 (LSTM)                (200, None, 1024)         8392704   
_________________________________________________________________
dropout_2 (Dropout)          (200, None, 1024)         0         
_________________________________________________________________
dense (Dense)                (200, None, 30000)        30750000  
=================================================================
Total params: 53,569,904
Trainable params: 44,569,904
Non-trainable params: 9,000,000
_________________________________________________________________

The input and output shapes of each layer are as follows:

Output: 
(200, None, 300)
(200, None, 300)
(200, None, 1024)
(200, None, 1024)
(200, None, 1024)
(200, None, 1024)
(200, None, 30000)

Input: 
(200, None)
(200, None, 300)
(200, None, 300)
(200, None, 1024)
(200, None, 1024)
(200, None, 1024)
(200, None, 1024)

Edit:

By setting return_sequences=False in the last LSTM, I get:

tensorflow.python.framework.errors_impl.InvalidArgumentError:  Incompatible shapes: [200,199,300] vs. [5,199,300]
     [[node sequential/dropout/dropout/Mul_1 (defined at <input>:161) ]] [Op:__inference_train_function_4801]

And in this case the model summary is:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (200, None, 300)          9000000   
_________________________________________________________________
dropout (Dropout)            (200, None, 300)          0         
_________________________________________________________________
lstm (LSTM)                  (200, None, 1024)         5427200   
_________________________________________________________________
dropout_1 (Dropout)          (200, None, 1024)         0         
_________________________________________________________________
lstm_1 (LSTM)                (200, 1024)               8392704   
_________________________________________________________________
dropout_2 (Dropout)          (200, 1024)               0         
_________________________________________________________________
dense (Dense)                (200, 30000)              30750000  
=================================================================
Total params: 53,569,904
Trainable params: 44,569,904
Non-trainable params: 9,000,000
_________________________________________________________________

with input shapes:

(200, None)
(200, None, 300)
(200, None, 300)
(200, None, 1024)
(200, None, 1024)
(200, 1024)
(200, 1024)

The first element of batch_input_shape is the batch size, not the sequence length. You built the model with batch_input_shape=[max_len, None], i.e. [200, None], so every layer expects batches of 200 sequences, while text_ds evidently yields batches of 5 sequences; that is exactly the [5, 199] vs. [200, 199] mismatch in the assertion. (Note that build_model receives a batch_size argument but never uses it.) Change the batch_input_shape argument so that its first element matches the dataset's batch size:

    tf.keras.layers.Embedding(vocab_size, embedding_dim, weights=[embedding_matrix], trainable=False, batch_input_shape=[5, None]),
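
For reference, here is a minimal sketch of the corrected setup. It reuses the names from the question (vocab, embedding_matrix, embedding_dim, rnn_units, text_ds) and assumes the dataset's actual batch size is 5, as the assertion suggests; adapt these to your pipeline:

import tensorflow as tf

def build_model(vocab_size, embedding_dim, rnn_units, batch_size, embedding_matrix):
    model = tf.keras.Sequential([
        # The first element of batch_input_shape is the batch size,
        # not the sequence length; the time dimension stays None.
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  weights=[embedding_matrix], trainable=False,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(rnn_units, return_sequences=True, stateful=True,
                             recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.LSTM(rnn_units, return_sequences=True, stateful=True,
                             recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model

batch_size = 5  # must equal the batch size that text_ds actually yields
model = build_model(len(vocab), embedding_dim, rnn_units, batch_size, embedding_matrix)

Also, because the LSTM layers are stateful, the batch size is fixed when the model is built, so every batch must be full. If you batch the dataset yourself, drop the final partial batch, e.g. dataset.batch(batch_size, drop_remainder=True); otherwise a short last batch will trigger the same kind of shape error.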
