
LSTM Keras functional API layer input shape error

I'm trying to build a Keras functional API LSTM layer in a multiple input-output model, using pre-trained word embeddings for the embedding layer.

Below is my code:

# imports implied by the code below
import numpy as np
from keras.preprocessing import sequence

# pad text sequences to a fixed length
max_review_length = 300
text_seq_train = sequence.pad_sequences(text_train, maxlen=max_review_length)
text_seq_test = sequence.pad_sequences(text_test, maxlen=max_review_length)

print(text_seq_train.shape)
print(text_seq_test.shape)

# Loading pre-trained glove embedding file
embeddings_index = dict()
f = open('gdrive/My Drive/glove.6B.100d.txt')
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
print('Loaded %s word vectors.' % len(embeddings_index))

embedding_matrix = np.zeros((len(text_tokenizer.index_word), 100))
for word, i in text_tokenizer.index_word.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

print(embedding_matrix.shape)   #weights for embedding layer

Output:

(76473, 300)
(32775, 300)
Loaded 400000 word vectors.
(55297, 100)

LSTM part

input_layer = Input(shape=(300,))
embed = Embedding(input_dim = len(text_tokenizer.index_word) , output_dim = 100, input_length = len(text_seq_train[0]) ,weights=[embedding_matrix], trainable=False) (input_layer)
lstm = LSTM(100)(embed)
flat = Flatten() (lstm)

Error received:


ValueError                                Traceback (most recent call last)
<ipython-input-131-9118c8229a4a> in <module>()
      2 embed = Embedding(input_dim = len(text_tokenizer.index_word) , output_dim = 100, input_length = len(text_seq_train[0]) ,weights=[embedding_matrix], trainable=False) (input_layer)
      3 lstm = LSTM(100)(embed)
----> 4 flat = Flatten() (lstm)

1 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in assert_input_compatibility(self, inputs)
    325                                      self.name + ': expected min_ndim=' +
    326                                      str(spec.min_ndim) + ', found ndim=' +
--> 327                                      str(K.ndim(x)))
    328             # Check dtype.
    329             if spec.dtype is not None:

ValueError: Input 0 is incompatible with layer flatten_26: expected min_ndim=3, found ndim=2

I don't know what I'm missing or where I'm going wrong. Any help would be appreciated, thank you.

You don't have to Flatten an LSTM output. By default, LSTM outputs a tensor of shape (batch_size, units), which is already 2-D, so Flatten (which expects at least 3 dimensions) raises the error.

See: https://keras.io/layers/recurrent/
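A minimal sketch of the fix, assuming the vocabulary size (55297) and sequence length (300) printed in the question, with a hypothetical Dense output head added just to make the model complete. The Flatten is simply removed; the pre-trained `weights` argument is omitted here so the snippet stands alone:

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab_size = 55297          # from the question's embedding_matrix shape
input_layer = Input(shape=(300,))
embed = Embedding(input_dim=vocab_size, output_dim=100,
                  trainable=False)(input_layer)   # (batch, 300, 100)
lstm = LSTM(100)(embed)     # (batch, 100) - already 2-D, no Flatten needed
output = Dense(1, activation='sigmoid')(lstm)     # hypothetical output head
model = Model(inputs=input_layer, outputs=output)
```

If you actually need a 3-D output to flatten (for example to keep per-timestep features), pass `return_sequences=True` to the LSTM, which yields shape (batch_size, timesteps, units).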

A Jawar
