
Error 'Input 0 is incompatible with layer conv1d_48: expected ndim=3, found ndim=2' when adding Conv1D layer

I am trying to construct the following model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Conv1D, MaxPooling1D, Dense

model = Sequential()
model.add(Embedding(input_dim = num_top_words, output_dim = 64, input_length = input_length))
model.add(LSTM(100, activation = 'relu'))
model.add(Conv1D(64, kernel_size = 5, activation = 'relu'))
model.add(MaxPooling1D())
model.add(Dense(5, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])

But I get the following error when running it:

Input 0 is incompatible with layer conv1d_48: expected ndim=3, found ndim=2

which points to an error at the following line:

model.add(Conv1D(64, kernel_size = 5, activation = 'relu'))

What might be the problem?

The problem is that the output shape of the LSTM layer is currently (None, 100); however, as the error indicates, the Conv1D layer, like the LSTM layer, expects a 3D input of shape (None, n_steps, n_features). One way to resolve this is to pass return_sequences=True to the LSTM layer so that it returns the output of every timestep, which makes its output 3D:

model.add(LSTM(100, activation = 'relu', return_sequences=True))
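For reference, here is a minimal sketch of the full model with that change. The values of num_top_words and input_length (5000 and 100) are only illustrative assumptions, not taken from the question; the shape comments show how the now-3D LSTM output flows into Conv1D:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Conv1D, MaxPooling1D, Dense

num_top_words = 5000   # assumed vocabulary size, for illustration only
input_length = 100     # assumed sequence length, for illustration only

model = Sequential()
model.add(Embedding(input_dim=num_top_words, output_dim=64, input_length=input_length))  # (None, 100, 64)
model.add(LSTM(100, activation='relu', return_sequences=True))                           # (None, 100, 100)
model.add(Conv1D(64, kernel_size=5, activation='relu'))                                  # (None, 96, 64)
model.add(MaxPooling1D())                                                                # (None, 48, 64)
model.add(Dense(5, activation='softmax'))                                                # (None, 48, 5)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

Note that with this ordering the Dense layer is applied per timestep, so the final output is still 3D; depending on how your labels are shaped, you may want a Flatten or GlobalMaxPooling1D layer before the Dense layer to get one prediction per sequence.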

Alternatively, you can put the Conv1D and MaxPooling1D layers before the LSTM layer (which may even be better than the current architecture, since one common use of Conv1D plus pooling layers is to reduce the dimensionality of the LSTM layer's input and hence the computational cost; see the sketch after the snippet below):

model.add(Conv1D(64, kernel_size = 5, activation = 'relu'))
model.add(MaxPooling1D())
model.add(LSTM(100, activation = 'relu'))
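As a rough sketch of this variant (again with assumed illustrative values for num_top_words and input_length), the LSTM now comes last without return_sequences, so its 2D output feeds directly into Dense and the model ends with a single (None, 5) prediction per sequence:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

num_top_words = 5000   # assumed values, not from the question
input_length = 100

model = Sequential()
model.add(Embedding(input_dim=num_top_words, output_dim=64, input_length=input_length))  # (None, 100, 64)
model.add(Conv1D(64, kernel_size=5, activation='relu'))                                  # (None, 96, 64)
model.add(MaxPooling1D())                     # halves the timesteps -> (None, 48, 64), a cheaper LSTM input
model.add(LSTM(100, activation='relu'))                                                  # (None, 100)
model.add(Dense(5, activation='softmax'))                                                # (None, 5)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()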
