Error 'Input 0 is incompatible with layer conv1d_48: expected ndim=3, found ndim=2' when adding Conv1D layer
I am trying to construct the following model:
model = Sequential()
model.add(Embedding(input_dim = num_top_words, output_dim = 64, input_length = input_length))
model.add(LSTM(100, activation = 'relu'))
model.add(Conv1D(64, kernel_size = 5, activation = 'relu'))
model.add(MaxPooling1D())
model.add(Dense(5, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
But I get the following error when running it:
Input 0 is incompatible with layer conv1d_48: expected ndim=3, found ndim=2
which points to an error at the following line:
model.add(Conv1D(64, kernel_size = 5, activation = 'relu'))
What might be the problem?
The problem is that the output shape of the LSTM layer is currently (None, 100); however, as the error suggests, the Conv1D layer, like the LSTM layer, expects a 3D input of shape (None, n_steps, n_features). So one way to resolve this is to pass return_sequences=True to the LSTM layer so that it returns the output of every timestep, making its output 3D:
model.add(LSTM(100, activation = 'relu', return_sequences=True))
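A minimal end-to-end sketch of this first fix, assuming placeholder values of 1000 for num_top_words and 50 for input_length (neither is given in the question). Note that after MaxPooling1D the tensor is still 3D, so this sketch also adds a GlobalMaxPooling1D before the final Dense layer (an addition not in the original code) so the model emits one class vector per sample rather than one per timestep:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, LSTM, Conv1D, MaxPooling1D,
                                     GlobalMaxPooling1D, Dense)

num_top_words = 1000   # placeholder vocabulary size (assumption)
input_length = 50      # placeholder sequence length (assumption)

model = Sequential()
model.add(Embedding(input_dim=num_top_words, output_dim=64))    # (None, 50, 64)
model.add(LSTM(100, activation='relu', return_sequences=True))  # (None, 50, 100) -- now 3D
model.add(Conv1D(64, kernel_size=5, activation='relu'))         # (None, 46, 64)
model.add(MaxPooling1D())                                       # (None, 23, 64)
# Collapse the remaining time dimension before classifying; otherwise
# the Dense layer would be applied independently at each timestep:
model.add(GlobalMaxPooling1D())                                 # (None, 64)
model.add(Dense(5, activation='softmax'))                       # (None, 5)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```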
Alternatively, you can put the Conv1D and MaxPooling1D layers before the LSTM layer (which may even be better than the current architecture, since one use of Conv1D plus pooling layers is to reduce the dimension of the LSTM layer's input and hence reduce the computational complexity):
model.add(Conv1D(64, kernel_size = 5, activation = 'relu'))
model.add(MaxPooling1D())
model.add(LSTM(100, activation = 'relu'))
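The same alternative, spelled out as a complete runnable sketch with the same placeholder values (1000 words, sequence length 50, both assumptions). Here the LSTM comes last and returns only its final state, so its 2D output feeds the classifier directly:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

num_top_words = 1000   # placeholder vocabulary size (assumption)
input_length = 50      # placeholder sequence length (assumption)

model = Sequential()
model.add(Embedding(input_dim=num_top_words, output_dim=64))  # (None, 50, 64)
model.add(Conv1D(64, kernel_size=5, activation='relu'))       # (None, 46, 64)
model.add(MaxPooling1D())                                     # (None, 23, 64) -- shorter sequence for the LSTM
model.add(LSTM(100, activation='relu'))                       # (None, 100) -- 2D, ready for Dense
model.add(Dense(5, activation='softmax'))                     # (None, 5)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```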