
Concatenate multiple Convolution Layers

Text classification: I extract tri-gram and quad-gram features from character-level input with multiple CNN layers, concatenate them, and pass the result to a BLSTM layer.

submodels = []
for kw in (3, 4):    # kernel sizes
    model = Sequential()
    model.add(Embedding(vocab_size, 16, input_length=maxlen,
                        input_shape=(maxlen, vocab_size)))
    model.add(Convolution1D(nb_filter=64, filter_length=kw,
                            border_mode='valid', activation='relu'))
    submodels.append(model)
big_model = Sequential()
big_model.add(keras.layers.Concatenate(submodels))
big_model.add(Bidirectional(LSTM(100, return_sequences=False)))
big_model.add(Dense(n_out, activation='softmax'))

Model summary of individual conv layers:

Layer (type)                  Output Shape              Param #
------------                  ------------              -------
embedding_49 (Embedding)      (None, 1024, 16)          592
conv1d_41 (Conv1D)            (None, 1024, 64)          4160

But I am getting this error:

ValueError: Input 0 is incompatible with layer conv1d_22: expected ndim=3, found ndim=4
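The mismatch comes from declaring a one-hot-shaped input for the Embedding layer: Embedding expects integer token ids of ndim=2 (batch, maxlen), so an Input of (maxlen, vocab_size) makes the embedded tensor ndim=4, which Conv1D rejects. A minimal shape sketch (written with current tf.keras names Conv1D / padding, an assumption on my part, not the original Convolution1D / border_mode):

```python
import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.layers import Embedding, Conv1D

vocab_size, maxlen = 1000, 100

# Embedding wants integer ids, i.e. ndim=2 input (batch, maxlen);
# Input(shape=(maxlen, vocab_size)) would make the embedded tensor ndim=4.
inp = Input(shape=(maxlen,))                  # (batch, maxlen)
emb = Embedding(vocab_size, 16)(inp)          # (batch, maxlen, 16) -> ndim=3
conv = Conv1D(64, 3, padding='valid', activation='relu')(emb)

print(tuple(emb.shape))    # (None, 100, 16)
print(tuple(conv.shape))   # (None, 98, 64)
```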

UPDATE: now using the functional Keras API

x = Input(shape=(maxlen, vocab_size))
x = Embedding(vocab_size, 16, input_length=maxlen)(x)
x = Convolution1D(nb_filter=64, filter_length=3, border_mode='same',
                  activation='relu')(x)
x1 = Input(shape=(maxlen, vocab_size))
x1 = Embedding(vocab_size, 16, input_length=maxlen)(x1)
x1 = Convolution1D(nb_filter=64, filter_length=4, border_mode='same',
                   activation='relu')(x1)
x2 = Bidirectional(LSTM(100, return_sequences=False))
x2 = Dense(n_out, activation='softmax')(x2)
big_model = Model(input=keras.layers.Concatenate([x, x1]), output=x2)
big_model.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])

Still the same error!

from keras import Input
from keras import Model
from keras import layers

vocab_size = 1000
maxlen = 100
n_out = 1000

# Inputs are 2-D (batch, sequence) holding integer token ids
input_x = Input(shape=(None,))
x = layers.Embedding(vocab_size, 16, input_length=maxlen)(input_x)
x = layers.Convolution1D(nb_filter=64, filter_length=3, border_mode='same',
                         activation='relu')(x)
input_x1 = Input(shape=(None,))
x1 = layers.Embedding(vocab_size, 16, input_length=maxlen)(input_x1)
x1 = layers.Convolution1D(nb_filter=64, filter_length=4, border_mode='same',
                          activation='relu')(x1)
# Concatenate the tri-gram and quad-gram feature maps along the channel axis
concatenated = layers.concatenate([x, x1], axis=-1)
x2 = layers.Bidirectional(layers.LSTM(100, return_sequences=False))(concatenated)
x2 = layers.Dense(n_out, activation='softmax')(x2)
big_model = Model(inputs=[input_x, input_x1], outputs=x2)
big_model.compile(loss='categorical_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])
