
Merge 6 inputs in Conv1D keras

I have written a Conv1D architecture in Keras, and I want to merge six different inputs of the same shape. Previously, Merge([model1, model2, model3, model4, model5, model6], mode='concat') worked just fine, but after recent updates I can't use Merge anymore.

Concatenate can be used as follows,

from keras.layers import Concatenate
model = Concatenate([model1, model2, model3, model4, model5, model6])

But I want to add Dense layers before the softmax layer to this merged model, which I can't do with Concatenate since it only accepts tensor inputs.

How do I merge the 6 inputs before passing them to two Dense layers and a softmax layer?

My current code is as follows,

import keras
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

input_shape = (64, 250)

model1 = Sequential()
model1.add(Conv1D(64, 2, activation='relu', input_shape=input_shape))
model1.add(Conv1D(64, 2, activation='relu'))
model1.add(MaxPooling1D(2))
model1.add(Dropout(0.75))
model1.add(Flatten())

model2 = Sequential()
model2.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model2.add(Conv1D(128, 2, activation='relu'))
model2.add(MaxPooling1D(2))
model2.add(Dropout(0.75))
model2.add(Flatten())

model3 = Sequential()
model3.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model3.add(Conv1D(128, 2, activation='relu'))
model3.add(MaxPooling1D(2))
model3.add(Dropout(0.75))
model3.add(Flatten())

model4 = Sequential()
model4.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model4.add(Conv1D(128, 2, activation='relu'))
model4.add(MaxPooling1D(2))
model4.add(Dropout(0.75))
model4.add(Flatten())

model5 = Sequential()
model5.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model5.add(Conv1D(128, 2, activation='relu'))
model5.add(MaxPooling1D(2))
model5.add(Dropout(0.75))
model5.add(Flatten())

model6 = Sequential()
model6.add(Conv1D(128, 2, activation='relu', input_shape=input_shape))
model6.add(Conv1D(128, 2, activation='relu'))
model6.add(MaxPooling1D(2))
model6.add(Dropout(0.75))
model6.add(Flatten())

from keras.layers import Concatenate
model = Concatenate([ model1, model2, model3, model4, model5, model6])
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.75))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.75))

model.add(Dense(40, activation='softmax'))
opt = keras.optimizers.adam(lr=0.001, decay=1e-6)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit([d1, d2, d3, d4, d5, d6], label, validation_split=0.2, batch_size=25, epochs=30)

The way you are calling Concatenate is not correct. Concatenate takes an optional argument that specifies the axis of concatenation; the layer instance is then called on a list of tensors. What you are trying to achieve can be done using Keras's functional API. Just change the following code

model = Concatenate([ model1, model2, model3, model4, model5, model6])
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.75))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.75))

model.add(Dense(40, activation='softmax'))

to

merged = Concatenate()([ model1.output, model2.output, model3.output, model4.output, model5.output, model6.output])

merged = Dense(512, activation='relu')(merged)
merged = Dropout(0.75)(merged)
merged = Dense(1024, activation='relu')(merged)
merged = Dropout(0.75)(merged)

merged = Dense(40, activation='softmax')(merged)

from keras.models import Model

model = Model(inputs=[model1.input, model2.input, model3.input, model4.input, model5.input, model6.input], outputs=merged)
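To see what Concatenate does here: each branch's Flatten() output is a per-sample feature vector, and Concatenate (default axis=-1) joins those vectors end to end. A minimal NumPy sketch of the same operation — the batch size and per-branch feature sizes below are illustrative stand-ins, not values taken from the model above:

```python
import numpy as np

# Hypothetical flattened outputs of the six branches for a batch of 4 samples.
# (Sizes are illustrative; the real ones depend on the conv/pooling arithmetic.)
feature_sizes = [1984, 3968, 3968, 3968, 3968, 3968]
branches = [np.random.rand(4, n) for n in feature_sizes]

# Concatenate's default axis=-1 joins the feature dimensions end to end,
# preserving the batch dimension:
merged = np.concatenate(branches, axis=-1)
print(merged.shape)  # (4, 21824) -- i.e. (batch, sum of branch features)
```

The merged tensor is then an ordinary 2-D feature tensor, which is why the subsequent Dense layers can be applied directly to it.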

NB

Though it is not the question being asked, I've noticed that you are using a very large dropout rate (although the right value depends on the problem you are trying to solve). A rate of 0.75 means you are dropping 75% of the neurons during training. Please consider using a smaller rate, because otherwise the model might not converge.
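As a rough illustration of how aggressive 0.75 is: inverted dropout zeroes that fraction of activations at each training step and rescales the survivors. This NumPy sketch mimics the idea and is not Keras's internal implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.75                              # the dropout rate used in the question
x = np.ones(10_000)                      # stand-in activations

keep_mask = rng.random(x.shape) >= rate  # only ~25% of units survive
dropped = np.where(keep_mask, x / (1 - rate), 0.0)  # inverted-dropout rescaling

print((dropped == 0).mean())  # ~0.75 of the activations are zeroed each step
```

With three quarters of each layer silenced on every batch, the gradient signal each unit sees becomes very noisy, which is why such high rates often hurt convergence.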
