
How to combine LSTM and CNN models in Keras

I have users with profile pictures and time-series data (events generated by each user). For binary classification I have written two models, an LSTM and a CNN, which work independently. What I really want, though, is to connect these models together.

Here is my LSTM model:

from keras.layers import Input, Dense, Dropout, LSTM, concatenate
from keras.models import Model
from keras.optimizers import RMSprop

input1_length = X_train.shape[1]
input1_dim = X_train.shape[2]

input2_length = X_inter_train.shape[1]
input2_dim = X_inter_train.shape[2]

output_dim = 1

input1 = Input(shape=(input1_length, input1_dim))
input2 = Input(shape=(input2_length, input2_dim))

lstm1 = LSTM(20)(input1)
lstm2 = LSTM(10)(input2)

lstm1 = Dense(256, activation='relu')(lstm1)
lstm1 = Dropout(0.5)(lstm1)
lstm1 = Dense(12, activation='relu')(lstm1)

lstm2 = Dense(256, activation='relu')(lstm2)
#lstm2 = Dropout(0.5)(lstm2)
lstm2 = Dense(12, activation='relu')(lstm2)

merge = concatenate([lstm1, lstm2])

# interpretation model
lstm = Dense(128, activation='relu')(merge)

output = Dense(output_dim, activation='sigmoid')(lstm)

model = Model([input1, input2], output)
optimizer = RMSprop(lr=1e-3, decay=0.0)

model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
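For reference, a model like the one above takes its inputs as a list of arrays in the same order as the `Input` layers. A minimal, self-contained sketch with random placeholder data (the array shapes and layer sizes here are assumptions, not the original values):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, LSTM, concatenate
from tensorflow.keras.models import Model

# Placeholder data standing in for X_train / X_inter_train (shapes assumed)
X_train = np.random.rand(32, 10, 4)        # (samples, timesteps, features)
X_inter_train = np.random.rand(32, 5, 3)
y_train = np.random.randint(0, 2, size=(32, 1))

input1 = Input(shape=(10, 4))
input2 = Input(shape=(5, 3))
merged = concatenate([LSTM(8)(input1), LSTM(4)(input2)])
output = Dense(1, activation='sigmoid')(Dense(16, activation='relu')(merged))

model = Model([input1, input2], output)
model.compile(loss='binary_crossentropy', optimizer='rmsprop')

# Inputs are supplied as a list, in the same order as the Input layers
model.fit([X_train, X_inter_train], y_train, epochs=1, batch_size=8, verbose=0)
```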

The CNN model:

from keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                          BatchNormalization, Flatten, Dense)
from keras.models import Model
from keras.optimizers import RMSprop

def gen_img_model(input_dim=(75, 75, 3)):
    input = Input(shape=input_dim)

    conv = Conv2D(32, kernel_size=(3, 3), activation='relu')(input)
    conv = MaxPooling2D((3, 3))(conv)
    conv = Dropout(0.2)(conv)

    conv = BatchNormalization()(conv)
    # flatten so the Dense layers see 2-D features rather than 4-D conv maps
    conv = Flatten()(conv)

    dense = Dense(128, activation='relu', name='img_features')(conv)
    dense = Dropout(0.2)(dense)

    output = Dense(1, activation='sigmoid')(dense)

    optimizer = RMSprop(lr=1e-3, decay=0.0)

    model = Model(input, output)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])

    return model

This is how the CNN is trained:

checkpoint_name = './keras_img_checkpoint/img_model'
callbacks = [ModelCheckpoint(checkpoint_name, save_best_only=True)]

img_model = gen_img_model((75,75,3))

# batch size for img model
batch_size = 200

train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

val_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)


# train gen for img model
train_generator = train_datagen.flow_from_directory(
        './dataset/train/',
        target_size=(75, 75),
        batch_size=batch_size,
        class_mode='binary')

val_generator = val_datagen.flow_from_directory(
        './dataset/val/', 
        target_size=(75, 75),
        batch_size=batch_size,
        class_mode='binary')


STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VAL = val_generator.n // val_generator.batch_size

img_model.fit_generator(
        train_generator,
        steps_per_epoch=STEP_SIZE_TRAIN,
        validation_data=val_generator,
        validation_steps=800 // batch_size,
        epochs=1,
        verbose=1,
        callbacks=callbacks
)

What is the best way to connect the LSTM and CNN models together?

You can add CNN and LSTM layers to a single model using Keras. You may, however, run into shape problems.

An example:

def CNN_LSTM():
    model = Sequential()
    model.add(Convolution2D(input_shape=..., filters=..., kernel_size=...,
                            activation=...))
    model.add(LSTM(units=...))

    return model

You just need to fill in the parameters. I hope this helps.
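Concretely, the shape problem mentioned in this answer comes from `Conv2D` emitting 4-D tensors (batch, height, width, channels) while `LSTM` expects 3-D input (batch, timesteps, features). One way to bridge them is a `Reshape` layer that treats each row of the feature map as a timestep; the filter count and input size below are assumptions used to fill in the template:

```python
from tensorflow.keras.layers import Conv2D, Reshape, LSTM, Dense
from tensorflow.keras.models import Sequential

def CNN_LSTM():
    model = Sequential()
    # Conv2D output: (batch, 73, 73, 16) for a 75x75x3 input and 3x3 kernels
    model.add(Conv2D(input_shape=(75, 75, 3), filters=16,
                     kernel_size=(3, 3), activation='relu'))
    # Collapse width*channels into a feature axis: 73 timesteps of 73*16 features
    model.add(Reshape((73, 73 * 16)))
    model.add(LSTM(units=32))
    model.add(Dense(1, activation='sigmoid'))
    return model
```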

This is how you can merge two deep learning models:

model1 = Sequential()
# input
model1.add(Dense(32, input_shape=(NUM_FEAT1, 1)))
model1.add(Activation("elu"))
model1.add(Dropout(0.5))
model1.add(Dense(16))
model1.add(Activation("elu"))
model1.add(Dropout(0.25))
model1.add(Flatten())

model2 = Sequential()
# input
model2.add(Dense(32, input_shape=(NUM_FEAT1, 1)))
model2.add(Activation("elu"))
model2.add(Dropout(0.5))
model2.add(Dense(16))
model2.add(Activation("elu"))
model2.add(Dropout(0.25))
model2.add(Flatten())

merged = Concatenate()([model1.output,model2.output])
z = Dense(128, activation="relu")(merged)
z = Dropout(0.25)(z)
z = Dense(1024, activation="relu")(z)
z = Dense(1, activation="sigmoid")(z)

model = Model(inputs=[model1.input, model2.input], outputs=z)

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])


model.fit([x_train[train_index][:, :66], x_train[train_index][:, 66:132]],
          y_train[train_index], batch_size=100, epochs=100, verbose=2)

This way you can feed the model two different kinds of data as needed, for example images in the first branch and text data in the second.
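Applied to the original question, the same pattern merges the image CNN branch with an event-sequence LSTM branch in one functional-API model. The layer sizes and sequence shape below are assumptions; the `Flatten` before the merge is what keeps the two branch outputs shape-compatible for `concatenate`:

```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     LSTM, Dense, concatenate)
from tensorflow.keras.models import Model

# Image branch (profile picture)
img_in = Input(shape=(75, 75, 3))
x = Conv2D(32, (3, 3), activation='relu')(img_in)
x = MaxPooling2D((3, 3))(x)
x = Flatten()(x)                      # 2-D features, so they can be concatenated
x = Dense(12, activation='relu')(x)

# Sequence branch (user events); timesteps/features are placeholders
seq_in = Input(shape=(10, 4))
s = LSTM(20)(seq_in)
s = Dense(12, activation='relu')(s)

merged = concatenate([x, s])
out = Dense(1, activation='sigmoid')(Dense(64, activation='relu')(merged))

model = Model([img_in, seq_in], out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

At fit time this model takes `[image_array, sequence_array]` as input, analogous to the two-array `model.fit` call shown above.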

I don't think this fully answers your question, but instead of doing only that, you could consider running dozens of ML models on your dataset and seeing which works best. You can use AutoML or DataRobot for such tasks.

https://heartbeat.fritz.ai/automl-the-next-wave-of-machine-learning-5494baac615f

https://www.forbes.com/sites/janakirammsv/2018/06/04/datarobot-puts-the-power-of-machine-learning-in-the-hands-of-business-analysts/#5e9586ea4306
