
How to get the result of an intermediate layer in keras with Siamese Networks and Functional API?

I have the following network definition for a Siamese network:

from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
                                     Add, MaxPooling2D, Dropout,
                                     GlobalAveragePooling2D, Dense)
from tensorflow.keras.models import Model

def build_siamese_model(inputShape, embeddingDim=48):
    # specify the inputs for the feature extractor network
    inputs = Input(inputShape)

    ## first set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv1 = Conv2D(32, (3, 3), padding="same")(inputs)
    first_batch_norm1=BatchNormalization()(first_conv1)
    first_act1= LeakyReLU()(first_batch_norm1)

    second_conv1 = Conv2D(32, (5, 5), padding="same")(inputs)
    second_batch_norm1=BatchNormalization()(second_conv1)
    second_act1= LeakyReLU()(second_batch_norm1)

    third_conv1 = Conv2D(32, (7, 7), padding="same")(inputs)
    third_batch_norm1=BatchNormalization()(third_conv1)
    third_act1= LeakyReLU()(third_batch_norm1)

    residual_block1= Add()([first_act1, second_act1, third_act1])
    pool1 = MaxPooling2D(pool_size=(2, 2))(residual_block1)
    dropout1 = Dropout(0.3)(pool1)

    #receiver Convolutional layer
    receiver1_conv = Conv2D(32, (3, 3), padding="same")(dropout1)
    receiver1_batch_norm=BatchNormalization()(receiver1_conv)
    act_receiver1=LeakyReLU()(receiver1_batch_norm)

    ## second set of CONV => BN=> RELU => RESID=> POOL => DROPOUT layers
    first_conv2 = Conv2D(32, (3, 3), padding="same")(act_receiver1)
    first_batch_norm2=BatchNormalization()(first_conv2)
    first_act2= LeakyReLU()(first_batch_norm2)

    second_conv2 = Conv2D(32, (5, 5), padding="same")(act_receiver1)
    second_batch_norm2=BatchNormalization()(second_conv2)
    second_act2= LeakyReLU()(second_batch_norm2)

    third_conv2 = Conv2D(32, (7, 7), padding="same")(act_receiver1)
    third_batch_norm2=BatchNormalization()(third_conv2)
    third_act2= LeakyReLU()(third_batch_norm2)
    
    residual_block2= Add()([first_act2, second_act2, third_act2])
    pool2 = MaxPooling2D(pool_size=(2, 2))(residual_block2)
    dropout2 = Dropout(0.3)(pool2)
    
    #receiver Convolutional layer
    receiver2_conv = Conv2D(32, (3, 3), padding="same")(dropout2)
    receiver2_batch_norm=BatchNormalization()(receiver2_conv)
    act_receiver2=LeakyReLU()(receiver2_batch_norm)

    ## last set of CONV => BN=> RELU => RESID=> POOL => DROPOUT layers
    first_conv3 = Conv2D(32, (3, 3), padding="same")(act_receiver2)
    first_batch_norm3=BatchNormalization()(first_conv3)
    first_act3= LeakyReLU()(first_batch_norm3)

    second_conv3 = Conv2D(32, (5, 5), padding="same")(act_receiver2)
    second_batch_norm3=BatchNormalization()(second_conv3)
    second_act3= LeakyReLU()(second_batch_norm3)

    third_conv3 = Conv2D(32, (7, 7), padding="same")(act_receiver2)
    third_batch_norm3=BatchNormalization()(third_conv3)
    third_act3= LeakyReLU()(third_batch_norm3)
        
    residual_block3= Add()([first_act3, second_act3, third_act3])
    pool3 = MaxPooling2D(pool_size=(2, 2))(residual_block3)
    dropout3 = Dropout(0.3)(pool3)
    
    #last receiver Convolutional layer
    receiver3_conv = Conv2D(32, (3, 3), padding="same")(dropout3)
    receiver3_batch_norm=BatchNormalization()(receiver3_conv)
    act_receiver3=LeakyReLU()(receiver3_batch_norm)

    # prepare the final outputs
    pooledOutput = GlobalAveragePooling2D()(act_receiver3)
    outputs = Dense(embeddingDim)(pooledOutput)
    # build the model
    model = Model(inputs, outputs)
    return model
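
For reference, the feature extractor can be instantiated on its own to confirm that its final layer is the 48-dimensional embedding (the input shape below is only an illustrative assumption; the real one comes from config.IMG_SHAPE):

# quick sanity check of the feature extractor on its own
# (the (64, 64, 1) shape is only an example, not config.IMG_SHAPE)
feature_extractor = build_siamese_model((64, 64, 1))
feature_extractor.summary()  # the last listed layer is the Dense(48) embedding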

However, this part is hooked up to the inputs and output of my network with the Functional API. Here is how I link those parts together:

print("[INFO] building siamese network...")
imgA = Input(shape=config.IMG_SHAPE)
imgB = Input(shape=config.IMG_SHAPE)

featureExtractor = build_siamese_model(config.IMG_SHAPE)

featsA = featureExtractor(imgA)
featsB = featureExtractor(imgB)

distance = Lambda(utils.euclidean_distance)([featsA, featsB])

outputs = Dense(1, activation="sigmoid")(distance)
model = Model(inputs=[imgA, imgB], outputs=outputs)
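
The utils.euclidean_distance helper is not shown in the question; a minimal sketch of what such a helper typically looks like in this kind of Siamese setup is below (an assumption, not the asker's actual code):

import tensorflow.keras.backend as K

def euclidean_distance(vectors):
    # unpack the two embeddings produced by the shared feature extractor
    featsA, featsB = vectors
    # per-sample sum of squared differences, kept 2-D so the Dense(1) head can consume it
    sum_squared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True)
    # clamp with epsilon to avoid NaN gradients when the distance is exactly zero
    return K.sqrt(K.maximum(sum_squared, K.epsilon()))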

However, after compiling the model, this is what the model summary looks like:

[screenshot: output of model.summary()]

So the whole network definition I wrote above shows up as just a single layer of the combined network.
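
One quick way to confirm this is to enumerate the top-level layers of the combined model; the entire feature extractor appears as a single nested Model entry (a minimal inspection sketch):

# list the top-level layers: two Input layers, the nested feature-extractor Model,
# the distance Lambda, and the final Dense classifier
for i, layer in enumerate(model.layers):
    print(i, layer.name, type(layer).__name__)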

那么,我想要什么?

I want to load the model and extract the output of a specific layer. In particular, I want the output of the last layer of the functional object (outputs = Dense(48)(pooledOutput) in the network definition above), which gives me a 48-feature vector for each pair of images I test with the model.

I tried looking into some previous posts and did the following:

print("Step 1: Loading Model")

model1=load_model("where/the/model/is/located", compile=False)

#I tried the output of the firstlayer, for example
model_with_intermediate_layers = Model(inputs=model1.input, outputs = model1.layers[0].output)

pred = model_with_intermediate_layers.predict([pair_1,pair_2], steps = 1) 
print(pred) 

問題是什么??

The problem with the code above is that it can only access layers 0, 1, 3 and 4. Layers 0 and 1 give the input shapes, layer 3 gives the score, and layer 4 is empty. **I want to access the intermediate layers, especially the last layer of the feature-extractor network.** How can I achieve that?

Considering that (i) my functional object is the second layer of the network; (ii) I want the output of its last layer; and (iii) the output of the second layer is the input of the third layer, I solved it with the code below:

import numpy as np

# I am getting layer 3's input, which is the same as the second layer's output
# (the last layer of my functional model)
model_intermediate = Model(inputs=model1.input, outputs=model1.layers[3].input)

# Here I get two 48-d vectors.
pred_intermediate = model_intermediate.predict([pair_1, pair_2], steps=1)  # predict_generator is deprecated

pred_intermediate = np.array(pred_intermediate)

print(type(pred_intermediate))
print(pred_intermediate)
print(pred_intermediate.shape)
input()

This gives me what I wanted:

[screenshot: the two 48-dimensional intermediate output vectors]
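
An equivalent and slightly more direct route is to pull the nested feature extractor out of the loaded model and call it directly (assuming it sits at index 2, which matches the "second layer" observation above):

# the shared feature extractor is itself a Model nested inside model1
embedding_model = model1.layers[2]        # assumption: index 2, per the layer listing above
emb_a = embedding_model.predict(pair_1)   # 48-d embedding of the first image of the pair
emb_b = embedding_model.predict(pair_2)   # 48-d embedding of the second image of the pair
print(emb_a.shape, emb_b.shape)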
