
Removing layers/extracting features from specific layers in TensorFlow 2.x

I have a trained CNN model, where I applied a dense layer as the top model to make predictions. However, now I want to use the second-to-last layer to extract features, but I am unable to remove the last layer.

I have tried .pop but that does not seem to work anymore.
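For reference, a minimal sketch of the kind of call that no longer has any effect (assuming the trained network above were loaded as model; the variable name is only for illustration):

# Old Keras trick: mutate the layer list in place. In TF 2.x,
# model.layers returns a freshly built list, so popping it does
# not change the underlying graph:
model.layers.pop()

# Sequential models still provide model.pop(), but a Functional
# model like the one above has no such method.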

My model is the following:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 256, 256, 3)]     0
_________________________________________________________________
efficientnet-b6 (Functional) (None, 8, 8, 2304)        40960136
_________________________________________________________________
global_average_pooling2d (Gl (None, 2304)              0
_________________________________________________________________
dense (Dense)                (None, 1)                 2305
=================================================================
Total params: 40,962,441
Trainable params: 0
Non-trainable params: 40,962,441
_________________________________________________________________

and I want to remove the dense part.

You could take the following steps:

  1. extract the layers of the model
  2. get both the input layer and the desired new output layer (in your case, the layer holding the features you want)
  3. rebuild the model

An example that illustrates this is:

import tensorflow as tf
from tensorflow.keras import layers, models


def simpleMLP(in_size, hidden_sizes, num_classes, dropout_prob=0.5):
    in_x = layers.Input(shape=(in_size,))
    # Group all hidden layers into one nested Sequential block
    hidden_x = models.Sequential(name="hidden_layers")
    for i, num_h in enumerate(hidden_sizes):
        # Only the first Dense layer needs an input shape
        hidden_x.add(layers.Dense(num_h, input_shape=(in_size,) if i == 0 else []))
        hidden_x.add(layers.Activation('relu'))
        hidden_x.add(layers.Dropout(dropout_prob))
    # Two stacked output heads; the last one is what we will remove later
    out_x1 = layers.Dense(num_classes, activation='softmax', name='baseline1')
    out_x2 = layers.Dense(3, activation='softmax', name='baseline2')
    return models.Model(inputs=in_x, outputs=out_x2(out_x1(hidden_x(in_x))))


baseline_mdl = simpleMLP(28*28, [500, 300], 10)
print(baseline_mdl.summary())

Model: "functional_1" _________________________________________________________________ Layer (type) Output Shape Param #模型:“functional_1”___________________________________________________________________ 层(类型)输出形状参数#
================================================================= input_1 (InputLayer) [(None, 784)] 0 ================================================== ============== input_1 (InputLayer) [(None, 784)] 0
_________________________________________________________________ hidden_layers (Sequential) (None, 300) 542800 _________________________________________________________________ hidden_​​layers(顺序)(无,300)542800
_________________________________________________________________ baseline1 (Dense) (None, 10) 3010 _________________________________________________________________ 基线 1(密集)(无,10)3010
_________________________________________________________________ baseline2 (Dense) (None, 3) 33 _________________________________________________________________ 基线 2(密集)(无,3)33
================================================================= Total params: 545,843 Trainable params: 545,843 Non-trainable params: 0 _________________________________________________________________ ================================================== ============== 总参数:545,843 可训练参数:545,843 不可训练参数:0 _________________________________________________________________

# Rebuild: same input, but stop at the second-to-last layer's output
baseline_in = baseline_mdl.layers[0].input
baseline_out = baseline_mdl.layers[-2].output
new_baseline = models.Model(inputs=baseline_in,
                            outputs=baseline_out)
print(new_baseline.summary())

Model: "functional_3" _________________________________________________________________ Layer (type) Output Shape Param #模型:“functional_3”___________________________________________________________________ 层(类型)输出形状参数#
================================================================= input_1 (InputLayer) [(None, 784)] 0 ================================================== ============== input_1 (InputLayer) [(None, 784)] 0
_________________________________________________________________ hidden_layers (Sequential) (None, 300) 542800 _________________________________________________________________ hidden_​​layers(顺序)(无,300)542800
_________________________________________________________________ baseline1 (Dense) (None, 10) 3010 _________________________________________________________________ 基线 1(密集)(无,10)3010
================================================================= Total params: 545,810 Trainable params: 545,810 Non-trainable params: 0 ================================================== ============== 总参数:545,810 可训练参数:545,810 不可训练参数:0


As you can see, I removed the last layer and am still able to use the trained weights.
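As a quick sanity check, a hypothetical usage on random data (the dummy batch is only for illustration):

import numpy as np

# Extract features for a batch of 4 flattened 28x28 inputs
dummy = np.random.rand(4, 28 * 28).astype("float32")
features = new_baseline.predict(dummy)
print(features.shape)  # (4, 10) -- the output of 'baseline1'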

Note that the details might differ slightly depending on your model, but these are the general principles you should follow and adapt.
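Applied to the model in the question, a minimal sketch would look like this (assuming the trained network is loaded as model; the layer name is taken from the summary above):

# Cut the model at the pooling layer to get the 2304-d features
feature_extractor = models.Model(
    inputs=model.layers[0].input,
    outputs=model.get_layer('global_average_pooling2d').output)

# features = feature_extractor.predict(images)  # images: (N, 256, 256, 3)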
