
Removing layers/extracting features from specific layers in TensorFlow 2.x

I have a trained CNN model, where I applied a dense layer on top to make predictions. Now, however, I want to use the second-to-last layer to extract features, but I am unable to remove the last layer.

I have tried .pop(), but that no longer seems to work.

My model is the following:

Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 256, 256, 3)]     0
_________________________________________________________________
efficientnet-b6 (Functional) (None, 8, 8, 2304)        40960136
_________________________________________________________________
global_average_pooling2d (Gl (None, 2304)              0
_________________________________________________________________
dense (Dense)                (None, 1)                  2305
=================================================================
Total params: 40,962,441
Trainable params: 0
Non-trainable params: 40,962,441

and I want to remove the dense part.

You could take the following steps:

  1. Extract the layers of the model.
  2. Get both the input layer and the desired new output layer (in your case, the layer that produces the features you want).
  3. Rebuild the model.

An example that demonstrates this:

import tensorflow as tf
from tensorflow.keras import layers, models


def simpleMLP(in_size, hidden_sizes, num_classes, dropout_prob=0.5):
    in_x = layers.Input(shape=(in_size,))
    hidden_x = models.Sequential(name="hidden_layers")
    for i, num_h in enumerate(hidden_sizes):
        if i == 0:
            # Only the first hidden layer needs an explicit input shape.
            hidden_x.add(layers.Dense(num_h, input_shape=(in_size,)))
        else:
            hidden_x.add(layers.Dense(num_h))
        hidden_x.add(layers.Activation('relu'))
        hidden_x.add(layers.Dropout(dropout_prob))
    out_x1 = layers.Dense(num_classes, activation='softmax', name='baseline1')
    out_x2 = layers.Dense(3, activation='softmax', name='baseline2')
    return models.Model(inputs=in_x, outputs=out_x2(out_x1(hidden_x(in_x))))

baseline_mdl = simpleMLP(28*28, [500, 300], 10)
baseline_mdl.summary()

Model: "functional_1" _________________________________________________________________ Layer (type) Output Shape Param #
================================================================= input_1 (InputLayer) [(None, 784)] 0
_________________________________________________________________ hidden_layers (Sequential) (None, 300) 542800
_________________________________________________________________ baseline1 (Dense) (None, 10) 3010
_________________________________________________________________ baseline2 (Dense) (None, 3) 33
================================================================= Total params: 545,843 Trainable params: 545,843 Non-trainable params: 0 _________________________________________________________________

# Reuse the original input and take the second-to-last layer's output.
baseline_in = baseline_mdl.layers[0].input
baseline_out = baseline_mdl.layers[-2].output
new_baseline = models.Model(inputs=baseline_in,
                            outputs=baseline_out)
new_baseline.summary()

Model: "functional_3" _________________________________________________________________ Layer (type) Output Shape Param #
================================================================= input_1 (InputLayer) [(None, 784)] 0
_________________________________________________________________ hidden_layers (Sequential) (None, 300) 542800
_________________________________________________________________ baseline1 (Dense) (None, 10) 3010
================================================================= Total params: 545,810 Trainable params: 545,810 Non-trainable params: 0
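To confirm the truncated model still runs on data, here is a minimal usage sketch (the random batch is my own assumption, purely for illustration):

import numpy as np

# Hypothetical batch of 4 flattened 28x28 inputs.
x = np.random.rand(4, 28 * 28).astype('float32')

# new_baseline now outputs the 10-dimensional 'baseline1' activations.
features = new_baseline.predict(x)
print(features.shape)  # (4, 10)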


As you can see, I removed the last layer and am still able to use the trained weights.

Note that the details may differ slightly depending on your model, but these are the general principles you should follow and adapt.
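For your specific model, the same recipe would look roughly like this (assuming your trained model is in a variable called model and images is a batch of inputs; both names are placeholders, not from your post):

# Assumption: `model` is the trained CNN from your summary above.
feature_in = model.layers[0].input
# The second-to-last layer is the global average pooling layer ...
feature_out = model.layers[-2].output
# ... or, equivalently, select it by name:
# feature_out = model.get_layer('global_average_pooling2d').output

feature_extractor = models.Model(inputs=feature_in, outputs=feature_out)
features = feature_extractor.predict(images)  # shape: (num_images, 2304)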
