
Accessing intermediate tensors of a Keras Model that were not explicitly exposed as layers in TF 2.0

Is it possible to access pre-activation tensors in a Keras Model? For example, given this model:

import tensorflow as tf
image_ = tf.keras.Input(shape=[224, 224, 3], batch_size=1)
vgg19 = tf.keras.applications.VGG19(include_top=False, weights='imagenet', input_tensor=image_, input_shape=image_.shape[1:], pooling=None)

the usual way to access layers is:

intermediate_layer_model = tf.keras.models.Model(inputs=image_, outputs=[vgg19.get_layer('block1_conv2').output])
intermediate_layer_model.summary()

This gives the ReLU outputs for a layer, while I would like the ReLU inputs. I tried doing this:

graph = tf.function(vgg19, [tf.TensorSpec.from_tensor(image_)]).get_concrete_function().graph
outputs = [graph.get_tensor_by_name(tname) for tname in [
    'vgg19/block4_conv3/BiasAdd:0',
    'vgg19/block4_conv4/BiasAdd:0',
    'vgg19/block5_conv1/BiasAdd:0'
]]
intermediate_layer_model = tf.keras.models.Model(inputs=image_, outputs=outputs)
intermediate_layer_model.summary()

but I get the error

ValueError: Unknown graph. Aborting.

The only workaround I've found is to edit the model file to manually expose the intermediates, turning every layer like this:

x = layers.Conv2D(256, (3, 3), activation="relu", padding="same", name="block3_conv1")(x)

into 2 layers where the 1st one can be accessed before activations:

x = layers.Conv2D(256, (3, 3), activation=None, padding="same", name="block3_conv1")(x)
x = layers.ReLU(name="block3_conv1_relu")(x)

Is there a way to access pre-activation tensors in a Model without essentially editing TensorFlow 2 source code, or reverting to TensorFlow 1, which had full flexibility in accessing intermediates?

To get the output of each layer, you can define a Keras backend function for each layer and evaluate it.

Please refer to the code shown below:

from tensorflow.keras import backend as K

inp = model.input                                           # input 
outputs = [layer.output for layer in model.layers]          # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions
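An equivalent and more TF2-idiomatic alternative is a single multi-output `Model`, the same pattern used elsewhere in this thread. A minimal sketch, using a tiny stand-in model (hypothetical, just to keep the snippet self-contained) in place of your own `model`:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model (hypothetical); substitute your own `model`
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu", name="d1"),
    tf.keras.layers.Dense(2, activation="relu", name="d2"),
])

# One multi-output Model instead of one K.function per layer
all_outputs = tf.keras.Model(
    inputs=model.input,
    outputs=[layer.output for layer in model.layers],
)

x = np.random.rand(1, 8).astype("float32")
layer_outs = all_outputs(x)          # list: one tensor per layer
for layer, out in zip(model.layers, layer_outs):
    print(layer.name, out.shape)
```

Note that either way you get the *post*-activation outputs, which is exactly the limitation the question is about.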

For more details on this, please refer to this SO answer.

There is a way to access pre-activation outputs of pretrained Keras models using TF version 2.7.0. Here's how to access two intermediate pre-activation outputs from VGG19.

Initialize 3 VGG19 models. The first one is used only to compare results afterwards and for sanity checks. We can omit the top layers to avoid loading unnecessary parameters into memory.

vgg19 = tf.keras.applications.VGG19(
    include_top=False,
    weights="imagenet"
)

vgg192 = tf.keras.applications.VGG19(
    include_top=False,
    weights="imagenet"
)

vgg193 = tf.keras.applications.VGG19(
    include_top=False,
    weights="imagenet"
)

This is the important part: change the activation of the conv layers to linear (i.e., no activation).

vgg192.get_layer("block2_conv2").activation = tf.keras.activations.linear

vgg193.get_layer("block5_conv4").activation = tf.keras.activations.linear

This is also why we need a separate VGG19 model for each intermediate pre-activation output we want: changing the activation of a lower layer changes the outputs of all higher layers as well.
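This knock-on effect is easy to see with a toy two-layer network in NumPy (illustrative only, with hand-picked weights, not VGG19): switching the first layer's ReLU to linear exposes the pre-activation we want, but corrupts everything computed downstream of it.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Toy 2-layer net with hand-picked weights (illustrative only)
W1 = np.array([[1.0, -1.0],
               [1.0, -1.0]])
W2 = np.array([[1.0, 0.0],
               [1.0, 0.0]])
x = np.array([1.0, 1.0])

# Original network: ReLU after both layers
h1 = relu(x @ W1)             # [2., 0.]
out_original = relu(h1 @ W2)  # [2., 0.]

# Layer 1 switched to linear: its pre-activation is exposed correctly ...
h1_pre = x @ W1               # [2., -2.]  <- the tensor we wanted
print(np.allclose(relu(h1_pre), h1))  # True: ReLU recovers the old h1

# ... but everything downstream is now computed from the wrong input
out_after_change = relu(h1_pre @ W2)  # [0., 0.]
print(np.allclose(out_original, out_after_change))  # False
```

This is why one modified model per intermediate output is needed: each model may only have its activation changed at the single layer whose pre-activation is being read out.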

Finally, get the outputs and check that they equal the post-activation outputs once we apply the ReLU activation.

from tensorflow.keras import Model

inter = Model(vgg19.input, [vgg19.get_layer("block2_conv2").output, vgg19.get_layer("block5_conv4").output])
inter2 = Model(vgg192.input, [vgg192.get_layer("block2_conv2").output])
inter3 = Model(vgg193.input, [vgg193.get_layer("block5_conv4").output])

# img: an input image batch of shape (1, H, W, 3)
b2c2, b5c4 = inter(tf.keras.applications.vgg19.preprocess_input(img))
b2c2_preact = inter2(tf.keras.applications.vgg19.preprocess_input(img))
b5c4_preact = inter3(tf.keras.applications.vgg19.preprocess_input(img))

print(np.allclose(tf.keras.activations.relu(b2c2_preact).numpy(),b2c2.numpy()))
print(np.allclose(tf.keras.activations.relu(b5c4_preact).numpy(),b5c4.numpy()))
True
True

Here's a visualization similar to Fig. 6 of Wang et al. to see the effect in the feature space.

[Figure: VGG19 intermediate feature maps]

Input image

