
Upgraded to TensorFlow 2.5, now getting a Lambda layer error when using pretrained Keras Applications models

I followed this tutorial to build a Siamese network for my problem. I was using TensorFlow 2.4.1 and have now upgraded to 2.5.

This code worked wonderfully before:

base_cnn = resnet.ResNet50(
    weights="imagenet", input_shape=target_shape + (3,), include_top=False
)

flatten = layers.Flatten()(base_cnn.output)
dense1 = layers.Dense(512, activation="relu")(flatten)
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)

embedding = Model(base_cnn.input, output, name="Embedding")

# Freeze every layer up to conv5_block1_out; fine-tune from that block onward.
trainable = False
for layer in base_cnn.layers:
    if layer.name == "conv5_block1_out":
        trainable = True
    layer.trainable = trainable

Now every pretrained model, whether ResNet, MobileNet, or EfficientNet (I tried them all), throws warnings like this for its layers:

WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.nn.convolution_620), but
are not present in its tracked objects:
  <tf.Variable 'stem_conv/kernel:0' shape=(3, 3, 3, 48) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.

It compiles and seems to fit.

But do we have to initialize the models somewhat differently in 2.5?

Thanks for any pointers!

There is no need to revert to TF 2.4.1 here. I would always recommend trying the latest version, because it addresses many performance issues and adds new features.

I was able to execute the above code without any issues in TF 2.5:

import tensorflow as tf
print(tf.__version__)
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, Model


img_width, img_height = 224, 224
target_shape = (img_width, img_height, 3)


base_cnn = ResNet50(
    weights="imagenet", input_shape=target_shape, include_top=False
)

flatten = layers.Flatten()(base_cnn.output)
dense1 = layers.Dense(512, activation="relu")(flatten)
dense1 = layers.BatchNormalization()(dense1)
dense2 = layers.Dense(256, activation="relu")(dense1)
dense2 = layers.BatchNormalization()(dense2)
output = layers.Dense(256)(dense2)

embedding = Model(base_cnn.input, output, name="Embedding")

# Freeze every layer up to conv5_block1_out; fine-tune from that block onward.
trainable = False
for layer in base_cnn.layers:
    if layer.name == "conv5_block1_out":
        trainable = True
    layer.trainable = trainable

Output:

2.5.0
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94773248/94765736 [==============================] - 1s 0us/step
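
To verify that the freezing loop behaved as intended, here is a quick check (my addition, not part of the original snippet) of how many layers remain trainable:

# Everything before conv5_block1_out stays frozen; that block onward is trainable.
n_trainable = sum(layer.trainable for layer in base_cnn.layers)
print(n_trainable, "of", len(base_cnn.layers), "layers are trainable")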

As per @Olli, restarting the kernel and clearing the session resolved the problem.
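
For reference, a minimal sketch of clearing the Keras session programmatically (this resets global Keras state such as layer name counters; it does not replace a kernel restart):

import tensorflow as tf

# Drop the global Keras state left over from previously built models.
tf.keras.backend.clear_session()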

I'm not sure of the main reason for your issue, as it isn't generally reproducible. But here are some notes about that warning message. The traceback shown in your question is not from ResNet but from EfficientNet.

Now, we know that the Lambda layer exists so that arbitrary expressions can be used as a Layer when constructing Sequential and Functional API models. Lambda layers are best suited for simple operations or quick experimentation. While it is possible to use Variables with Lambda layers, this practice is discouraged, as it can easily lead to bugs. For example:

import tensorflow as tf 

x_input = tf.range(12.).numpy().reshape(-1, 4)
weights = tf.Variable(tf.random.normal((4, 2)), name='w')
bias = tf.ones((1, 2), name='b')

# A Lambda layer that closes over an external tf.Variable
mylayer1 = tf.keras.layers.Lambda(
    lambda x: tf.add(tf.matmul(x, weights), bias), name='lambda1')
mylayer1(x_input)
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (lambda1), but
are not present in its tracked objects:
  <tf.Variable 'w:0' shape=(4, 2) dtype=float32, numpy=
array([[-0.753139  , -1.1668463 ],
       [-1.3709341 ,  0.8887151 ],
       [ 0.3157893 ,  0.01245957],
       [-1.3878908 , -0.38395467]], dtype=float32)>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
<tf.Tensor: shape=(3, 2), dtype=float32, numpy=
array([[ -3.903028 ,   0.7617702],
       [-16.687727 ,  -1.8367348],
       [-29.472424 ,  -4.43524  ]], dtype=float32)>

This is because the mylayer1 layer doesn't track the tf.Variable directly, so those parameters won't appear in mylayer1.trainable_weights:

mylayer1.trainable_weights
[]
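
For comparison, here is a minimal sketch of the same computation written as a subclassed Layer (MyDense is my own illustrative name); because the weights are created with add_weight inside the layer, Keras tracks them and no warning is raised:

import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Weights created via add_weight are tracked by the layer.
        self.w = self.add_weight(name='w', shape=(input_shape[-1], self.units),
                                 initializer='random_normal', trainable=True)
        self.b = self.add_weight(name='b', shape=(self.units,),
                                 initializer='ones', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

x_input = tf.range(12.).numpy().reshape(-1, 4)
mylayer2 = MyDense(2, name='subclassed1')
mylayer2(x_input)
print(mylayer2.trainable_weights)  # both 'w' and 'b' are listed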

In general, Lambda layers are convenient for simple, stateless computation, but anything more complex should use a subclassed Layer instead. From your traceback, it seems such a scenario may be occurring with the stem_conv layer:

from tensorflow.keras.applications import EfficientNetB0

for layer in EfficientNetB0(weights=None).layers:
    if layer.name == 'stem_conv':
        print(layer)
<tensorflow.python.keras.layers.convolutional.Conv2D object.. 

A quick survey of the source code of tf.compat.v1.nn.conv2d leads to a lambda expression that might be the cause.
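
Despite the warning, the stem convolution's kernel appears to still be tracked at the model level, which is consistent with your model compiling and fitting. A quick sanity check (assuming a fresh session, so layer names carry no numeric suffixes):

from tensorflow.keras.applications import EfficientNetB0

model = EfficientNetB0(weights=None)
# If the optimizer can see the kernel, it will be updated during fit().
print(any('stem_conv' in v.name for v in model.trainable_weights))  # True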

pip install tensorflow==2.3.0 worked for me instead of TF 2.5; I was facing the issue related to using a Lambda layer.
