
ValueError: Unknown layer: Functional

I made a CNN in Colab and saved the model at every epoch. I exported the h5 file and am now trying to run the model on some test images. Here's the main error:

ValueError: Unknown layer: Functional

Here's the code I used to run the model and save at each epoch:

import tensorflow as tf
from tensorflow import keras

epochs = 50

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='./logs'),
    keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"),
]
model.compile(
    optimizer=keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(
    train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds,
)

After the model ran I downloaded the h5 file from the Colab sidebar to my local disk. I then re-uploaded the file, and here's how I'm trying to load the model:

# load and evaluate a saved model
from tensorflow.keras.models import load_model

# load model
loaded_model = load_model('save_at_47.h5')
loaded_model.layers[0].input_shape

Here's the full traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-6af7396280fa> in <module>()
      3 
      4 # load model#
----> 5 loaded_model = load_model('save_at_47.h5')
      6 loaded_model.layers[0].input_shape

5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
    182     if (h5py is not None and (
    183         isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
--> 184       return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
    185 
    186     if sys.version_info >= (3, 4) and isinstance(filepath, pathlib.Path):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
    176     model_config = json.loads(model_config.decode('utf-8'))
    177     model = model_config_lib.model_from_config(model_config,
--> 178                                                custom_objects=custom_objects)
    179 
    180     # set weights

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects)
     53                     '`Sequential.from_config(config)`?')
     54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
---> 55   return deserialize(config, custom_objects=custom_objects)
     56 
     57 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
    107       module_objects=globs,
    108       custom_objects=custom_objects,
--> 109       printable_module_name='layer')

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    360     config = identifier
    361     (cls, cls_config) = class_and_config_for_serialized_keras_object(
--> 362         config, module_objects, custom_objects, printable_module_name)
    363 
    364     if hasattr(cls, 'from_config'):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
    319   cls = get_registered_object(class_name, custom_objects, module_objects)
    320   if cls is None:
--> 321     raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
    322 
    323   cls_config = config['config']

ValueError: Unknown layer: Functional

It seems there have been several similar questions here and here. Changing the import method hasn't helped, and trying to make some kind of custom object hasn't worked either.

The reason for this error is simple: you trained the model on one version of TensorFlow and Keras (for example TensorFlow 2.3.0 with Keras 2.4.3, on Colab or locally), and you are now loading the saved model (.h5) with a different version of TensorFlow/Keras. That version mismatch produces this error. The solution is either to retrain the model with the upgraded versions, or to downgrade your TensorFlow/Keras to the same versions the model was trained with.
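A quick way to confirm the mismatch is to print the versions in both environments (a minimal check; run it in the Colab notebook that saved the model and again on the machine that loads it):

import tensorflow as tf

# The TensorFlow version and its bundled Keras version should match
# between the environment that saved the .h5 file and the one loading it.
print("TensorFlow:", tf.__version__)
print("Keras:", tf.keras.__version__)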

Rebuilt the network from scratch:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

image_size = (212, 212)
batch_size = 32

data_augmentation = keras.Sequential(
    [
        layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
        layers.experimental.preprocessing.RandomRotation(0.8),
    ]
)


def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)
    # Image augmentation block
    x = data_augmentation(inputs)

    # Entry block
    x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
    x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    x = layers.Conv2D(64, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    previous_block_activation = x  # Set aside residual

    for size in [128, 256, 512, 728]:
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(size, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)

        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(size, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)

        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)

        # Project residual
        residual = layers.Conv2D(size, 1, strides=2, padding="same")(
            previous_block_activation
        )
        x = layers.add([x, residual])  # Add back residual
        previous_block_activation = x  # Set aside next residual

    x = layers.SeparableConv2D(1024, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    x = layers.GlobalAveragePooling2D()(x)
    if num_classes == 2:
        activation = "sigmoid"
        units = 1
    else:
        activation = "softmax"
        units = num_classes

    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(units, activation=activation)(x)
    return keras.Model(inputs, outputs)


model = make_model(input_shape=image_size + (3,), num_classes=2)
keras.utils.plot_model(model, show_shapes=False)

Loaded the weights:

model.load_weights('save_at_47.h5')

And ran a prediction on an image:

# Running inference on new data
img = keras.preprocessing.image.load_img(
    "le_image.jpg", target_size=image_size
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # Create batch axis

predictions = model.predict(img_array)
score = predictions[0]
print(
    "This image is %.2f percent negative and %.2f percent positive."
    % (100 * (1 - score), 100 * score)
)

I had the same issue when I was on TF 2.3.0; I downgraded to TF 2.2.0 and it worked.
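For example, you can pin the exact version with pip (substitute whatever version your model was trained with or needs to be loaded with):

pip3 install tensorflow==2.2.0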

I faced the same problem when training a model with TF 2.3 on Colab and loading it with TF 2.2 on my local machine. The solution is to upgrade TensorFlow with this command:

pip3 install --upgrade tensorflow

The discussion here saved me some trouble! I'll share a different solution for a similar situation where model serialization is done manually (not from the fit method), using a model structure file (yaml) and an h5 weights file. In this case, simply editing the yaml file was sufficient to adapt the serialized model for an older TensorFlow version.

In my case I generated the model files like this:

    model_config = { "class_name": "Model",
                     "config":model.get_config()
                     }

    with open(os.path.join(dump_model_dir, "model_config.yaml"), "w") as file:
        file.write(model.to_yaml())

    model.save_weights(os.path.join(dump_model_dir, "model_weights.hdf5"))

Doing this with TF 2.2 and TF 2.3 reveals a minor change in the yaml file. The diff between the two versions, for a model structure with only Conv2D layers, is shown below. It is simple to fix manually or with sed:

One needs to change class_name: Functional into class_name: Model and remove the groups: 1 entry that is new in the Conv2D config (and probably in other Conv layers). For consistency I also changed the keras_version at the end. Obviously this will only work if the groups parameter was left at its default value of 1. A similar solution should apply if you use json instead of yaml for the model structure. A small Python sketch that automates the edit is shown after the diff.

*** model_config_tf22.yaml  2021-01-07 15:00:03.042791215 +0100
--- model_config_tf23.yaml  2021-01-07 14:59:56.426791386 +0100
***************
*** 1,5 ****
  backend: tensorflow
! class_name: Model
  config:
    input_layers:
    - - input_1
--- 1,5 ----
  backend: tensorflow
! class_name: Functional
  config:
    input_layers:
    - - input_1
***************
*** 34,39 ****
--- 34,40 ----
        - 1
        dtype: float32
        filters: 128
+       groups: 1
        kernel_constraint: null
        kernel_initializer:
          class_name: RandomUniform
***************
*** 343,346 ****
    - - Conv2D_5_37
      - 0
      - 0
! keras_version: 2.3.0-tf
--- 351,354 ----
    - - Conv2D_5_37
      - 0
      - 0
! keras_version: 2.4.0
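If you prefer not to edit the file by hand or with sed, here is a minimal Python sketch of the same fix. It assumes the yaml layout shown in the diff above and uses the file names from that diff (model_config_tf23.yaml as input, model_config_tf22.yaml as output); the keras_version line can also be adjusted for consistency.

# Rewrite "class_name: Functional" as "class_name: Model" and drop the
# "groups: 1" lines that TF 2.3 adds to Conv2D configs.
in_path = "model_config_tf23.yaml"   # saved with TF 2.3
out_path = "model_config_tf22.yaml"  # loadable with TF 2.2

with open(in_path) as src:
    lines = src.readlines()

fixed = []
for line in lines:
    if line.strip() == "groups: 1":
        continue  # only safe if groups was left at its default value of 1
    fixed.append(line.replace("class_name: Functional", "class_name: Model"))

with open(out_path, "w") as dst:
    dst.writelines(fixed)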

The following trick can be useful if you are using an older TF (in my case 2.1.0), trying to load an h5 file saved by a newer TF (e.g. 2.4.1), and facing this error.

your_model = tf.keras.models.load_model(
    path_to_h5,
    custom_objects={'Functional': tf.keras.models.Model})

You should try to install and upgrade:

!pip install --upgrade tensorflow
!pip install keras_efficientnet_v2
!pip install efficientnet
