
ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported

I've just gotten started with Keras/TensorFlow and I am trying to retrain a MobileNetV2 and quantize it to int8, but I am getting this error:

ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported.

I was following this guide for the quantization steps, but I am not sure exactly what I am doing differently.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

IMG_SHAPE = (224, 224, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                                  include_top=False, 
                                                  weights='imagenet')
base_model.trainable = False
model = tf.keras.Sequential([
  base_model,
  tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
  tf.keras.layers.Dropout(0.5),
  tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(units=2, activation='softmax')
])

quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)

Stack Trace:

ValueError                                Traceback (most recent call last)

<ipython-input-34-b724ad4872a5> in <module>()
      9 
     10 quantize_model = tfmot.quantization.keras.quantize_model
---> 11 q_aware_model = quantize_model(model)

4 frames

/usr/local/lib/python3.7/dist-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py in _add_quant_wrapper(layer)
    217     if isinstance(layer, tf.keras.Model):
    218       raise ValueError(
--> 219           'Quantizing a tf.keras Model inside another tf.keras Model is not supported.'
    220       )
    221 

quantize_model walks the layers of the model you pass in and raises this error when one of those layers is itself a tf.keras.Model, which is exactly what happens here: your base_model behaves as if it were a single layer. In order to expand it, you need to use the Functional API rather than the Sequential API:

IMG_SHAPE = (224, 224, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                                  include_top=False, 
                                                  weights='imagenet')
base_model.trainable = False
x = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu')(base_model.output)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(units=2, activation='softmax')(x)

model = tf.keras.Model(base_model.input, x)
model.summary()

Notice that the model summary now shows all of the layers, including the base_model's. Then you can apply:

quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)
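
From here, the usual path to the int8 model mentioned in the question is to briefly fine-tune q_aware_model and then run it through the TFLite converter. This is only a minimal sketch, not part of the original post: train_images and train_labels are placeholders for your own dataset.

# Fine-tune the quantization-aware model as usual
q_aware_model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
# train_images / train_labels are placeholders for your own data
q_aware_model.fit(train_images, train_labels, epochs=1, batch_size=32)

# Convert with the TFLite converter; Optimize.DEFAULT tells it to use the
# quantization parameters learned during quantization-aware training
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()

with open('mobilenetv2_int8.tflite', 'wb') as f:
    f.write(quantized_tflite_model)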
