Quantization-aware training in TensorFlow using the high-level Keras API
Transfer Learning with Quantization-Aware Training using the Functional API
I have a model that uses transfer learning with MobileNetV2, and I want to quantize it and compare the accuracy difference against the non-quantized transfer-learning model. Recursive quantization is not fully supported, but according to this comment, the following approach should quantize my model: https://github.com/tensorflow/model-optimization/issues/377#issuecomment-820948555
What I tried is:
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Freeze every layer of the pretrained backbone except the last one
pretrained_model = tf.keras.applications.MobileNetV2(include_top=False)
pretrained_model.trainable = True
for layer in pretrained_model.layers[:-1]:
    layer.trainable = False

quantize_model_pretrained = tfmot.quantization.keras.quantize_model
q_pretrained_model = quantize_model_pretrained(pretrained_model)

# Wrap the quantized backbone in an outer functional model
original_inputs = tf.keras.layers.Input(shape=(224, 224, 3))
y = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(original_inputs)
y = q_pretrained_model(y)
y = tf.keras.layers.GlobalAveragePooling2D()(y)
original_outputs = tf.keras.layers.Dense(5, activation="softmax")(y)
model_1 = tf.keras.Model(original_inputs, original_outputs)

# This fails: the outer model contains a nested Model
quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model_1)
It still gives me the following error:
ValueError: Quantizing a tf.keras Model inside another tf.keras Model is not supported.
I would like to understand what the correct way to perform quantization-aware training is in this case.
Based on the issue you mentioned, you should quantize each model separately and then combine them. Something like this:
import tensorflow as tf
import tensorflow_model_optimization as tfmot

pretrained_model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3), include_top=False)
pretrained_model.trainable = True
for layer in pretrained_model.layers[:-1]:
    layer.trainable = False

# Quantize the backbone and the classification head separately
q_pretrained_model = tfmot.quantization.keras.quantize_model(pretrained_model)
q_base_model = tfmot.quantization.keras.quantize_model(tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(input_shape=(7, 7, 1280)),
    tf.keras.layers.Dense(5, activation="softmax"),
]))

# Combine the two already-quantized models without quantizing the outer model
original_inputs = tf.keras.layers.Input(shape=(224, 224, 3))
y = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(original_inputs)
y = q_pretrained_model(y)
original_outputs = q_base_model(y)
model = tf.keras.Model(original_inputs, original_outputs)
It does not look like this is supported out of the box yet, even though it is claimed to be.