
Quantization not yet supported for op: 'DEQUANTIZE' for tensorflow 2.x

I am performing quantization-aware training (QAT) with Keras on a ResNet model, and I ran into this error while converting it to a fully integer-quantized TFLite model. I have tried the latest tf-nightly, but it does not solve the problem. During QAT I use a quantization-annotated model so that Batch Normalization layers are quantized as well.

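For reference, here is a minimal sketch of how such a quantization-annotated model can be built with the tensorflow_model_optimization (tfmot) package; the ResNet50 constructor and the variable names below are illustrative placeholders, not my exact training code:

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Illustrative base model; any Keras ResNet works the same way.
base_model = tf.keras.applications.ResNet50(
    weights=None, input_shape=(224, 224, 3), classes=10)

# Annotate the whole model for quantization, then apply the QAT
# wrappers, which insert fake-quant ops (including around the
# Conv + BatchNorm folds emulated during training).
annotated = tfmot.quantization.keras.quantize_annotate_model(base_model)
qat_model = tfmot.quantization.keras.quantize_apply(annotated)
qat_model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])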

Here is the code I use to convert my model:

import numpy as np
import tensorflow as tf

# `layer` is the QAT-annotated Keras model; `train_generator` yields
# batches from the training set.
converter = tf.lite.TFLiteConverter.from_keras_model(layer)

def representative_dataset_gen():
    # 50 representative samples let the converter calibrate activation
    # ranges for full-integer quantization.
    for _ in range(50):
        batch = next(train_generator)
        img = np.expand_dims(batch[0][0], 0).astype(np.float32)
        yield [img]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8
]
converter.experimental_new_converter = True

# converter.target_spec.supported_types = [tf.int8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8

quantized_tflite_model = converter.convert()
with open("test_try_v2.tflite", "wb") as f:
    f.write(quantized_tflite_model)
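As a sanity check (a sketch of my own, not part of the original conversion script), the converted file can be loaded back to confirm the input and output tensors really are int8:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="test_try_v2.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["dtype"])   # expect numpy.int8
print(interpreter.get_output_details()[0]["dtype"])  # expect numpy.int8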

If I bypass this error by adding tf.lite.OpsSet.TFLITE_BUILTINS to target_spec.supported_ops, I still get the same DEQUANTIZE problem from the Edge TPU compiler:

ERROR: :61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true.
ERROR: Node number 3 (DEQUANTIZE) failed to prepare.
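One way to see which tensors the surviving DEQUANTIZE ops still produce in float (again a diagnostic sketch under my own assumptions, using the file written above):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="test_try_v2.tflite")
for t in interpreter.get_tensor_details():
    if t["dtype"] == np.float32:
        # Any float32 tensor left in a "fully int8" model is a red flag.
        print("float32 tensor:", t["name"])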

The reason is that DEQUANTIZE is not yet supported for full 8-bit integer inference in TensorFlow versions before 2.4. The solution is therefore to go back to TF 1.x or to use TF 2.4 instead.
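A quick guard along those lines (my own sketch, not part of the answer itself): check the installed TensorFlow version before attempting the conversion:

import tensorflow as tf

# Full-integer conversion of QAT models needs the DEQUANTIZE support
# that landed in TF 2.4.
major, minor = (int(x) for x in tf.__version__.split(".")[:2])
assert (major, minor) >= (2, 4), f"TensorFlow {tf.__version__} is too old"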

