
How to solve Runtime Error: Empty min/max for tensor Cast while doing post-training quantization (fully quantized tflite model from saved_model)?

I'm trying to create a fully quantized tflite model so I can run it on a Coral device. I downloaded SSD MobileNet V2 FPNLite 640x640 from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

In a virtual environment I installed tf-nightly-2.5.0.dev20201123, tf-nightly-models, and tensorflow/object_detection_0.1.

I run this code to do the post-training quantization:

import tensorflow as tf
import cv2
import numpy as np

# Path to the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(
    './0-ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model/',
    signature_keys=['serving_default'])

VIDEO_PATH = '/home/andrej/Videos/outvideo3.h264'

def rep_data_gen():
    REP_DATA_SIZE = 10  # originally 1000
    a = []
    video = cv2.VideoCapture(VIDEO_PATH)
    i = 0
    while video.isOpened():
        ret, img = video.read()
        i += 1
        if not ret or i > REP_DATA_SIZE:
            print('Reached the end of the video!')
            break
        img = cv2.resize(img, (640, 640))  # TODO: parametrize based on network input size
        img = img.astype(np.uint8)
        #img = (img / 127.5) - 1
        #img = img.astype(np.float32)  # causes a type-mismatch error
        a.append(img)
    a = np.array(a)
    print(a.shape)  # a is an np array of REP_DATA_SIZE 3D images
    for i in tf.data.Dataset.from_tensor_slices(a).batch(1).take(REP_DATA_SIZE):
        yield [i]

# TF2 models
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
#converter.quantized_input_stats = {'inputs': (0, 255)}  # does not help
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
quantized_model = converter.convert()

# Save the model.
with open('quantized_model.tflite', 'wb') as f:
    f.write(quantized_model)
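Independent of the video source, the representative-dataset generator above boils down to yielding one uint8 batch per call. A minimal sketch with synthetic frames (no cv2 or TensorFlow needed; the 640x640x3 shape matches this model's input):

```python
import numpy as np

REP_DATA_SIZE = 10
INPUT_SIZE = 640  # must match the network's input resolution

def rep_data_gen():
    # Synthetic stand-in for video frames; in the real script the frames
    # come from cv2.VideoCapture and are resized to INPUT_SIZE.
    rng = np.random.default_rng(0)
    for _ in range(REP_DATA_SIZE):
        frame = rng.integers(0, 256, (INPUT_SIZE, INPUT_SIZE, 3), dtype=np.uint8)
        # The converter calls the generator and expects a list of inputs,
        # each carrying a leading batch dimension.
        yield [frame[np.newaxis, ...]]

samples = list(rep_data_gen())
print(len(samples), samples[0][0].shape, samples[0][0].dtype)
# → 10 (1, 640, 640, 3) uint8
```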

I got:

RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor Cast
Empty min/max for tensor Cast

I trained the same model, SSD MobileNet V2 FPNLite 640x640, using the script model_main_tf2.py and then exported the checkpoint to a saved_model using the script exporter_main_v2.py. When trying to convert to ".tflite" for use on the Edge TPU I had the same problem.

The solution for me was to export the trained model using the script export_tflite_graph_tf2.py instead of exporter_main_v2.py to generate the saved_model.pb. After that, the conversion went fine.

Maybe try generating the saved_model using export_tflite_graph_tf2.py.
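As a rough sketch, the export step with export_tflite_graph_tf2.py from the TF2 Object Detection API looks like this (the paths are placeholders for your own pipeline config, checkpoint, and output directory):

```shell
# Run from the models/research directory of the TensorFlow models repo.
# export_tflite_graph_tf2.py produces a TFLite-friendly SavedModel
# (SSD models only); the three paths below are placeholders.
python object_detection/export_tflite_graph_tf2.py \
  --pipeline_config_path path/to/pipeline.config \
  --trained_checkpoint_dir path/to/checkpoint \
  --output_directory path/to/tflite_export
```

The resulting `path/to/tflite_export/saved_model/` is then what you pass to `tf.lite.TFLiteConverter.from_saved_model`.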
