Retrained Tflite/Pb models extracted from the same checkpoint give different results
After I retrained the pre-trained ssd mobilenet v1 model on my own image dataset using the object_detection\model_main.py script, I exported both a frozen .pb graph (with the export_inference_graph.py script):
python models\research\object_detection\export_inference_graph.py
--input_type image_tensor
--input_shape=1,300,300,3
--pipeline_config_path ssd_mobilenet_v1_test.config
--trained_checkpoint_prefix training/model.ckpt
--output_directory export\freeze\
and a .tflite graph (with the export_tflite_ssd_graph.py script and tflite_convert):
python models\research\object_detection\export_tflite_ssd_graph.py
--input_type image_tensor
--pipeline_config_path ssd_mobilenet_v1_test.config
--trained_checkpoint_prefix training/model.ckpt
--output_directory export\tflite\
--max_detections 16
--add_postprocessing_op=true
tflite_convert
--output_file=export\tflite\model.tflite
--graph_def_file=export\tflite\tflite_graph.pb
--input_shapes=1,300,300,3
--input_arrays=normalized_input_image_tensor
--output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3
--inference_type=QUANTIZED_UINT8
--mean_values=128
--std_dev_values=128
--default_ranges_min=0
--default_ranges_max=6
--allow_custom_ops
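As a side note on the quantization flags above: TFLite maps a uint8 input back to real values as real = (quantized - mean) / std, so --mean_values=128 with --std_dev_values=128 normalizes pixels to roughly [-1, 1]. A minimal sketch of that mapping (values here are illustrative):

```python
# Sketch of the dequantization implied by --mean_values=128 and
# --std_dev_values=128: real = (quantized - mean) / std.
import numpy as np

mean, std = 128.0, 128.0
quantized = np.array([0, 128, 255], dtype=np.float32)  # uint8 extremes and midpoint
real = (quantized - mean) / std
print(real)  # -> [-1.  0.  0.9921875], i.e. roughly [-1, 1]
```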
The pb graph seems to work just fine, but the tflite one falsely detects everything on Android, so I get 16 out of 16 possible detections whatever image I pass to it, even an image filled with black (I test it on an Android device; it works well with the pre-trained model).
Changing converter options, such as disabling/enabling quantization or the image std/mean, didn't change anything. I also compared my tflite graph to the example mobilenet graph and they look pretty similar. Any ideas what can cause this problem?
(Windows 10 / CUDA 9.0 / cuDNN 7.0 / tf-nightly-gpu / models-master)
The output tensors from the tflite model appear to return some extreme values (e.g. 5e35 or -3e34). Since some of these score values are greater than 1, they count as detections.
My solution: replace all values greater than a limit (I used 1e5) with 0. (Python was faster.)
tensor[tensor > 1e5] = 0
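A self-contained sketch of that workaround, where the score values are made up to mimic the reported garbage outputs:

```python
# Zero out implausible score values before applying the usual
# detection threshold. The 1e5 cutoff is the limit used above;
# the 0.5 score threshold is an assumed typical value.
import numpy as np

scores = np.array([0.87, 5e35, 0.12, -3e34, 0.66], dtype=np.float32)

scores[scores > 1e5] = 0   # replace extreme values with 0
valid = scores > 0.5       # then threshold as usual
print(valid)               # -> [ True False False False  True]
```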
It is weird that this doesn't happen with the example detector.tflite or an exported frozen inference graph. There must be a proper way to export tflite models.