
Converting ONNX model to TensorFlow Lite

I've got some models from the ONNX Model Zoo. I'd like to use models from here in a TensorFlow Lite (Android) application, and I'm running into problems figuring out how to get the models converted.

From what I've read, the process I need to follow is to convert the ONNX model to a TensorFlow model, then convert that TensorFlow model to a TensorFlow Lite model.

import onnx
from onnx_tf.backend import prepare
import tensorflow as tf

# Load the ONNX model and convert it with the onnx-tf backend
onnx_model = onnx.load('./some-model.onnx')
tf_rep = prepare(onnx_model)
# Export the converted model as a TensorFlow graph
tf_rep.export_graph("some-model.pb")

After the above executes, I have the file some-model.pb, which I believe contains a TensorFlow frozen graph. From here I am not sure where to go. When I search, I find a lot of answers that are for TensorFlow 1.x (which I only realize after the samples I find fail to execute). I'm trying to use TensorFlow 2.x.
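One thing that helps at this point is listing the node names in the frozen graph, since the TFLite converter needs the input and output tensor names. This is a minimal sketch, assuming TensorFlow 2.x is installed and the some-model.pb produced above exists:

```python
import sys

def list_graph_nodes(pb_path):
    """Print the name and op of every node in a frozen GraphDef."""
    import tensorflow as tf  # imported lazily; only needed when actually run
    graph_def = tf.compat.v1.GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    for node in graph_def.node:
        print(node.name, node.op)

if __name__ == "__main__":
    list_graph_nodes(sys.argv[1] if len(sys.argv) > 1 else "some-model.pb")
```

The first node printed is typically the graph input, and the last is the output.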

If it matters, the specific model I'm starting off with is here.

Per the ReadMe.md, the shape of the input is (1x3x416x416) and the output shape is (1x125x13x13).
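For reference, that NCHW input shape means an image loaded in the usual HxWxC layout has to be transposed before inference. A minimal NumPy sketch (the 416x416 size comes from the ReadMe; any normalization the model expects is not shown here):

```python
import numpy as np

def to_nchw(image_hwc):
    """Convert an HxWxC image to a 1xCxHxW float32 batch."""
    chw = np.transpose(image_hwc, (2, 0, 1)).astype(np.float32)
    return np.expand_dims(chw, axis=0)

# Dummy 416x416 RGB image standing in for a real frame.
dummy = np.zeros((416, 416, 3), dtype=np.uint8)
batch = to_nchw(dummy)
print(batch.shape)  # (1, 3, 416, 416)
```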

I got my answer. I was able to use the code below to complete the conversion.

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'model.pb',                # TensorFlow frozen graph
    input_arrays=['input.1'],  # name of the input node
    output_arrays=['218']      # name of the output node
)
# Allow TF ops that have no TFLite builtin equivalent
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
# Tell the converter which type of optimization techniques to use
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tf_lite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tf_lite_model)
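To sanity-check the result on a desktop before shipping it to Android, the file can be run through the TF Lite interpreter. A sketch, assuming the model.tflite written above and the shapes from the ReadMe (1x3x416x416 in, 1x125x13x13 out):

```python
import numpy as np

def run_tflite(model_path, batch):
    """Run one inference with the TF Lite interpreter."""
    import tensorflow as tf  # imported lazily; only needed when actually run
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], batch.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

if __name__ == "__main__":
    dummy = np.zeros((1, 3, 416, 416), dtype=np.float32)
    # Output should come back shaped (1, 125, 13, 13) per the ReadMe
    print(run_tflite("model.tflite", dummy).shape)
```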
