TensorFlow Lite: toco_convert for arbitrary sized input tensor
I am looking at converting my TensorFlow model to the FlatBuffer format (.tflite).
However, my model allows input of arbitrary size, i.e. you can classify one item or N items at once. When I try to convert, it throws an error, since one of my input/output tensors has a dimension of type NoneType.
Think of something like the TensorFlow MNIST tutorial, where in the computation graph our input x is of shape [None, 784].
From the tflite dev guide, you can convert your model to a FlatBuffer like so:
import tensorflow as tf

img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
out = tf.identity(val, name="out")
with tf.Session() as sess:
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
    open("converteds_model.tflite", "wb").write(tflite_model)
However, this does not work for me. A MWE could be:
import tensorflow as tf

img = tf.placeholder(name="inputs", dtype=tf.float32, shape=(None, 784))
out = tf.identity(img, name="out")
with tf.Session() as sess:
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
    open("converteds_model.tflite", "wb").write(tflite_model)
Error: TypeError: __int__ returned non-int (type NoneType)
Looking at the tf.contrib.lite.toco_convert docs, we have "input_tensors: List of input tensors. Type and shape are computed using foo.get_shape() and foo.dtype." So that is likely where the failure is. But I'm not sure if there is an argument I should be using, or something else, that would allow me to export a model like this.
This problem is already resolved in the newest converter code. You can pass an input tensor whose 1st dimension is None (the 1st dimension is usually the batch), and the converter will handle it correctly.
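As a concrete sketch of what that can look like on a recent TensorFlow release: this uses tf.compat.v1 together with tf.lite.TFLiteConverter.from_session, the successor to toco_convert; the added +1.0 op is only there so the graph is not a bare identity, and the file name is illustrative.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A [None, 784] input, as in the MNIST-style MWE above.
img = tf.placeholder(name="inputs", dtype=tf.float32, shape=(None, 784))
out = tf.identity(img + 1.0, name="out")

with tf.Session() as sess:
    # from_session replaces toco_convert; the None batch dimension is
    # accepted and exported as 1, to be resized at inference time.
    converter = tf.lite.TFLiteConverter.from_session(sess, [img], [out])
    tflite_model = converter.convert()
    open("converted_model.tflite", "wb").write(tflite_model)
```

The resulting .tflite file reports an input shape of [1, 784], with the batch dimension resizable at runtime.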
BTW, before invoking the interpreter, you can call interpreter.resize_tensor_input to change the batch size.
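A minimal self-contained sketch of that resize flow, assuming a recent TensorFlow: it builds a toy [None, 784] model in memory first; with a real model you would pass model_path to the Interpreter instead of model_content.

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build and convert a toy graph in memory so the example is self-contained.
img = tf.placeholder(name="inputs", dtype=tf.float32, shape=(None, 784))
out = tf.identity(img + 1.0, name="out")
with tf.Session() as sess:
    model = tf.lite.TFLiteConverter.from_session(sess, [img], [out]).convert()

interpreter = tf.lite.Interpreter(model_content=model)
inp = interpreter.get_input_details()[0]

# The None batch dimension was frozen to 1 in the flatbuffer; resize it
# to the batch size you actually want before allocating tensors.
interpreter.resize_tensor_input(inp["index"], [32, 784])
interpreter.allocate_tensors()

interpreter.set_tensor(inp["index"], np.zeros((32, 784), np.float32))
interpreter.invoke()
result = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

After the resize, result comes back with shape (32, 784), i.e. a batch of 32 classified in one invoke() call.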