
TensorFlow Lite: toco_convert for arbitrary sized input tensor

Looking at converting my TensorFlow model to the Flatbuf format (.tflite).

However, my model allows input of arbitrary size, i.e. you can classify one item or N items at once. When I try to convert, it throws an error since one of my input/output tensors has a NoneType dimension.

Think of something like the TensorFlow MNIST tutorial, where in the computation graph our input x is of shape [None, 784].

From the tflite dev guide, you can convert your model to FlatBuf like so:

import tensorflow as tf

img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
val = img + tf.constant([1., 2., 3.]) + tf.constant([1., 4., 4.])
out = tf.identity(val, name="out")

with tf.Session() as sess:
  tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
  open("converteds_model.tflite", "wb").write(tflite_model)

However, this does not work for me. An MWE could be:

import tensorflow as tf

img = tf.placeholder(name="inputs", dtype=tf.float32, shape=(None, 784))
out = tf.identity(img, name="out")


with tf.Session() as sess:
  tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])
  open("converteds_model.tflite", "wb").write(tflite_model)

Error: TypeError: __int__ returned non-int (type NoneType)

Looking at the tf.contrib.lite.toco_convert docs, we have "input_tensors: List of input tensors. Type and shape are computed using foo.get_shape() and foo.dtype." So that's likely where our failure is. But I'm not sure if there's an argument I should be using, or something else that would allow me to export a model like this.

This problem is already resolved in the newest converter code. You can pass an input tensor where the 1st dimension is None (the 1st dimension is usually batch), and the converter will handle it correctly.
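
As a minimal sketch (assuming a TensorFlow 1.x release recent enough to ship the updated converter, where tf.lite.TFLiteConverter.from_session supersedes toco_convert), the MWE above would be converted along these lines:

import tensorflow as tf

img = tf.placeholder(name="inputs", dtype=tf.float32, shape=(None, 784))
out = tf.identity(img, name="out")

with tf.Session() as sess:
  # The None batch dimension no longer raises TypeError; it is typically
  # exported with a batch size of 1 and can be resized at inference time.
  converter = tf.lite.TFLiteConverter.from_session(sess, [img], [out])
  tflite_model = converter.convert()
  open("converted_model.tflite", "wb").write(tflite_model)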

BTW, before invoking the interpreter, you can call interpreter.resize_tensor_input to change the batch size.
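
For example (a sketch only; the batch size of 32 and the converted_model.tflite path are illustrative):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
input_index = interpreter.get_input_details()[0]["index"]

# Grow the batch dimension to 32, then re-allocate buffers before inference.
interpreter.resize_tensor_input(input_index, [32, 784])
interpreter.allocate_tensors()

batch = np.zeros((32, 784), dtype=np.float32)  # stand-in for real input data
interpreter.set_tensor(input_index, batch)
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])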
