
How to do batching with TensorFlow Lite?

I have a custom CNN model, and I have converted it to the .tflite format and deployed it in my Android app. However, I can't figure out how to do batched inference with TensorFlow Lite.

From this Google doc, it seems you have to set the input format of your model. However, that doc uses a code example based on the Firebase API, which I'm not planning to use.

To be more specific:

I want to run inference on multiple 100x100x3 images at once, so the input shape is N x 100 x 100 x 3.

Question:

How do I do this with TF Lite?

You can just call the resizeInput API (Java) or ResizeInputTensor API (if you're using C++) to resize the input tensor's batch dimension before running inference.

For example, in Java:

interpreter.resizeInput(tensorIndex, new int[]{numBatch, 100, 100, 3});
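Putting this together, here is a minimal sketch of batched inference with the TF Lite Java `Interpreter`. It assumes a float32 model whose input is [1, 100, 100, 3] and whose output is [1, numClasses]; the model file name and `numClasses` are placeholders for your own model.

```java
// Sketch: batched inference with the TensorFlow Lite Java API.
// Assumes a float32 model with input [1, 100, 100, 3]; "numClasses"
// is a placeholder for your model's output width.
import org.tensorflow.lite.Interpreter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BatchedInference {
    static final int H = 100, W = 100, C = 3;
    static final int BYTES_PER_FLOAT = 4;

    // Bytes needed for a direct float32 buffer holding numBatch images.
    static int batchBufferBytes(int numBatch) {
        return numBatch * H * W * C * BYTES_PER_FLOAT;
    }

    static float[][] runBatch(Interpreter interpreter,
                              float[][][][] images, int numClasses) {
        int numBatch = images.length;
        // Grow the input tensor from [1,100,100,3] to [numBatch,100,100,3].
        interpreter.resizeInput(0, new int[]{numBatch, H, W, C});
        // Pack all images into one direct, native-ordered buffer.
        ByteBuffer input = ByteBuffer.allocateDirect(batchBufferBytes(numBatch))
                                     .order(ByteOrder.nativeOrder());
        for (float[][][] img : images)
            for (float[][] row : img)
                for (float[] px : row)
                    for (float v : px)
                        input.putFloat(v);
        // The output array's first dimension must match the new batch size.
        float[][] output = new float[numBatch][numClasses];
        interpreter.run(input, output);
        return output;
    }
}
```

Note that the output buffer must also be sized for the batch: after resizing the input to N images, `interpreter.run` writes N rows of results.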

Let us know if you have any problems with batching in TensorFlow Lite.
