This question is about running inference with a TFLite model converted from a standard Keras (TensorFlow) MobileNetV2 model.
tf version: 2.2.0
Why does the inference pipeline differ between the two? Can someone give the correct preprocessing steps for both the quantized and the non-quantized (floating-point) TFLite models, given that the original model was trained with 0-1 normalization?
Different models may have different preprocessing settings. If you're confident that the original model was trained with inputs normalized to (0, 1), simply modify the Android example code you found to match.
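For the floating-point model, a minimal sketch of that pipeline in Python looks like the following. The tiny `TinyModel` stand-in is hypothetical (it just pools and classifies, standing in for your real MobileNetV2); the point is the `x / 255.0` step, which maps raw uint8 pixels into [0, 1] instead of the (x - 127.5) / 127.5 scheme used by some examples.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for your converted MobileNetV2: a tiny model so the
# sketch is self-contained and fast to convert.
class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([3, 2], seed=0))

    @tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
    def __call__(self, x):
        pooled = tf.reduce_mean(x, axis=[1, 2])  # global average pool
        return tf.nn.softmax(pooled @ self.w)

m = TinyModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [m.__call__.get_concrete_function()], m)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Float pipeline for a model trained with (0, 1) inputs:
# scale raw uint8 pixels to [0, 1], NOT (x - 127.5) / 127.5.
raw_pixels = np.random.randint(0, 256, size=(1, 8, 8, 3), dtype=np.uint8)
x = raw_pixels.astype(np.float32) / 255.0

interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
print(probs.shape)  # (1, 2)
```

In the Android example this corresponds to changing the normalization constants used when filling the input buffer; the interpreter calls themselves stay the same.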
For quantized models, if you see a similar normalization step, change it in the same way. Sometimes a quantized model needs no preprocessing at all, because the author has folded the normalization step into the quantization step (combined, the two can amount to a no-op).
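To make the "no-op" point concrete, here is a hedged sketch of full-integer post-training quantization of the same hypothetical tiny model, calibrated with data already in [0, 1]. Because calibration saw the range [0, 1], the input quantization parameters come out to roughly scale ≈ 1/255 and zero_point ≈ 0, so quantizing `x / 255.0` just reproduces the original uint8 pixels, and you could feed raw pixels directly.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the real model, as in the float example.
class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([3, 2], seed=0))

    @tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
    def __call__(self, x):
        pooled = tf.reduce_mean(x, axis=[1, 2])
        return tf.nn.softmax(pooled @ self.w)

m = TinyModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [m.__call__.get_concrete_function()], m)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def rep_data():
    # Calibration data normalized to [0, 1], matching how the model trained.
    for _ in range(8):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
quant_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=quant_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

scale, zero_point = inp["quantization"]
print(scale, zero_point)  # roughly 1/255 and 0

# Quantize the normalized input with the model's own parameters. Since
# scale ~ 1/255 and zero_point ~ 0, q is (almost) the raw pixel values:
# the normalization and quantization steps cancel out.
raw_pixels = np.random.randint(0, 256, size=(1, 8, 8, 3), dtype=np.uint8)
x = raw_pixels.astype(np.float32) / 255.0
q = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)

interpreter.set_tensor(inp["index"], q)
interpreter.invoke()
probs_q = interpreter.get_tensor(out["index"])
```

If the model had instead been calibrated with (x - 127.5) / 127.5 inputs, the parameters would come out differently and the cancellation would not happen, which is why you must always read `input_details[0]["quantization"]` rather than assume a preprocessing scheme.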