
Correct pre-processing pipeline for inference from a TensorFlow Lite model

The question concerns inference from a TFLite model converted from a standard Keras/TensorFlow MobileNetV2 model.

tf version: 2.2.0

  1. The model was trained with 0-1 normalization, as provided in the documentation/example: here
  2. After conversion to TFLite (the non-quantized/optimized version), the Android sample uses (-1, 1) preprocessing, which can be found here in the Android documentation and also here in the Python documentation.

Why is there this difference in the inference pipelines? Can someone outline the correct steps for both the quantized and the non-quantized (floating-point) TFLite models when the original model was trained with 0-1 normalization?

Different models may use different preprocessing settings. If you're confident that the original model was trained with (0, 1) preprocessing, simply modify the Android example code you found (see the sketch after the link below).

https://github.com/tensorflow/examples/blob/40e3ac5b5c17ac75352b99747b8532272204365f/lite/codelabs/flower_classification/android/finish/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L28
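Concretely, the change would be a fragment of the linked ClassifierFloatMobileNet.java. This is a minimal sketch, assuming the constants and addPixelValue follow the usual pattern in the TFLite example classifiers (imgData is the input ByteBuffer from the example's base Classifier class):

    // Original constants in ClassifierFloatMobileNet.java map [0, 255] to (-1, 1):
    //   private static final float IMAGE_MEAN = 127.5f;
    //   private static final float IMAGE_STD = 127.5f;
    //
    // For a model trained with 0-1 normalization, change them so that
    // (pixel - IMAGE_MEAN) / IMAGE_STD maps [0, 255] to [0, 1]:
    private static final float IMAGE_MEAN = 0.0f;
    private static final float IMAGE_STD = 255.0f;

    @Override
    protected void addPixelValue(int pixelValue) {
      // (pixel - 0) / 255 == pixel / 255, i.e. 0-1 normalization per channel.
      imgData.putFloat(((pixelValue >> 16 & 0xFF) - IMAGE_MEAN) / IMAGE_STD); // R
      imgData.putFloat(((pixelValue >> 8 & 0xFF) - IMAGE_MEAN) / IMAGE_STD);  // G
      imgData.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);       // B
    }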

For quantized models, if you notice a similar normalization step, change it accordingly. Sometimes the preprocessing for a quantized model is nothing at all, because the author has folded the normalization step into the quantization step (combined, the two can be equivalent to a no-op).
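To illustrate the folding argument, here is a hedged sketch (the model path is a placeholder, and the availability of Tensor.quantizationParams() in your TFLite Java version is an assumption). TFLite dequantizes a uint8 input as real = scale * (q - zeroPoint); if the input tensor reports scale = 1/255 and zeroPoint = 0, a raw pixel p dequantizes to p / 255, which is exactly the 0-1 normalization, so feeding raw pixels with no extra step is correct:

    import org.tensorflow.lite.DataType;
    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.Tensor;

    import java.io.File;

    public class QuantPreprocessCheck {
      public static void main(String[] args) {
        // "model_quant.tflite" is a placeholder path, not from the original post.
        Interpreter interpreter = new Interpreter(new File("model_quant.tflite"));
        try {
          Tensor input = interpreter.getInputTensor(0);

          // TFLite dequantizes a uint8 input as: real = scale * (q - zeroPoint)
          Tensor.QuantizationParams qp = input.quantizationParams();
          float scale = qp.getScale();
          int zeroPoint = qp.getZeroPoint();

          System.out.printf("dtype=%s scale=%f zeroPoint=%d%n",
              input.dataType(), scale, zeroPoint);

          // With scale == 1/255 and zeroPoint == 0, a raw pixel p dequantizes
          // to p / 255 -- the 0-1 normalization is folded into quantization,
          // so the preprocessing step is effectively a no-op.
          if (input.dataType() == DataType.UINT8
              && zeroPoint == 0
              && Math.abs(scale - 1.0f / 255.0f) < 1e-6f) {
            System.out.println("Feed raw uint8 pixels; no normalization needed.");
          }
        } finally {
          interpreter.close();
        }
      }
    }

If the reported parameters differ from 1/255 and 0, the normalization is not a no-op, and the preprocessing must invert the quantization mapping instead.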
