YOLOv5 - Convert to tflite but make scores type float32 instead of int32

I am trying to use a custom object detection model, trained with YOLOv5 and converted to tflite, in an Android app (using this exact TensorFlow example).

The model has been converted to tflite using the YOLOv5 exporter like this:

```
python export.py --weights newmodel.pt --include tflite --int8 --agnostic-nms
```

This is the `export_tflite` function in export.py that exports the model as tflite:

```python
def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
    # YOLOv5 TensorFlow Lite export
    import tensorflow as tf

    LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
    batch_size, ch, *imgsz = list(im.shape)  # BCHW
    f = str(file).replace('.pt', '-fp16.tflite')

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.target_spec.supported_types = [tf.float16]
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if int8:
        from models.tf import representative_dataset_gen
        dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
        converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.target_spec.supported_types = []
        converter.inference_input_type = tf.uint8  # or tf.int8
        converter.inference_output_type = tf.uint8  # or tf.int8
        converter.experimental_new_quantizer = True
        f = str(file).replace('.pt', '-int8.tflite')
    if nms or agnostic_nms:
        converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)

    tflite_model = converter.convert()
    open(f, "wb").write(tflite_model)
    return f, None
```

The working example uses these tensors: [screenshot: working example model's tensors]

My tensors look like this: [screenshot: my custom model's tensors]
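For reference, the tensor signatures can also be printed without screenshots. Here is a minimal sketch, assuming the exported file is named `newmodel-int8.tflite` (the name export.py produces for `newmodel.pt` with `--int8`):

```python
# Minimal sketch: print each output tensor's name, shape, dtype and
# quantization parameters of the exported model.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='newmodel-int8.tflite')
interpreter.allocate_tensors()

for detail in interpreter.get_output_details():
    # 'dtype' is what the Android side sees; 'quantization' is (scale, zero_point)
    print(detail['name'], detail['shape'], detail['dtype'], detail['quantization'])
```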

The problem is that I don't know how to convert my output tensor's SCORE type from int32 to float32. Because of this, the app does not work with my custom model (I think this is the only problem stopping it from working).
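A conversion-side option that may be worth trying (a sketch, not verified on this exact model): in the `if int8:` branch of `export_tflite` above, leave `inference_input_type` and `inference_output_type` at their float32 defaults. TFLite then still quantizes the weights and activations to int8, but inserts quantize/dequantize ops at the model boundaries, so the input and output tensors stay float32:

```python
# Sketch: the if int8: branch of export_tflite with float32 I/O kept.
# Only the two inference_*_type lines differ from the original function.
if int8:
    from models.tf import representative_dataset_gen
    dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
    converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.target_spec.supported_types = []
    converter.inference_input_type = tf.float32   # was tf.uint8
    converter.inference_output_type = tf.float32  # was tf.uint8
    converter.experimental_new_quantizer = True
    f = str(file).replace('.pt', '-int8.tflite')
```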

The YOLOv5 model returns data in INT32 format, but TensorBuffer does not support the INT32 data type. To use on-device ML in Android with this example, use SSD models, because only SSD models are currently supported by the tflite library.
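If switching to an SSD model is not an option, a quantized output tensor can still be mapped back to float scores with the standard TFLite dequantization formula, real = scale * (quantized - zero_point). A minimal Python sketch (assuming the exported file name from above and a dummy all-zeros input; the same arithmetic applies on the Java/Kotlin side):

```python
# Sketch: run the exported model once and dequantize its first output.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='newmodel-int8.tflite')
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))  # dummy image
interpreter.invoke()

out = interpreter.get_output_details()[0]
raw = interpreter.get_tensor(out['index'])       # quantized values (e.g. uint8/int8)
scale, zero_point = out['quantization']          # per-tensor quantization params
scores = (raw.astype(np.float32) - zero_point) * scale  # float32 scores
```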
