Float16 quantized TFLite model not working for custom models?

I have a custom TensorFlow model which I converted into TFLite using float16 quantization as mentioned here.
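For reference, the conversion followed the documented float16 recipe, roughly like this (the SavedModel path is a placeholder; from_keras_model would be used for an in-memory Keras model):

import tensorflow as tf

# Load the model and enable float16 post-training quantization
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_fp16_model)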
But the input details of the TFLite model, as reported by the TFLite interpreter, are:

[{'name': 'input_1',
  'index': 0,
  'shape': array([  1, 256, 256,   3], dtype=int32),
  'shape_signature': array([ -1, 256, 256,   3], dtype=int32),
  'dtype': numpy.float32,
  'quantization': (0.0, 0),
  'quantization_parameters': {'scales': array([], dtype=float32),
   'zero_points': array([], dtype=int32),
   'quantized_dimension': 0},
  'sparsity_parameters': {}}]

while the output details are:

[{'name': 'Identity',
  'index': 636,
  'shape': array([  7,   1, 256, 256,   1], dtype=int32),
  'shape_signature': array([  7,  -1, 256, 256,   1], dtype=int32),
  'dtype': numpy.float32,
  'quantization': (0.0, 0),
  'quantization_parameters': {'scales': array([], dtype=float32),
   'zero_points': array([], dtype=int32),
   'quantized_dimension': 0},
  'sparsity_parameters': {}}]
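These details were obtained with the standard interpreter inspection calls, roughly:

import tensorflow as tf

# Load the converted model and query its I/O tensor metadata
interpreter = tf.lite.Interpreter(model_path="model_fp16.tflite")  # placeholder filename
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())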

Is something wrong with the conversion?

I also received this warning while converting the TF model to TFLite:

WARNING:absl:Found untraced functions such as _defun_call, _defun_call, _defun_call, _defun_call, _defun_call while saving (showing 5 of 63). These functions will not be directly callable after loading.

P.S. I also tried this quantization, but received the same input/output details for that TFLite model.

Float16 and post-training quantization do not modify the input/output tensors, only the intermediate weight tensors, so the behavior above looks intended. If you want to fully quantize your model, try full integer quantization; you can find the details here.
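A minimal sketch of full integer quantization, assuming a SavedModel at a placeholder path; the random calibration data here only stands in for real samples from your training set:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Calibration data shaped like the model input (1, 256, 256, 3).
    # Random values are a placeholder; use real training samples instead.
    for _ in range(100):
        yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict ops to int8 kernels and quantize the I/O tensors as well
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_int8_model = converter.convert()

With this recipe the interpreter should report int8 input/output tensors with non-trivial quantization parameters.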

