Float16 quantized TFLite model not working for custom models?
I have a custom TensorFlow model which I converted to TFLite using float16 quantization, as mentioned here.
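For reference, the conversion was done with roughly the following (a minimal sketch; model stands for my trained Keras model, and the file paths are placeholders):

import tensorflow as tf

# Load the trained model (assumption: it is a Keras model saved to disk)
model = tf.keras.models.load_model("my_model")  # hypothetical path

# Post-training float16 quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("model_f16.tflite", "wb") as f:
    f.write(tflite_model)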
But the input details of the TFLite model, as reported by the TFLite interpreter, are
[{'name': 'input_1',
'index': 0,
'shape': array([ 1, 256, 256, 3], dtype=int32),
'shape_signature': array([ -1, 256, 256, 3], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}}]
while the output details are
[{'name': 'Identity',
'index': 636,
'shape': array([ 7, 1, 256, 256, 1], dtype=int32),
'shape_signature': array([ 7, -1, 256, 256, 1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}}]
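(The details above were printed with roughly the following; the model path is just a placeholder.)

interpreter = tf.lite.Interpreter(model_path="model_f16.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())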
Is something wrong with the conversion?
I also received this warning while converting the TF model to TFLite:
WARNING:absl:Found untraced functions such as _defun_call, _defun_call, _defun_call, _defun_call, _defun_call while saving (showing 5 of 63). These functions will not be directly callable after loading.
PS: I also tried this quantization, but received the same input/output details for that TFLite model.
Float16 and post-training quantization do not modify the input/output tensors, only the intermediate weight tensors. The behavior above looks intended. If you want to fully quantize your model, try full integer quantization. You can find the details here.
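In case it helps, a minimal full integer quantization sketch (assuming the same Keras model and a 1x256x256x3 float input; the representative-dataset generator below is a placeholder that should be replaced with real calibration samples):

import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a few calibration samples matching the model's input shape
    # (assumption: 256x256x3 float32 input; use real data in practice)
    for _ in range(100):
        yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force integer-only ops and integer input/output tensors
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model_int8 = converter.convert()

With this, the interpreter's input/output details should report an int8 dtype together with non-trivial quantization parameters (scale and zero point).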