
How to convert a LeNet model (.h5) to .tflite

How do I correctly convert a LeNet model (32x32 input, 5 layers, 10 classes) to TensorFlow Lite? I used the lines of code below, but on Android the model gives really bad confidences: they are all around 0.1, i.e. 10%.

This is the code I used:

import tensorflow as tf

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.post_training_quantize = True  # enables post-training quantization
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

The .h5 file above predicts images with good confidence and accuracy. Or should I ask: does TensorFlow Lite not support a custom model such as LeNet? Why does the .tflite file do so much worse than the .h5 file?

If the .tflite file is generated without mistakes, it doesn't matter whether the model is called LeNet or anything else. Quantization does cause a small decrease in accuracy, but not a major difference like the one you are describing. I would check how you build the ByteBuffer that you feed into the interpreter. If you trained on grayscale images, you have to divide the channel sum by 3 and by 255; for colored images you only divide by 255. And if you did not use pixel normalization during training, then do not divide by 255 when converting the bitmap to a ByteBuffer either. So your code would be like:

private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(ModelConfig.MODEL_INPUT_SIZE);
    byteBuffer.order(ByteOrder.nativeOrder());
    int[] pixels = new int[ModelConfig.INPUT_WIDTH * ModelConfig.INPUT_HEIGHT];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : pixels) {
        // Unpack the channels from the packed ARGB int.
        float rChannel = (pixel >> 16) & 0xFF;
        float gChannel = (pixel >> 8) & 0xFF;
        float bChannel = (pixel) & 0xFF;
        // No division: training did not normalize pixel values.
        float pixelValue = (rChannel + gChannel + bChannel);
        byteBuffer.putFloat(pixelValue);
    }
    return byteBuffer;
}

and not:

private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(ModelConfig.MODEL_INPUT_SIZE);
    byteBuffer.order(ByteOrder.nativeOrder());
    int[] pixels = new int[ModelConfig.INPUT_WIDTH * ModelConfig.INPUT_HEIGHT];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : pixels) {
        float rChannel = (pixel >> 16) & 0xFF;
        float gChannel = (pixel >> 8) & 0xFF;
        float bChannel = (pixel) & 0xFF;
        // Dividing by 255 is only correct if training also normalized pixels to [0, 1].
        float pixelValue = (rChannel + gChannel + bChannel) / 255.f;
        byteBuffer.putFloat(pixelValue);
    }
    return byteBuffer;
}
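The difference between the two versions is easiest to see with concrete numbers. Here is a minimal pure-Python sketch (for illustration only) that mirrors the bit-shifts in the Java loop above:

```python
def pixel_to_input(argb, normalize):
    """Mirror the Java loop: unpack R, G, B from a packed ARGB int and sum them."""
    r = (argb >> 16) & 0xFF
    g = (argb >> 8) & 0xFF
    b = argb & 0xFF
    value = float(r + g + b)  # 0..765 for an RGB pixel
    return value / 255.0 if normalize else value

# A mid-gray pixel, R = G = B = 128:
pixel = (0xFF << 24) | (128 << 16) | (128 << 8) | 128
raw = pixel_to_input(pixel, normalize=False)    # 384.0
scaled = pixel_to_input(pixel, normalize=True)  # ≈ 1.51
```

A model trained on inputs in one range but fed the other will saturate its activations, which produces exactly the kind of near-uniform ~10% softmax output described in the question.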

It's because of quantization. Quantization reduces the size of the model, but it can also reduce accuracy. Try not to quantize the model. Try this:

import tensorflow as tf

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

It might increase the size of the .tflite model, but it won't degrade the accuracy to that extent.
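To rule out the conversion itself, you can also run the generated .tflite model with the Python `tf.lite.Interpreter` and compare its output to the original Keras model on the same input: if the two agree, the problem is in the Android preprocessing, not the conversion. A minimal sketch (the file name and LeNet input shape are taken from the question):

```python
import numpy as np
import tensorflow as tf

def run_tflite(model_content, x):
    """Run one float32 batch through a TFLite flatbuffer and return the output tensor."""
    interpreter = tf.lite.Interpreter(model_content=model_content)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Example usage with the files from the question:
# model = tf.keras.models.load_model("model.h5")
# tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
# x = np.random.rand(1, 32, 32, 1).astype(np.float32)  # LeNet-style input
# print(np.max(np.abs(run_tflite(tflite_model, x) - model(x).numpy())))
```

Without quantization, the maximum difference between the two outputs should be tiny (on the order of float rounding error).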
