
How to convert a LeNet model from .h5 to .tflite

How do I correctly convert a LeNet model (32x32 input, 5 layers, 10 classes) to TensorFlow Lite? I used the lines of code below, but on Android it gives me really bad confidences: they are all around 0.1, or 10%.

This is the code I used:

import tensorflow as tf

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# enable post-training quantization (TF 1.x-style flag)
converter.post_training_quantize = True
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

The .h5 file above can predict images with good confidence and accuracy. Or should I ask: does TensorFlow Lite not support a custom model like LeNet? Why does the .tflite file do so much worse than the .h5?

If the .tflite file is generated without mistakes, it doesn't matter whether the model is called LeNet or anything else. Quantization does cause a small decrease in accuracy, but not a major difference like the one you are describing. I would check how you are building the ByteBuffer that you feed into the Interpreter. If you are using grayscale images you have to divide by 3 × 255; for colored images it is only /255. If you haven't used pixel normalization during training, then do not divide by 255 when converting the bitmap to a ByteBuffer. So your code would be like:

private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    // MODEL_INPUT_SIZE must be INPUT_WIDTH * INPUT_HEIGHT * 4 bytes (one float per pixel)
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(ModelConfig.MODEL_INPUT_SIZE);
    byteBuffer.order(ByteOrder.nativeOrder());
    int[] pixels = new int[ModelConfig.INPUT_WIDTH * ModelConfig.INPUT_HEIGHT];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : pixels) {
        // extract the R, G and B channels from the packed ARGB int
        float rChannel = (pixel >> 16) & 0xFF;
        float gChannel = (pixel >> 8) & 0xFF;
        float bChannel = (pixel) & 0xFF;
        // raw channel sum, no /255 - for a model trained without pixel normalization
        float pixelValue = (rChannel + gChannel + bChannel);
        byteBuffer.putFloat(pixelValue);
    }
    return byteBuffer;
}

and not:

private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(ModelConfig.MODEL_INPUT_SIZE);
    byteBuffer.order(ByteOrder.nativeOrder());
    int[] pixels = new int[ModelConfig.INPUT_WIDTH * ModelConfig.INPUT_HEIGHT];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : pixels) {
        float rChannel = (pixel >> 16) & 0xFF;
        float gChannel = (pixel >> 8) & 0xFF;
        float bChannel = (pixel) & 0xFF;
        // /255 normalization - only correct if training used the same scaling
        float pixelValue = (rChannel + gChannel + bChannel) / 255.f;
        byteBuffer.putFloat(pixelValue);
    }
    return byteBuffer;
}
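
(As an aside: for a LeNet trained on single-channel input scaled to [0, 1], the per-pixel value would typically be (rChannel + gChannel + bChannel) / 3.f / 255.f, which is the 3 × 255 divisor mentioned above. Either way, the formula here has to mirror exactly the preprocessing used during training.)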

It's because of quantization.
It reduces the size of the model, and the accuracy can drop along with it. Try not to quantize the model.
Try this:

import tensorflow as tf

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

It might increase the size of the .tflite model, but it won't degrade the accuracy to that extent.
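
(If you do want post-training quantization later, note that with the TF 2.x converter the supported switch is converter.optimizations; post_training_quantize is the old TF 1.x flag. A minimal sketch, assuming the same file names as above:)

import tensorflow as tf

model = tf.keras.models.load_model('model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# TF 2.x post-training (dynamic range) quantization flag
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_model)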
