
Tensorflow lite retrained model : Android application crashes after replacing my model

I am following this TensorFlow Lite tutorial: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#5

The Android application provided here works fine when I install and run it on an Android phone. But when I replace the model with my newly created flower model, it always crashes. Here are the logs:

05-31 22:55:46.492 581-581/? I/art: Late-enabling -Xcheck:jni
05-31 22:55:47.484 581-581/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Created a Tensorflow Lite Image Classifier.
05-31 22:55:47.496 581-598/android.example.com.tflitecamerademo D/OpenGLRenderer: Use EGL_SWAP_BEHAVIOR_PRESERVED: true
05-31 22:55:47.657 581-598/android.example.com.tflitecamerademo I/Adreno-EGL: <qeglDrvAPI_eglInitialize:379>: EGL 1.4 QUALCOMM build:  (Ifd751822f5)
    OpenGL ES Shader Compiler Version: XE031.06.00.05
    Build Date: 01/26/16 Tue
    Local Branch: AU12_SBA
    Remote Branch: 
    Local Patches: 
    Reconstruct Branch: 
05-31 22:55:47.664 581-598/android.example.com.tflitecamerademo I/OpenGLRenderer: Initialized EGL, version 1.4
05-31 22:55:47.892 581-581/android.example.com.tflitecamerademo I/CameraManagerGlobal: Connecting to camera service
05-31 22:55:48.010 581-581/android.example.com.tflitecamerademo I/CameraManager: Using legacy camera HAL.
05-31 22:55:48.395 581-597/android.example.com.tflitecamerademo I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
05-31 22:55:48.395 581-648/android.example.com.tflitecamerademo I/RequestThread-0: Configure outputs: 1 surfaces configured.
05-31 22:55:48.395 581-648/android.example.com.tflitecamerademo D/Camera: app passed NULL surface
05-31 22:55:48.469 581-581/android.example.com.tflitecamerademo I/Choreographer: Skipped 35 frames!  The application may be doing too much work on its main thread.
05-31 22:55:48.555 581-597/android.example.com.tflitecamerademo I/CameraDeviceState: Legacy camera service transitioning to state IDLE
05-31 22:55:48.633 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to put values into ByteBuffer: 41
05-31 22:55:48.801 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to run model inference: 169
05-31 22:55:48.853 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to put values into ByteBuffer: 43
05-31 22:55:48.985 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to run model inference: 133
05-31 22:55:48.987 581-597/android.example.com.tflitecamerademo I/RequestQueue: Repeating capture request set.
05-31 22:55:48.993 581-648/android.example.com.tflitecamerademo W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
    Only received metering rectangles with weight 0.
    Only received metering rectangles with weight 0.
05-31 22:55:49.033 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to put values into ByteBuffer: 40
05-31 22:55:49.159 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to run model inference: 126
05-31 22:55:49.212 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to put values into ByteBuffer: 42
05-31 22:55:49.332 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to run model inference: 121
05-31 22:55:49.385 581-597/android.example.com.tflitecamerademo D/TfLiteCameraDemo: Timecost to put values into ByteBuffer: 46
05-31 22:55:49.545 581-597/android.example.com.tflitecamerademo A/libc: Fatal signal 7 (SIGBUS), code 1, fault addr 0xb946ac98 in tid 597 (CameraBackgroun)

It would be great if someone could provide any input regarding this.

Thanks for looking into the issue, and sorry about the late reply. Here are the steps I followed: 1. Retrained the model using this tutorial: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html?index=..%2F..%2Findex#3

Command to create the model:

python -m scripts.retrain \
--bottleneck_dir=tf_files/bottlenecks \
--how_many_training_steps=500 \
--model_dir=tf_files/models/ \
--summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
--output_graph=tf_files/retrained_graph.pb \
--output_labels=tf_files/retrained_labels.txt \
--architecture="${ARCHITECTURE}" \
--image_dir=/home/ganesh/Documents/Developement/MachineLearning/new_approach/flower_photos_subset

The image size I used is 224 and the architecture is mobilenet_0.50_224.

I tested the retrained model and it works great using this command:

python -m scripts.label_image \
--graph=tf_files/retrained_graph.pb  \
--image=/home/ganesh/Documents/Developement/MachineLearning/new_approach/flower_images/flower.jpeg

It gives the correct result. Then I converted it into a TensorFlow Lite model using:

toco \
--input_file=/tmp/output_graph.pb \
--output_file=/tmp/graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,224,224,3 \
--input_array=input \
--output_array=final_result \
--inference_type=FLOAT \
--input_data_type=FLOAT

The files were generated successfully, but when I replaced them in the Android application, it crashed. Since I can test the retrained model from the command line and it gives the correct results, I feel the problem is in converting it into the Lite format (for Android).

The issue is that you are converting the TF model to a float TFLite model via toco --input_data_type=FLOAT, whereas the TFLite poets-2 app feeds ByteBuffer input to the model (it converts the image Bitmap to a ByteBuffer). Originally the app used a quantized MobileNet TFLite model, which expected byte input. When you replaced that with your model, the model started expecting float input but the app kept feeding it bytes. Thus it crashed.
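To make the mismatch concrete, here is a rough Java sketch (not the demo's exact code; class and constant names are made up for illustration) of how the input buffer is filled in each case. A quantized MobileNet takes one byte per channel, while a float model needs four bytes per channel and normalized values; the 224x224 size matches your mobilenet_0.50_224 architecture, and the mean/std of 128 used below is only an assumption, so check it against your retraining settings.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;

// Illustrative sketch only: the byte-per-channel layout the demo app writes
// versus the float layout a FLOAT tflite model expects.
class InputBufferSketch {
    static final int BATCH = 1, SIZE = 224, CHANNELS = 3;

    // Quantized model (what the original demo ships with): 1 byte per channel.
    static ByteBuffer quantizedInput(Bitmap bitmap) {
        ByteBuffer buf = ByteBuffer.allocateDirect(BATCH * SIZE * SIZE * CHANNELS);
        buf.order(ByteOrder.nativeOrder());
        int[] pixels = new int[SIZE * SIZE];
        bitmap.getPixels(pixels, 0, SIZE, 0, 0, SIZE, SIZE);
        for (int pixel : pixels) {
            buf.put((byte) ((pixel >> 16) & 0xFF)); // R
            buf.put((byte) ((pixel >> 8) & 0xFF));  // G
            buf.put((byte) (pixel & 0xFF));         // B
        }
        return buf;
    }

    // Float model (what your toco command produced): 4 bytes per channel,
    // normalized values (mean/std of 128 here is an assumption).
    static ByteBuffer floatInput(Bitmap bitmap) {
        ByteBuffer buf = ByteBuffer.allocateDirect(4 * BATCH * SIZE * SIZE * CHANNELS);
        buf.order(ByteOrder.nativeOrder());
        int[] pixels = new int[SIZE * SIZE];
        bitmap.getPixels(pixels, 0, SIZE, 0, 0, SIZE, SIZE);
        for (int pixel : pixels) {
            buf.putFloat((((pixel >> 16) & 0xFF) - 128f) / 128f);
            buf.putFloat((((pixel >> 8) & 0xFF) - 128f) / 128f);
            buf.putFloat(((pixel & 0xFF) - 128f) / 128f);
        }
        return buf;
    }
}

The output side differs in the same way: the quantized demo reads the result into a byte array, while a float model produces float probabilities. So the buffers the app hands over are a quarter of the size the float model expects, which is consistent with the SIGBUS in your log rather than a clean Java exception.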

The other application that you mentioned, the TFMobile poets-2 app, works correctly since it converts the bitmaps to a float[] rather than a ByteBuffer.
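For comparison, here is a rough sketch of that TFMobile path using org.tensorflow.contrib.android.TensorFlowInferenceInterface (treat the exact calls as approximate for your TensorFlow version; the asset filename is an assumption, while the input/final_result node names match the ones in your toco command). Because the input is a float[] from the start, a float retrained_graph.pb works there without any byte/float mismatch.

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

// Sketch of the TFMobile-style classifier: feed a float[], run, fetch floats.
class TfMobileSketch {
    private static final int INPUT_SIZE = 224;
    private final TensorFlowInferenceInterface inference;

    TfMobileSketch(AssetManager assets) {
        // Asset path is an assumed example; use whatever your app bundles.
        inference = new TensorFlowInferenceInterface(assets, "file:///android_asset/retrained_graph.pb");
    }

    float[] classify(float[] pixelValues, int numLabels) {
        float[] outputs = new float[numLabels];
        inference.feed("input", pixelValues, 1, INPUT_SIZE, INPUT_SIZE, 3);
        inference.run(new String[] {"final_result"});
        inference.fetch("final_result", outputs);
        return outputs;
    }
}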

So if you want to go back to your first application, you can quantize your retrained graph first in TensorFlow, tell toco that your input is quantized as well (see the toco command-line examples here), and then try it out again.

(This script is from the TensorFlow repository, but it is not included in the default installation).

python -m scripts.quantize_graph \
--input=tf_files/optimized_graph.pb \
--output=tf_files/rounded_graph.pb \
--output_node_names=final_result \
--mode=weights_rounded

However, note that quantizing the graph post-training might result in a loss of accuracy, so definitely measure that. Another option is to insert fake quantization ops into the graph during training before converting to a quantized graph. That ensures less loss of accuracy, but it is a lot more work.

I created the custom model in the same way, and this time I tried it with a different Android application (TFMobile), and it worked :) Here is the tutorial link: here
