
TensorFlow Lite iOS camera example does not work with retrained MobileNet model

I'm trying to run the TensorFlow Lite camera example with a retrained MobileNet model.

I successfully ran the iOS camera app following the instructions and this fix. The app works as expected with the stock mobilenet_v1_1.0_224.tflite model.

I install TensorFlow inside a virtualenv:

pip3 install -U virtualenv
virtualenv --system-site-packages -p python3 ./venv
source ./venv/bin/activate
pip install --upgrade pip
pip install --upgrade tensorflow==1.12.0
pip install --upgrade tensorflow-hub

I now want to retrain the model on the flowers dataset. I download the flower_photos folder and run:

python retrain.py \
    --bottleneck_dir=bottleneck \
    --how_many_training_steps=400 \
    --model_dir=model \
    --output_graph=pola_retrained.pb \
    --output_labels=pola_retrained_labels.txt \
    --tfhub_module=https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/quantops/feature_vector/1 \
    --image_dir=flower_photos

Note: I can successfully test the retrained model using the label_image.py script.
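
Before converting, it also helps to confirm the input and output node names that toco will be given. A minimal sketch, assuming TF 1.12 and the pola_retrained.pb produced above (retrain.py names its result node final_result by default):

import tensorflow as tf

# Load the frozen retrained graph and list its input placeholders and the
# final result node, so --input_arrays/--output_arrays can be verified.
graph_def = tf.GraphDef()
with tf.gfile.GFile('pola_retrained.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Placeholder' or node.name == 'final_result':
        print(node.name, node.op)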

I convert the retrained model to the TFLite format:

toco \
  --graph_def_file=pola_retrained.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=mobilenet_v1_1.0_224.tflite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_arrays=Placeholder \
  --output_arrays=final_result \
  --input_shapes=1,224,224,3
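
After conversion, the .tflite file can be sanity-checked from Python. A small sketch, assuming TF 1.12, where the interpreter lives under tf.contrib.lite (newer releases expose it as tf.lite.Interpreter); for the five flower classes the output shape should come back as [1, 5]:

import tensorflow as tf

# Load the converted model and print its tensor layouts.
interpreter = tf.contrib.lite.Interpreter(model_path='mobilenet_v1_1.0_224.tflite')
interpreter.allocate_tensors()

print(interpreter.get_input_details())   # expect shape [1, 224, 224, 3]
print(interpreter.get_output_details())  # expect shape [1, 5] for five flower labels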

I copy both the new model and the labels file to the iOS app. I modify the app parameters in CameraExampleViewController.mm as follows:

// These dimensions need to match those the model was trained with.
const int wanted_input_width = 224;
const int wanted_input_height = 224;
const int wanted_input_channels = 3;
const float input_mean = 128.0f;
const float input_std = 128.0f;
const std::string input_layer_name = "input";
const std::string output_layer_name = "final_result";
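
To rule out the model itself, the preprocessing the app performs ((pixel - input_mean) / input_std with mean = std = 128) can be reproduced in Python. A hedged sketch, again with the TF 1.12 interpreter; the Pillow import and the test.jpg path are illustrative, not from the original post:

import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.contrib.lite.Interpreter(model_path='mobilenet_v1_1.0_224.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize and normalize exactly as the iOS app does: (pixel - 128) / 128.
img = Image.open('test.jpg').convert('RGB').resize((224, 224))
x = (np.asarray(img, dtype=np.float32) - 128.0) / 128.0
interpreter.set_tensor(input_details[0]['index'], x[np.newaxis, ...])
interpreter.invoke()

probs = interpreter.get_tensor(output_details[0]['index'])[0]
print(probs.argmax(), probs.max())  # index must stay within the label count; max <= 1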

The app crashes: the index of the recognized object falls outside the range of trained labels, and the reported confidence is above 1.

The TensorFlow Lite camera example hardcodes the output tensor size to 1000. If you run the example with a retrained model that has fewer output classes, the app indexes past the end of the actual output tensor, which crashes it and explains both the out-of-range index and the confidence above 1. Replace the following code in CameraExampleViewController.mm:

const int output_size = 1000;

with:

// Read the output size from the output tensor instead of hardcoding it.
const int output_tensor_index = interpreter->outputs()[0];
TfLiteTensor* output_tensor = interpreter->tensor(output_tensor_index);
TfLiteIntArray* output_dims = output_tensor->dims;
// For this classifier the output shape is {1, num_labels}.
assert(output_dims->size == 2);
const int output_size = output_dims->data[1];

The code above fixes the crash by reading the output size from the model's output dimensions instead of relying on a hardcoded value. The appropriate PR has been submitted.
