
Output predicted bounding boxes from a model trained with the TensorFlow Object Detection API

I used the Google TensorFlow Object Detection API (https://github.com/tensorflow/models) to train the Faster R-CNN Inception v2 model on my own dataset, writing some of my own scripts in Python 3. It works fairly well on my videos, and now I want to output the predicted bounding boxes so I can calculate mAP. Is there any way to do this?

I have three files generated from training:

  1. model.ckpt-6839.data-00000-of-00001
  2. model.ckpt-6839.index
  3. model.ckpt-6839.meta

Are the predicted boxes contained in one of these files? Or are they stored somewhere else? Or do I need to write extra code to extract the coordinates?

The files you listed are checkpoint files. You can use them to export a frozen graph and then run prediction on input images.
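As a sketch, the export step in the TF 1.x Object Detection API uses the `export_inference_graph.py` script shipped in `models/research/object_detection`. The paths below (the `training/` directory and the pipeline config filename) are placeholders; substitute your own:

```shell
# Run from models/research/, with object_detection on PYTHONPATH.
# --trained_checkpoint_prefix is the common prefix of the three
# checkpoint files, without the .data/.index/.meta suffix.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/faster_rcnn_inception_v2.config \
    --trained_checkpoint_prefix training/model.ckpt-6839 \
    --output_directory exported_model
```

This writes `frozen_inference_graph.pb` (plus a `saved_model/` directory) into `exported_model/`.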

Once you have obtained the frozen graph, you can use the object_detection_tutorial.ipynb notebook to run prediction on input images. In that notebook, the function run_inference_for_single_image returns an output dict for each image, and the detection boxes are contained in it.
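The `detection_boxes` in that output dict are normalized `[ymin, xmin, ymax, xmax]` coordinates in [0, 1], so to feed them into an mAP tool you typically need to filter low-confidence detections and scale the boxes to pixel coordinates. A minimal sketch, using a hypothetical output dict with the same keys the notebook's function returns:

```python
import numpy as np

# Hypothetical output dict in the shape returned by
# run_inference_for_single_image: normalized [ymin, xmin, ymax, xmax].
output_dict = {
    "detection_boxes": np.array([
        [0.10, 0.20, 0.50, 0.60],
        [0.05, 0.05, 0.15, 0.25],
    ]),
    "detection_scores": np.array([0.92, 0.30]),
    "detection_classes": np.array([1, 2], dtype=np.int64),
}

def boxes_to_pixels(output_dict, image_height, image_width, score_threshold=0.5):
    """Keep confident detections and convert normalized boxes to
    pixel coordinates as [xmin, ymin, xmax, ymax]."""
    keep = output_dict["detection_scores"] >= score_threshold
    ymin, xmin, ymax, xmax = output_dict["detection_boxes"][keep].T
    pixel_boxes = np.stack(
        [xmin * image_width, ymin * image_height,
         xmax * image_width, ymax * image_height], axis=1)
    return (pixel_boxes,
            output_dict["detection_classes"][keep],
            output_dict["detection_scores"][keep])

pixel_boxes, classes, scores = boxes_to_pixels(output_dict, 480, 640)
# pixel_boxes[0] -> [128., 48., 384., 240.] for a 640x480 frame
```

From there you can write one line per detection (class, score, box) in whatever format your mAP evaluation script expects.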
