I am using the TensorFlow Object Detection API to fine-tune a pretrained model from the model zoo for custom object detection. Once my model has converged, I use eval_util.py with EvalConfig.metrics_set='open_images_V2_detection_metrics' to obtain the mAP (and per-class APs), which lets me measure the quality of my model.
But mAP alone is not enough for my purposes. For better analysis, I want the exact breakdown of my model's results into true positives, false positives, and false negatives. I want to see this breakdown in terms of actual test images — that is, I want my test images to be automatically sorted into those three groups.
How can I do that?
I tried searching through TensorFlow's official documentation and, to some extent, through the relevant Python files on GitHub, but I haven't found a way yet.
I think what you are looking for is a confusion matrix. Take a look at this link: Tensorflow Confusion Matrix. You can evaluate your predictions with this function.
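For reference, tf.math.confusion_matrix takes flat vectors of true and predicted class labels and returns a matrix whose rows index the true label and whose columns index the predicted label. Here is a minimal pure-Python sketch of the same computation (the label and prediction values are made up for illustration):

```python
# Sketch of what tf.math.confusion_matrix(labels, predictions, num_classes)
# computes: rows index the true label, columns index the predicted label.
def confusion_matrix(labels, predictions, num_classes):
    cm = [[0] * num_classes for _ in range(num_classes)]
    for true, pred in zip(labels, predictions):
        cm[true][pred] += 1
    return cm

labels      = [0, 1, 2, 1]  # ground-truth classes (illustration data)
predictions = [0, 2, 2, 1]  # model outputs       (illustration data)
cm = confusion_matrix(labels, predictions, num_classes=3)
# cm[1][2] == 1: one class-1 example was misclassified as class 2
```

Note that a confusion matrix applies to classification labels; for object detection you first need to match predicted boxes to ground-truth boxes (e.g. by IoU) before you can count anything.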
We have run into this problem too. We found some clues in object_detection/utils/metrics.py — maybe you can give it a try. Please share your solution if you find one!
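One way to get the per-image TP/FP/FN breakdown asked for above is to greedily match each predicted box to a ground-truth box by IoU at a fixed threshold, as the standard detection metrics do internally. The sketch below is my own illustration, not code from the Object Detection API: it assumes boxes in [ymin, xmin, ymax, xmax] format, and the function names and the 0.5 threshold are arbitrary choices.

```python
def iou(box_a, box_b):
    # Boxes are [ymin, xmin, ymax, xmax]; returns intersection-over-union.
    ya, xa = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    yb, xb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, yb - ya) * max(0, xb - xa)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(gt_boxes, det_boxes, iou_thresh=0.5):
    """Greedily match detections to ground truth for one image.
    Returns (tp, fp, fn) counts at the given IoU threshold."""
    matched_gt = set()
    tp = 0
    for det in det_boxes:
        best_iou, best_i = 0.0, -1
        for i, gt in enumerate(gt_boxes):
            if i in matched_gt:
                continue  # each ground-truth box matches at most once
            overlap = iou(det, gt)
            if overlap > best_iou:
                best_iou, best_i = overlap, i
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_i)
    fp = len(det_boxes) - tp   # unmatched detections
    fn = len(gt_boxes) - tp    # unmatched ground truth
    return tp, fp, fn
```

With per-image counts in hand, physically sorting the test set is straightforward: copy an image into a "false_negatives" folder whenever fn > 0 (e.g. with shutil.copy), and similarly for the other groups.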