For example, I trained a model (Naive Bayes, SVM, Random Forest, or something else) with the following scores:
Model:
             precision    recall  f1-score   support

        neg     0.0622    0.9267    0.1166       191
        pos     0.9986    0.7890    0.8815     12647

avg / total     0.98      0.79      0.87       12838
My boss tells me that the precision for neg is too low, and that a recall of 60% would be acceptable; it doesn't need to be so high. So I need a way to get the best possible precision while limiting recall to 60%, but I didn't find such a feature in sklearn.
Is there any way to train a model for the best precision while recall is constrained to a specific value? (Or to reach 20% precision on neg, regardless of recall.)
sklearn exposes the precision-recall tradeoff; see http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
One method is to call precision_recall_curve() on your model's predicted scores and then pick the point on the curve that gives your desired recall, using its threshold for prediction.
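A minimal sketch of that approach, assuming a binary problem like the one above (the classifier and the synthetic data are placeholders, not the asker's actual model): treat the rare neg class as the positive label, compute the curve, and among all operating points with recall >= 60% keep the one with the highest precision.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for the neg/pos problem above;
# class 0 plays the role of the rare "neg" class.
X, y = make_classification(n_samples=5000, weights=[0.05, 0.95],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Treat "neg" (label 0) as the positive class: score = P(class 0).
y_neg = (y_test == 0).astype(int)
scores_neg = clf.predict_proba(X_test)[:, 0]

precision, recall, thresholds = precision_recall_curve(y_neg, scores_neg)

# precision/recall have one more entry than thresholds; drop the final
# point, then among operating points with recall >= 0.60 pick the one
# with the highest precision.
mask = recall[:-1] >= 0.60
best = np.argmax(np.where(mask, precision[:-1], -np.inf))
print(f"threshold={thresholds[best]:.3f}  "
      f"precision={precision[best]:.3f}  recall={recall[best]:.3f}")
```

At prediction time, flag a sample as neg whenever its score `clf.predict_proba(X)[:, 0]` is at least `thresholds[best]`, instead of using the default 0.5 cutoff from `predict()`.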