How to make an sklearn model reach a predefined precision or recall on some class?
For example, I trained a Naive Bayes (or SVM, RandomForest, or similar) model with the scores below:
Model:
             precision  recall  f1-score  support
neg            0.0622   0.9267    0.1166      191
pos            0.9986   0.7890    0.8815    12647
avg / total    0.98     0.79      0.87      12838
My boss tells me that the precision of neg is too low, and that he can accept a recall of 60%; it doesn't need to be so high. So I need a way to get the best precision while limiting recall to 60%, but I couldn't find a similar feature in sklearn.
Is there any way to train a model with the best precision while recall is constrained to a specific value? (Or to reach 20% precision on neg, without caring about recall.)
sklearn implements the precision-recall tradeoff as shown here: http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
One method is to use precision_recall_curve() and then find a point on the curve with your desired recall.
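A minimal sketch of that approach, using a synthetic imbalanced dataset and a GaussianNB classifier as stand-ins for the questioner's data and model (the dataset, class weights, and 60% recall floor are assumptions for illustration): treat the rare class as the "positive" label of the curve, then pick the decision threshold that maximizes precision among all thresholds whose recall is still at least 0.60.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_recall_curve

# Hypothetical imbalanced data: class 0 plays the role of "neg" (~5% of samples)
X, y = make_classification(n_samples=5000, weights=[0.05, 0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_train, y_train)

# Score each sample by its predicted probability of being "neg" (class 0),
# and compute the precision-recall curve with class 0 as the positive label
scores = clf.predict_proba(X_test)[:, 0]
precision, recall, thresholds = precision_recall_curve(y_test, scores, pos_label=0)

# precision/recall have one more entry than thresholds; drop the final
# (recall=0) point, keep only thresholds where recall >= 0.60, and among
# those pick the threshold with the highest precision
mask = recall[:-1] >= 0.60
best = np.argmax(precision[:-1][mask])
threshold = thresholds[mask][best]

# Final "neg" predictions at the chosen operating point
pred_neg = scores >= threshold
```

The model itself is unchanged; only the cutoff on predict_proba moves, which is exactly the tradeoff the linked example plots.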