Specificity at different thresholds (in the same way as sklearn.metrics.precision_recall_curve)
I would like to get the specificities in the same way as precisions and recalls are given by precision_recall_curve:
precisions, recalls, thresholds = sklearn.metrics.precision_recall_curve(ground_truth, predictions)
How can I achieve that?
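For context, this is what precision_recall_curve returns (a minimal run; the labels and scores here are made-up sample data):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Illustrative data: binary ground-truth labels and predicted scores.
ground_truth = np.array([0, 0, 1, 1])
predictions = np.array([0.1, 0.4, 0.35, 0.8])

precisions, recalls, thresholds = precision_recall_curve(ground_truth, predictions)
# precisions and recalls have one more entry than thresholds;
# the final pair is always (precision=1.0, recall=0.0).
```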
So, I looked at the source code for sklearn.metrics.precision_recall_curve ( https://github.com/scikit-learn/scikit-learn/blob/2e90b897768fd360ef855cb46e0b37f2b6faaf72/sklearn/metrics/_ranking.py ) and altered it to fit my needs:
import numpy as np
# _binary_clf_curve is a private helper; in recent scikit-learn versions it
# lives in sklearn.metrics._ranking (older versions used sklearn.metrics.ranking).
from sklearn.metrics._ranking import _binary_clf_curve


def specificity_sensitivity_curve(y_true, probas_pred):
    """Compute specificity-sensitivity pairs for different probability thresholds.

    For reference, see 'precision_recall_curve'.
    """
    fps, tps, thresholds = _binary_clf_curve(y_true, probas_pred)
    sensitivity = tps / tps[-1]              # TP / P
    specificity = (fps[-1] - fps) / fps[-1]  # TN / N
    # Stop as soon as full sensitivity is reached, then reverse the order
    # so thresholds are decreasing, mirroring precision_recall_curve.
    last_ind = tps.searchsorted(tps[-1])
    sl = slice(last_ind, None, -1)
    return np.r_[specificity[sl], 1], np.r_[sensitivity[sl], 0], thresholds[sl]
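An alternative that avoids the private helper entirely: sklearn's public roc_curve already yields the same quantities, since its FPR is FP / (FP + TN), so specificity is 1 - FPR and TPR is exactly sensitivity. A sketch with made-up sample data:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative data: binary ground-truth labels and predicted scores.
ground_truth = np.array([0, 0, 1, 1])
predictions = np.array([0.1, 0.4, 0.35, 0.8])

# roc_curve is public API: fpr = FP / (FP + TN), tpr = TP / (TP + FN).
fpr, tpr, thresholds = roc_curve(ground_truth, predictions)
specificity = 1 - fpr   # TN / N
sensitivity = tpr       # TP / P
```

The trade-off is that roc_curve orders its output by decreasing threshold and may drop suboptimal thresholds, so the arrays are not element-for-element identical to those of the hand-rolled function above, but the (specificity, sensitivity) pairs describe the same curve.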