Results of calculating precision, recall, and related metrics seem odd
I am simulating a search engine that retrieves 10 documents, of which only 5 are relevant.
import numpy as np
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve
y_true = np.array([True, True, False, True, False, True, False, False, False, True])
Lowering the threshold retrieves more documents:
y_scores = np.array([1, .9, .8, .7, .6, .5, .4, .3, .2, .1])
Now get the precisions, recalls, and thresholds:
precisions, recalls, thresholds1 = precision_recall_curve(y_true, y_scores)
print("\nPrecisions:")
for pr in precisions:
    print('{0:0.2f}'.format(pr), end='; ')
print("\nRecalls:")
for rec in recalls:
    print('{0:0.2f}'.format(rec), end='; ')
print("\nThresholds:")
for thr in thresholds1:
    print('{0:0.2f}'.format(thr), end='; ')
Output 1:
Precisions:
0.50; 0.44; 0.50; 0.57; 0.67; 0.60; 0.75; 0.67; 1.00; 1.00; 1.00;
Recalls:
1.00; 0.80; 0.80; 0.80; 0.80; 0.60; 0.60; 0.40; 0.40; 0.20; 0.00;
Thresholds:
0.10; 0.20; 0.30; 0.40; 0.50; 0.60; 0.70; 0.80; 0.90; 1.00;
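As a sanity check, any one of these rows can be reproduced by hand. A minimal sketch using only NumPy, picking the threshold-0.5 row arbitrarily:

```python
import numpy as np

y_true = np.array([True, True, False, True, False, True, False, False, False, True])
y_scores = np.array([1, .9, .8, .7, .6, .5, .4, .3, .2, .1])

# At threshold 0.5, six documents are retrieved (scores >= 0.5).
retrieved = y_scores >= 0.5
tp = np.sum(y_true & retrieved)          # 4 relevant documents retrieved
precision = tp / np.sum(retrieved)       # 4/6, approximately 0.67
recall = tp / np.sum(y_true)             # 4/5 = 0.80
print(round(precision, 2), round(recall, 2))  # 0.67 0.8
```

These match the precision 0.67 and recall 0.80 reported at threshold 0.50 above.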
The code for output 2:
falsePositiveRates, truePositiveRates, thresholds2 = roc_curve(y_true, y_scores, pos_label=True)
print("\nFPRs:")
for fpr in falsePositiveRates:
    print('{0:0.2f}'.format(fpr), end='; ')
print("\nTPRs:")
for tpr in truePositiveRates:
    print('{0:0.2f}'.format(tpr), end='; ')
print("\nThresholds:")
for thr in thresholds2:
    print('{0:0.2f}'.format(thr), end='; ')
Output 2:
FPRs:
0.00; 0.00; 0.20; 0.20; 0.40; 0.40; 1.00; 1.00;
TPRs:
0.20; 0.40; 0.40; 0.60; 0.60; 0.80; 0.80; 1.00;
Thresholds:
1.00; 0.90; 0.80; 0.70; 0.60; 0.50; 0.20; 0.10;
Questions: In output 1, why is the last precision (which will be the first point in the plot) computed as 1 rather than 0?
In output 2, why do the FPRs, TPRs, and thresholds have length 8 rather than 10?
In output 1, why is the last precision (the first point in the plot) set to 1 instead of 0?
At the strictest threshold you retrieve only one item, and it is relevant (a true positive), so precision is 1. In addition, precision_recall_curve always appends a final point with precision 1 and recall 0 by definition, so the plotted curve ends on the precision axis; that appended point has no corresponding threshold.
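The appended end point can be verified directly. A minimal sketch against scikit-learn's precision_recall_curve:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([True, True, False, True, False, True, False, False, False, True])
y_scores = np.array([1, .9, .8, .7, .6, .5, .4, .3, .2, .1])

precisions, recalls, thresholds = precision_recall_curve(y_true, y_scores)

# There is one more precision/recall value than there are thresholds: the
# final (precision=1, recall=0) point is appended by definition and has no
# corresponding threshold.
print(len(precisions), len(recalls), len(thresholds))  # 11 11 10
print(precisions[-1], recalls[-1])                     # 1.0 0.0
```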
In output 2, why is the count of FPRs, TPRs, and thresholds 8 instead of 10?
You left drop_intermediate at its default of True, so roc_curve drops thresholds that cannot lie on an optimal ROC curve. Here 0.3 and 0.4 are such suboptimal thresholds, which is why two of the ten scores are missing from the output.