Why is the ROC_AUC from cross_val_score so much higher than manually using a StratifiedKFold with metrics.roc_auc_score for an XGB classifier?
skf = StratifiedKFold(n_splits=5, shuffle=False)
roc_aucs_temp = []
for i, (train_index, test_index) in enumerate(skf.split(X_train_xgb, y_train_xgb)):
    X_train_fold, X_test_fold = X_train_xgb.iloc[train_index], X_train_xgb.iloc[test_index]
    y_train_fold, y_test_fold = y_train_xgb[train_index], y_train_xgb[test_index]
    xgb_temp.fit(X_train_fold, y_train_fold)
    y_pred = model.predict(X_test_fold)
    roc_aucs_temp.append(metrics.roc_auc_score(y_test_fold, y_pred))
print(roc_aucs_temp)
[0.8622474747474748, 0.8497474747474747, 0.9045918367346939, 0.8670918367346939, 0.879591836734694]
# this uses the same CV object as method 1
print(cross_val_score(xgb, X_train_xgb, y_train_xgb, cv=skf, scoring='roc_auc'))
[0.9614899 0.94861111 0.96045918 0.97270408 0.96977041]
I may have misunderstood what cross_val_score does, but as I understand it, it creates K folds of training and test data, then repeatedly trains the model on K-1 folds and tests it on the remaining fold. Its score should be roughly the same as manually creating the K folds with StratifiedKFold. Why isn't it?
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html
The documentation for roc_auc_score indicates that its second argument should be label scores, not predicted labels. As shown in its examples, you probably want something like model.predict_proba(X_test_fold)[:, 1] instead of model.predict(X_test_fold). cross_val_score with scoring='roc_auc' evaluates the model that way, which is why you see the difference.
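To illustrate, here is a minimal, self-contained sketch of the corrected loop. It substitutes sklearn's GradientBoostingClassifier and a synthetic dataset for your XGBClassifier and data (assumptions, since xgboost may not be installed here; the fit/predict_proba API is the same). With predict_proba scores, the manual fold AUCs match cross_val_score:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=False)
clf = GradientBoostingClassifier(random_state=0)

manual_aucs = []
for train_index, test_index in skf.split(X, y):
    clf.fit(X[train_index], y[train_index])
    # Use the positive-class probability column as the score,
    # not the hard 0/1 labels from predict()
    y_score = clf.predict_proba(X[test_index])[:, 1]
    manual_aucs.append(roc_auc_score(y[test_index], y_score))

cv_aucs = cross_val_score(clf, X, y, cv=skf, scoring='roc_auc')
print(np.allclose(manual_aucs, cv_aucs))
```

Note that ROC AUC only depends on how the scores rank the samples, so any monotonic transform of the probabilities (e.g. decision_function output) gives the same AUC; hard labels collapse that ranking to two values, which is why your manual numbers came out lower.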