Getting roc_auc_score nan for cross validation of multiclass classification
I'm working on a multiclass classification problem with imbalanced data and 3 classes. I used StratifiedKFold to split the data and SMOTE to oversample it. When I use cross-validation to evaluate my models, I get results for the F1 score, but for roc_auc I only get nan:
from sklearn.metrics import f1_score, make_scorer, roc_auc_score
from sklearn.model_selection import cross_val_score

for key, classifier in classifiers.items():
    classifier.fit(X_sm, y_sm)
    training_score1 = cross_val_score(
        classifier, X_sm, y_sm,
        scoring=make_scorer(f1_score, average='macro', labels=[2]), cv=5)
    print("Classifiers: ", classifier.__class__.__name__, "Has a training score of",
          round(training_score1.mean(), 2) * 100, "% F1 score")
    training_score2 = cross_val_score(
        classifier, X_sm, y_sm,
        scoring=make_scorer(roc_auc_score, average='macro', multi_class='ovo'), cv=5)
    print("Classifiers: ", classifier.__class__.__name__, "Has a training score of",
          round(training_score2.mean(), 2) * 100, "% Roc_auc score")
X_sm and y_sm are both arrays, and the results in this case are:
Classifiers: LogisticRegression Has a training score of 77.0 % F1 score
Classifiers: LogisticRegression Has a training score of nan % Roc_auc score
Classifiers: KNeighborsClassifier Has a training score of 94.0 % F1 score
Classifiers: KNeighborsClassifier Has a training score of nan % Roc_auc score
Classifiers: SVC Has a training score of 89.0 % F1 score
Classifiers: SVC Has a training score of nan % Roc_auc score
Classifiers: DecisionTreeClassifier Has a training score of 83.0 % F1 score
Classifiers: DecisionTreeClassifier Has a training score of nan % Roc_auc score
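A quick way to see why the scores come back as nan (not shown in the original post) is to re-run with error_score='raise', which makes cross_val_score raise the scoring exception instead of silently recording nan for failed folds. A minimal sketch on synthetic stand-in data, since the question's X_sm/y_sm are not available:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, roc_auc_score
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the question's X_sm/y_sm (3 classes).
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)

# error_score='raise' surfaces the underlying exception instead of nan.
caught = None
try:
    cross_val_score(LogisticRegression(max_iter=1000), X, y,
                    scoring=make_scorer(roc_auc_score, average='macro',
                                        multi_class='ovo'),
                    cv=5, error_score='raise')
except Exception as exc:
    caught = exc
    print(type(exc).__name__, ":", exc)
```

The scorer fails because it feeds hard class predictions to roc_auc_score, which expects per-class scores for the multiclass case; that is exactly the problem the answer below addresses.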
I tried to use cross_validate, but it's not working for me either.
The auROC metric requires a continuous confidence measure, as opposed to hard class predictions, so you need to set needs_proba=True or needs_threshold=True. The latter uses the classifier's decision_function or predict_proba, whereas the former only tries to use predict_proba; since SVMs are not natively probabilistic, you'll want needs_threshold. (Do not set either of these for F1, which only uses hard class predictions.)
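Applied to the question's code, this means passing needs_proba=True to make_scorer (on scikit-learn >= 1.4 the equivalent is response_method='predict_proba', as needs_proba was deprecated there). A minimal sketch on synthetic stand-in data, since the question's X_sm/y_sm are not shown; it uses the built-in 'roc_auc_ovo' scorer string, which wraps predict_proba with macro OvO averaging and so matches the intended make_scorer call:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the question's X_sm/y_sm (3 classes).
X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)

# 'roc_auc_ovo' calls predict_proba under the hood, giving roc_auc_score
# the continuous per-class scores it needs, so no nan is produced.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         scoring='roc_auc_ovo', cv=5)
print("mean ROC AUC (OvO):", round(scores.mean(), 2))
```

For SVC without probability=True, build the scorer with needs_threshold=True instead so the classifier's decision_function is used.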