
How to display confusion matrix and report (recall, precision, f-measure) for each cross validation fold

I am trying to perform 10-fold cross validation in Python. I know how to calculate the confusion matrix and the classification report for a single train/test split (for example, 80% training and 20% testing). The problem is that I don't know how to calculate the confusion matrix and report for each fold; with 10 folds, I only know the code for the average accuracy.
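For reference, the "average accuracy" pattern the question describes usually looks like scikit-learn's cross_val_score. This is only a minimal sketch; the DecisionTreeClassifier and the breast cancer data are placeholders borrowed from the answer below, not the asker's actual setup:

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = DecisionTreeClassifier()

# cross_val_score returns one accuracy per fold; averaging them gives a
# single number but no per-fold confusion matrix or classification report
scores = cross_val_score(model, X, y, cv=10)
print(scores.mean())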

Here is a reproducible example with the breast cancer data and 3-fold CV for simplicity:

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)
n_splits = 3
kf = KFold(n_splits=n_splits, shuffle=True)
model = DecisionTreeClassifier()

# Fit on the training part of each fold and evaluate on the held-out part
for train_index, val_index in kf.split(X):
    model.fit(X[train_index], y[train_index])
    pred = model.predict(X[val_index])
    print(confusion_matrix(y[val_index], pred))
    print(classification_report(y[val_index], pred))

The result is three confusion matrices and classification reports, one per CV fold:

[[ 63   9]
 [ 10 108]]
              precision    recall  f1-score   support

           0       0.86      0.88      0.87        72
           1       0.92      0.92      0.92       118

   micro avg       0.90      0.90      0.90       190
   macro avg       0.89      0.90      0.89       190
weighted avg       0.90      0.90      0.90       190

[[ 66   8]
 [  6 110]]
              precision    recall  f1-score   support

           0       0.92      0.89      0.90        74
           1       0.93      0.95      0.94       116

   micro avg       0.93      0.93      0.93       190
   macro avg       0.92      0.92      0.92       190
weighted avg       0.93      0.93      0.93       190

[[ 59   7]
 [  8 115]]
              precision    recall  f1-score   support

           0       0.88      0.89      0.89        66
           1       0.94      0.93      0.94       123

   micro avg       0.92      0.92      0.92       189
   macro avg       0.91      0.91      0.91       189
weighted avg       0.92      0.92      0.92       189
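Beyond the per-fold output above, an aggregate view can also be built. This is not part of the original answer, just a sketch that reuses the X, y, model and kf objects defined in the example and assumes the binary breast cancer labels (hence the 2x2 matrix):

import numpy as np
from sklearn.model_selection import cross_val_predict

# Option 1: sum the per-fold confusion matrices (2x2 because the
# breast cancer target is binary)
total_cm = np.zeros((2, 2), dtype=int)
for train_index, val_index in kf.split(X):
    model.fit(X[train_index], y[train_index])
    total_cm += confusion_matrix(y[val_index], model.predict(X[val_index]))
print(total_cm)

# Option 2: collect out-of-fold predictions and print one overall report
oof_pred = cross_val_predict(model, X, y, cv=kf)
print(classification_report(y, oof_pred))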
