
Machine Learning in R - confusion matrix of an ensemble

I am trying to access the overall accuracy (or the confusionMatrix) of an ensemble of several classifiers, but I can't seem to find out how to report this information.

What I have already tried:

confusionMatrix(fits_predicts,reference=(mnist_27$test$y))

Error in table(data, reference, dnn = dnn, ...) : all arguments must have the same length

library(caret)
library(dslabs)
set.seed(1)
data("mnist_27")

models <- c("glm", "lda",  "naive_bayes",  "svmLinear", 
            "gamboost",  "gamLoess", "qda", 
            "knn", "kknn", "loclda", "gam",
            "rf", "ranger",  "wsrf", "Rborist", 
            "avNNet", "mlp", "monmlp",
            "adaboost", "gbm",
            "svmRadial", "svmRadialCost", "svmRadialSigma")

fits <- lapply(models, function(model){ 
  print(model)
  train(y ~ ., method = model, data = mnist_27$train)
}) 

names(fits) <- models

fits_predicts <- sapply(fits, function(fits){ predict(fits,mnist_27$test)
  })

I would like to report the confusionMatrix across the different models.

You are not training any ensemble here. You are just training a list of several models, without combining them in any way, which is definitely not an ensemble.

Given that, the error you get is not unexpected, since confusionMatrix expects a single set of predictions (which would be the case if you indeed had an ensemble), and not several of them.
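
(As a side note, not part of the original answer: if you did want a single ensemble prediction, a minimal sketch would be a simple majority vote over the predictions collected in fits_predicts above, which then gives confusionMatrix the single vector it expects. The voting rule and variable names below are just an illustrative assumption.)

# hypothetical majority-vote ensemble over the per-model predictions
votes <- rowMeans(fits_predicts == "7")                      # share of models predicting "7" for each test case
ensemble_pred <- factor(ifelse(votes > 0.5, "7", "2"),
                        levels = levels(mnist_27$test$y))    # align factor levels with the reference
confusionMatrix(ensemble_pred, reference = mnist_27$test$y)  # now a single prediction vector, as expected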

Keeping only the first four models of your list for simplicity, and changing slightly your fits_predicts definition so that it gives a data frame, i.e.:

models <- c("glm", "lda",  "naive_bayes",  "svmLinear")

fits_predicts <- as.data.frame( sapply(fits, function(fits){ predict(fits,mnist_27$test)
}))

# rest of your code as-is

here is how you can get the confusion matrix for each of your models (lapply over the data frame applies confusionMatrix column by column, i.e. to each model's predictions):

cm <- lapply(fits_predicts, function(fits_predicts) {
  confusionMatrix(fits_predicts, reference = mnist_27$test$y)
})

which gives:

> cm
$glm
Confusion Matrix and Statistics

          Reference
Prediction  2  7
         2 82 26
         7 24 68

               Accuracy : 0.75           
                 95% CI : (0.684, 0.8084)
    No Information Rate : 0.53           
    P-Value [Acc > NIR] : 1.266e-10      

                  Kappa : 0.4976         
 Mcnemar's Test P-Value : 0.8875         

            Sensitivity : 0.7736         
            Specificity : 0.7234         
         Pos Pred Value : 0.7593         
         Neg Pred Value : 0.7391         
             Prevalence : 0.5300         
         Detection Rate : 0.4100         
   Detection Prevalence : 0.5400         
      Balanced Accuracy : 0.7485         

       'Positive' Class : 2              


$lda
Confusion Matrix and Statistics

          Reference
Prediction  2  7
         2 82 26
         7 24 68

               Accuracy : 0.75           
                 95% CI : (0.684, 0.8084)
    No Information Rate : 0.53           
    P-Value [Acc > NIR] : 1.266e-10      

                  Kappa : 0.4976         
 Mcnemar's Test P-Value : 0.8875         

            Sensitivity : 0.7736         
            Specificity : 0.7234         
         Pos Pred Value : 0.7593         
         Neg Pred Value : 0.7391         
             Prevalence : 0.5300         
         Detection Rate : 0.4100         
   Detection Prevalence : 0.5400         
      Balanced Accuracy : 0.7485         

       'Positive' Class : 2              


$naive_bayes
Confusion Matrix and Statistics

          Reference
Prediction  2  7
         2 88 23
         7 18 71

               Accuracy : 0.795           
                 95% CI : (0.7323, 0.8487)
    No Information Rate : 0.53            
    P-Value [Acc > NIR] : 5.821e-15       

                  Kappa : 0.5873          
 Mcnemar's Test P-Value : 0.5322          

            Sensitivity : 0.8302          
            Specificity : 0.7553          
         Pos Pred Value : 0.7928          
         Neg Pred Value : 0.7978          
             Prevalence : 0.5300          
         Detection Rate : 0.4400          
   Detection Prevalence : 0.5550          
      Balanced Accuracy : 0.7928          

       'Positive' Class : 2               


$svmLinear
Confusion Matrix and Statistics

          Reference
Prediction  2  7
         2 81 24
         7 25 70

               Accuracy : 0.755           
                 95% CI : (0.6894, 0.8129)
    No Information Rate : 0.53            
    P-Value [Acc > NIR] : 4.656e-11       

                  Kappa : 0.5085          
 Mcnemar's Test P-Value : 1               

            Sensitivity : 0.7642          
            Specificity : 0.7447          
         Pos Pred Value : 0.7714          
         Neg Pred Value : 0.7368          
             Prevalence : 0.5300          
         Detection Rate : 0.4050          
   Detection Prevalence : 0.5250          
      Balanced Accuracy : 0.7544          

       'Positive' Class : 2       

You can also access the individual confusion matrix of each model, e.g. for lda:

> cm['lda']
$lda
Confusion Matrix and Statistics

          Reference
Prediction  2  7
         2 82 26
         7 24 68

               Accuracy : 0.75           
                 95% CI : (0.684, 0.8084)
    No Information Rate : 0.53           
    P-Value [Acc > NIR] : 1.266e-10      

                  Kappa : 0.4976         
 Mcnemar's Test P-Value : 0.8875         

            Sensitivity : 0.7736         
            Specificity : 0.7234         
         Pos Pred Value : 0.7593         
         Neg Pred Value : 0.7391         
             Prevalence : 0.5300         
         Detection Rate : 0.4100         
   Detection Prevalence : 0.5400         
      Balanced Accuracy : 0.7485         

       'Positive' Class : 2   
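
(Also not in the original answer, but possibly what you were after with the "overall accuracy": each confusionMatrix object stored in cm carries an $overall component, so the accuracy of all models can be collected in one go.)

# pull the overall accuracy out of every stored confusion matrix
sapply(cm, function(x) x$overall["Accuracy"])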

