
Scikit-learn, get accuracy scores for each class

Is there a built-in way to get accuracy scores for each class separately? I know sklearn gives the overall accuracy via metrics.accuracy_score. Is there a way to get the breakdown of accuracy scores for individual classes, something similar to metrics.classification_report?

from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']

classification_report does not give accuracy scores:

print(classification_report(y_true, y_pred, target_names=target_names, digits=4))

Out[9]:
             precision    recall  f1-score   support

    class 0     0.5000    1.0000    0.6667         1
    class 1     0.0000    0.0000    0.0000         1
    class 2     1.0000    0.6667    0.8000         3

avg / total     0.7000    0.6000    0.6133         5

Accuracy score gives only the overall accuracy:

accuracy_score(y_true, y_pred)
Out[10]: 0.59999999999999998
One way is to build the confusion matrix and divide its diagonal by the row sums:

from sklearn.metrics import confusion_matrix

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
matrix = confusion_matrix(y_true, y_pred)
# Correct predictions per class (diagonal) over the number of true samples of each class (row sums)
matrix.diagonal() / matrix.sum(axis=1)
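For these inputs this yields array([1., 0., 0.66666667]): the fraction of each class's true samples that were predicted correctly, which is also each class's recall.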

You can use sklearn's confusion matrix to get the accuracy of each class:

from sklearn.metrics import confusion_matrix
import numpy as np

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']

# Get the confusion matrix
cm = confusion_matrix(y_true, y_pred)
# array([[1, 0, 0],
#        [1, 0, 0],
#        [0, 1, 2]])

# Now normalize the matrix row-wise, so each row sums to 1
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
# array([[1.        , 0.        , 0.        ],
#        [1.        , 0.        , 0.        ],
#        [0.        , 0.33333333, 0.66666667]])

# The diagonal entries are the accuracies of each class
cm.diagonal()
# array([1.        , 0.        , 0.66666667])
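
This row-normalized diagonal is exactly what sklearn reports as per-class recall, so you can sanity-check it (using the same y_true and y_pred as above):

from sklearn.metrics import recall_score

recall_score(y_true, y_pred, average=None)
# array([1.        , 0.        , 0.66666667])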

I am adding my answer because I haven't found an answer to this exact question online, and because I think the other calculation methods suggested before mine are incorrect.

Remember that accuracy is defined as:

accuracy = (true_positives + true_negatives) / all_samples

Or, to put it into words: it is the ratio between the number of correctly classified examples (either positive or negative) and the total number of examples in the test set.

One important thing to note is that for both TN and FN, the "negative" is class-agnostic, meaning "not predicted as the specific class in question". For example, consider the following:

y_true = ['cat', 'dog', 'bird', 'bird']
y_pred = ['cat', 'dog', 'cat', 'dog']

Here, the third prediction ('cat') and the fourth one ('dog') are false negatives for the class 'bird', simply because they are not 'bird'.
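
Counting for the class 'bird' in this example: TP = 0 (no true 'bird' was predicted as 'bird'), FN = 2 (both true 'bird' samples were predicted as something else), FP = 0 (nothing was predicted as 'bird'), and TN = 2 (the 'cat' and 'dog' samples were correctly not predicted as 'bird'), so the accuracy for 'bird' is (0 + 2) / 4 = 0.5.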

To your question:

As far as I know, there is currently no package that provides a method that does what you are looking for, but based on the definition of accuracy, we can use the confusion matrix method from sklearn to calculate it ourselves.

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = ['cat', 'dog', 'bird', 'bird']
y_pred = ['cat', 'dog', 'cat', 'dog']

# The sorted set of labels; confusion_matrix orders its rows/columns the same way
classes = np.unique(y_true + y_pred)

# Get the confusion matrix
cm = confusion_matrix(y_true, y_pred)

# We will store the results in a dictionary for easy access later
per_class_accuracies = {}

# Calculate the accuracy for each one of our classes
for idx, cls in enumerate(classes):
    # True negatives are all the samples that are not our current GT class (not the current row)
    # and were not predicted as the current class (not the current column)
    true_negatives = np.sum(np.delete(np.delete(cm, idx, axis=0), idx, axis=1))

    # True positives are all the samples of our current GT class that were predicted as such
    true_positives = cm[idx, idx]

    # The accuracy for the current class is the ratio between correct predictions and all predictions
    per_class_accuracies[cls] = (true_positives + true_negatives) / np.sum(cm)
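
With the 'cat'/'dog'/'bird' example from above, this yields per_class_accuracies == {'bird': 0.5, 'cat': 0.75, 'dog': 0.75}, matching the hand count for 'bird' shown earlier.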

The original question was posted a while ago, but this might help anyone who comes here through Google, like me.

You can code it yourself: accuracy is nothing more than the ratio between the correctly classified samples (true positives and true negatives) and the total number of samples you have.

Then, for a given class, instead of considering all the samples, you only take into account those of your class.

You can then try this. Let's first define a handy function:

def indices(l, val):
    retval = []
    last = 0
    while val in l[last:]:
        i = l[last:].index(val)
        retval.append(last + i)
        last += i + 1
    return retval

The function above returns the indices of a given value val in the list l.

def class_accuracy(y_pred, y_true, cls):
    # Restrict to the samples whose true label is cls
    index = indices(y_true, cls)
    y_pred = [y_pred[i] for i in index]
    y_true = [y_true[i] for i in index]
    # Count how many of them were classified correctly
    tp = sum(1 for k in range(len(y_pred)) if y_true[k] == y_pred[k])
    return tp / float(len(y_pred))

The second function returns the per-class accuracy you are looking for.
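
For example, with the data from the question:

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
class_accuracy(y_pred, y_true, 2)
# 0.6666666666666666 (2 of the 3 true class-2 samples were predicted correctly)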

Your question makes no sense. Accuracy is a global measure, and there is no such thing as class-wise accuracy. The suggestion to normalize by true cases (rows) yields something called the true-positive rate, sensitivity or recall, depending on the context. Likewise, if you normalize by predictions (columns), it's called precision or positive predictive value.
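
You can see that terminology directly on the question's data, since sklearn exposes both measures per class:

from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
cm = confusion_matrix(y_true, y_pred)

# Row-normalized diagonal = per-class recall (true-positive rate)
print(cm.diagonal() / cm.sum(axis=1))                # [1.  0.  0.66666667]
print(recall_score(y_true, y_pred, average=None))    # the same values

# Column-normalized diagonal = per-class precision
print(cm.diagonal() / cm.sum(axis=0))                # [0.5 0.  1. ]
print(precision_score(y_true, y_pred, average=None)) # the same values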

In my opinion, accuracy is a generic term that has different dimensions, e.g. precision, recall, f1-score (or even specificity, sensitivity), etc., which provide accuracy measures from different perspectives. Hence, the function classification_report outputs a range of accuracy measures for each class. For instance, precision gives the proportion of accurately retrieved instances (i.e. true positives) among all instances retrieved for a particular class (both true positives and false positives).

The question is misleading. Accuracy scores for each class equal the overall accuracy score. Consider the confusion matrix:

from sklearn.metrics import confusion_matrix
import numpy as np

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

#Get the confusion matrix
cm = confusion_matrix(y_true, y_pred)
print(cm)

This gives you:

[[1 0 0]
 [1 0 0]
 [0 1 2]]

Accuracy is calculated as the proportion of correctly classified samples to all samples:

accuracy = (TP + TN) / (P + N)

In terms of the confusion matrix, the numerator (TP + TN) is the sum of the diagonal, and the denominator is the sum of all cells; both are the same for every class.
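
For the overall accuracy, at least, this is easy to verify on the matrix above:

# Sum of the diagonal over the sum of all cells
print(np.trace(cm) / np.sum(cm))  # 0.6, the same as accuracy_score(y_true, y_pred)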

Here is a solution using pandas:

import numpy as np
import pandas as pd

def classwise_accuracy(y_true, y_pred):
    # Cross-tabulation of true vs. predicted labels (a labelled confusion matrix)
    a = pd.crosstab(pd.Series(y_true, name='true'), pd.Series(y_pred, name='pred'))
    # Correct predictions (the diagonal) over all samples of each true class (row sums)
    print(np.diag(a) / a.sum(axis=1))
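
For example, with the question's data this should print something like:

classwise_accuracy([0, 1, 2, 2, 2], [0, 0, 2, 2, 1])
# true
# 0    1.000000
# 1    0.000000
# 2    0.666667
# dtype: float64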

For the multilabel case, you can use this:

import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

def get_accuracies(true_labels, predictions):
    # https://scikit-learn.org/stable/modules/generated/sklearn.metrics.multilabel_confusion_matrix.html
    # cm[i] is the 2x2 matrix [[TN, FP], [FN, TP]] for label i
    cm = multilabel_confusion_matrix(true_labels, predictions)
    total_count = true_labels.shape[0]
    accuracies = []
    for i in range(true_labels.shape[1]):
        true_positive_count = cm[i, 1, 1].item()
        true_negative_count = cm[i, 0, 0].item()
        accuracy = (true_positive_count + true_negative_count) / total_count
        accuracies.append(accuracy)
    return accuracies
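
For example, with made-up binary indicator arrays (three samples, three labels; purely illustrative data):

true_labels = np.array([[1, 0, 1],
                        [0, 1, 0],
                        [1, 1, 0]])
predictions = np.array([[1, 0, 0],
                        [0, 1, 1],
                        [1, 0, 0]])
print(get_accuracies(true_labels, predictions))
# [1.0, 0.6666666666666666, 0.3333333333333333]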

No, there is no built-in way to get accuracy scores for each class separately. But you can use the following snippet to get the accuracy, sensitivity, and specificity of each class.

import numpy as np

def class_metric(confusion_matrix, class_id):
    """
    Compute sensitivity, specificity, and accuracy for one class
    of a multi-class confusion matrix.

    class_id: index of the particular class
    """
    confusion_matrix = np.float64(confusion_matrix)
    TP = confusion_matrix[class_id, class_id]
    FN = np.sum(confusion_matrix[class_id]) - TP
    FP = np.sum(confusion_matrix[:, class_id]) - TP
    TN = np.sum(confusion_matrix) - TP - FN - FP

    # sensitivity = 0 if TP == 0
    if TP != 0:
        sensitivity = TP / (TP + FN)
    else:
        sensitivity = 0.

    specificity = TN / (TN + FP)
    accuracy = (TP + TN) / (TP + FP + FN + TN)

    return sensitivity, specificity, accuracy
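
For example, applied to the confusion matrix from the question (class 2 has TP=2, FN=1, FP=0, TN=2):

from sklearn.metrics import confusion_matrix

cm = confusion_matrix([0, 1, 2, 2, 2], [0, 0, 2, 2, 1])
print(class_metric(cm, 2))
# (0.6666666666666666, 1.0, 0.8)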

Upvote if you found it helpful.
