
Accuracy and Recall values are the same

I have trained CNN models using PyTorch with the Python programming language, and I am trying to obtain metrics on the test data set using sklearn.metrics as shown below. But I get the same result for accuracy and recall. Are there any best practices for reporting metrics? Is this result correct?

from sklearn.metrics import accuracy_score, precision_score, f1_score, recall_score

test_accuracy_score = accuracy_score(output_list, prediction_list)
test_precision_score = precision_score(output_list, prediction_list, average='weighted')
test_f1_score = f1_score(output_list, prediction_list, average='weighted')
test_recall_score = recall_score(output_list, prediction_list, average='weighted')
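As a quick sanity check, the behaviour can be reproduced with made-up labels (the `output_list` and `prediction_list` below are hypothetical, not from the actual model):

```python
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical ground-truth and predicted labels (3 classes, imbalanced)
output_list = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
prediction_list = [0, 0, 1, 1, 2, 2, 2, 2, 0, 2]

acc = accuracy_score(output_list, prediction_list)
rec = recall_score(output_list, prediction_list, average='weighted')
print(acc, rec)  # accuracy and weighted recall agree (up to float rounding)
```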

If you want a detailed result, just import classification_report and print:

print(classification_report(output_list, prediction_list)), which also shows the support for each class (the number of instances).
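For illustration, with small hypothetical label lists (`y_true` and `y_pred` are made up here):

```python
from sklearn.metrics import classification_report

# Hypothetical labels, just to show the report layout
y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 0, 2]

report = classification_report(y_true, y_pred)
print(report)  # one row per class with precision, recall, f1-score and support
```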

Your weighted recall returns the recall for each class, adjusted for the number of elements in each class. So we have accuracy, which is defined as:

acc = (TP + TN) / (TP + TN + FP + FN) -- T for true, P for positive, etc.

and recall defined for class1 and class2 (let's assume binary classification):

recall1 = TP / (TP + FN) for class1

recall2 = TP / (TP + FN) for class2, which translates to TN / (TN + FP) (if you treat class1 as positive and class2 as negative)

weighted recall returns the number

w_recall = ElementsOfClass1overAll * recall1 + ElementsOfClass2overAll * recall2 = ElementsOfClass1overAll * (TP / (TP + FN)) + ElementsOfClass2overAll * (TN / (TN + FP))
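Plugging some hypothetical confusion-matrix counts into the formula above (equal class sizes, so both weights are 0.5) makes it concrete:

```python
# Hypothetical binary confusion counts: 50 samples in each class
TP, FN = 40, 10   # class1 (treated as positive)
TN, FP = 30, 20   # class2 (treated as negative)
N = TP + FN + TN + FP

recall1 = TP / (TP + FN)            # 0.8
recall2 = TN / (TN + FP)            # 0.6
w1 = (TP + FN) / N                  # fraction of samples in class1
w2 = (TN + FP) / N                  # fraction of samples in class2

w_recall = w1 * recall1 + w2 * recall2
acc = (TP + TN) / N
print(w_recall, acc)  # both equal 0.7 (up to floating point)
```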

So, at first glance it looks like w_recall only equals accuracy when ElementsOfClass1overAll and ElementsOfClass2overAll are equal. But note that ElementsOfClass1overAll = (TP + FN) / N and ElementsOfClass2overAll = (TN + FP) / N (with N the total number of samples), so the weighted sum simplifies to TP / N + TN / N = (TP + TN) / N, which is exactly the accuracy. In other words, weighted recall always equals accuracy, whatever the class balance, so the identical values you are seeing are expected.

