
Micro, macro and weighted averages all have the same precision, recall and f1-score

I've been using different machine learning classifiers to run a sentiment analysis over positive, neutral and negative sentiments. When I inspect a classifier's metrics with sklearn's classification report, the micro, macro and weighted averages all show the same precision, recall and f1-score. Why could this be happening?

the code to print the classification report is:

from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred, target_names=['0','1','2']))

The results were shown in a screenshot (not reproduced here); the micro, macro and weighted average rows were identical.

Since the number of samples in your classes is fairly similar, and the precision and recall within each class are also fairly similar, the matching averages are most likely coincidental. If you call precision_recall_fscore_support directly, you should find that the values differ slightly; classification_report rounds to two decimal places by default, which makes them look identical.
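A minimal sketch of this check, using made-up labels in place of the asker's y_test and y_pred (which are not shown): precision_recall_fscore_support returns the unrounded averages, and classification_report's digits parameter controls the rounding in the printed table.

```python
from sklearn.metrics import precision_recall_fscore_support, classification_report

# Hypothetical labels standing in for the asker's data (3 classes: 0, 1, 2).
y_test = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 0, 1, 1, 2, 2, 2, 0]

# Collect the unrounded micro, macro and weighted averages.
results = {}
for avg in ("micro", "macro", "weighted"):
    p, r, f, _ = precision_recall_fscore_support(y_test, y_pred, average=avg)
    results[avg] = (p, r, f)
    print(f"{avg:>8}: precision={p:.6f} recall={r:.6f} f1={f:.6f}")

# Print the report with more decimal places than the default of 2.
print(classification_report(y_test, y_pred, digits=4))
```

With these labels the micro and macro precision differ (0.700000 vs. roughly 0.694444) even though both round to similar-looking values in the default two-decimal report, which illustrates the point: check the unrounded numbers before concluding the averages are truly equal.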
