
sklearn metrics classification_report version output

So I am doing some machine learning in Python in a Jupyter notebook, and I have a problem with the output format of sklearn's classification_report. There are two versions involved: 0.18.2 and 0.20.3. The 0.20.3 version produces the following output with my code:

from sklearn.metrics import classification_report

final = classification_report(y_test, predictions)
print(final)


              precision    recall  f1-score   support

      Female       0.47      0.21      0.34        26
        Male       0.71      0.85      0.78        55

   micro avg       0.67      0.67      0.67        81
   macro avg       0.59      0.56      0.56        81
weighted avg       0.63      0.67      0.64        81

However, I want the output to look like this:

              precision    recall  f1-score   support

      Female       0.47      0.21      0.34        26
        Male       0.71      0.85      0.78        55

 avg / total       0.63      0.67      0.64        81

The above output is from version 0.18.2 of the sklearn classification report, which for some reason I cannot reproduce with my installed version. The syntax of the call is the same in both 0.18.2 and 0.20.3. Is there a way to switch versions back and forth in a Jupyter notebook? Any advice would be appreciated.
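A minimal sketch of pinning scikit-learn to a specific version from a notebook cell, assuming a pip-based environment (note that a release as old as 0.18.2 may not install on recent Python versions; restart the kernel after installing so the pinned version is actually loaded):

# install a pinned scikit-learn version from a notebook cell
# (restart the kernel afterwards so the new version is imported)
%pip install scikit-learn==0.18.2

# after restarting the kernel, confirm which version is active
import sklearn
print(sklearn.__version__)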

You can use the output_dict option of classification_report to get a dictionary instead of a string as the return value, which you can then manipulate according to your needs. This is the relevant parameter:

output_dict : bool (default = False) If True, return output as dict

(see the scikit-learn v0.20 documentation for classification_report)
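With output_dict=True the return value is a nested dict whose top-level keys are the class labels plus the average rows, so you can see exactly which keys are available to delete or rename, e.g. (key names as produced by v0.20):

report = classification_report(y_test, predictions, output_dict=True)
print(report.keys())
# dict_keys(['Female', 'Male', 'micro avg', 'macro avg', 'weighted avg'])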

And this is one way to then change the output to your requirements:

import pandas as pd

# get the report as a dict instead of a string
report = classification_report(y_test, predictions, output_dict=True)
# delete the entries for the keys "micro avg" and "macro avg"
del report["micro avg"]
del report["macro avg"]
# rename the dict key "weighted avg" to "avg/total"
report["avg/total"] = report.pop("weighted avg")
print(pd.DataFrame(report).transpose())

The output should look like this (tested with v0.21.3*):

           precision  recall  f1-score  support
Female          0.47    0.21      0.34       26
Male            0.71    0.85      0.78       55
avg/total       0.63    0.67      0.64       81

*In v0.21.3 you need to use del report["accuracy"] instead of del report["micro avg"], since the metric names have changed.
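For v0.21+, a minimal sketch of the same transformation, assuming only the key name change described above:

import pandas as pd
from sklearn.metrics import classification_report

report = classification_report(y_test, predictions, output_dict=True)
# v0.21+ reports an "accuracy" entry instead of "micro avg"
del report["accuracy"]
del report["macro avg"]
report["avg/total"] = report.pop("weighted avg")
print(pd.DataFrame(report).transpose())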
