
Select only classes with best metric (f1 score) in a multiclass classification problem

I have a multiclass classification problem with almost 50 classes. After I ran the models, some of the classes get very good scores (0.70 and higher) and others perform badly.

What I want to do, based on the metrics I obtain, is keep only the classes with good results and create a model only for them.

How can I pick the good classes out of the classification report of my model?

These are the classes I want to extract and keep:

(screenshot of the classification report)

classification_report has an output_dict parameter that causes the function to return a dictionary instead of a string.

If you have a threshold (e.g. 0.7) for good f1-scores, you can iterate over the results and select the labels with values higher than the threshold:

from sklearn.metrics import classification_report

y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3]
y_pred = [0, 1, 2, 0, 0, 1, 4, 3, 1, 1, 2, 2, 2, 3, 2, 1, 3, 3, 3]
labels = [0, 1, 2, 3]

cr = classification_report(y_true, y_pred, output_dict=True)

for label in labels:
    # Note: the report's keys are strings, hence str(label);
    # the walrus operator (:=) requires Python 3.8+.
    if (f1 := cr[str(label)]["f1-score"]) > 0.7:
        print(f"Label {label}, f1-score: {f1:.3f}")

Output:

Label 0, f1-score: 0.750
Label 2, f1-score: 0.800
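
Since the goal is to retrain a model restricted to the well-performing classes, here is a minimal sketch of the next step: collect the good labels programmatically from the report dictionary and build a mask to filter the samples. It reuses the toy `y_true`/`y_pred` above; with a real dataset you would apply the same mask to your feature matrix as well (e.g. `X[mask]`). Note the report dictionary also contains summary entries (`"accuracy"`, `"macro avg"`, `"weighted avg"`) that must be skipped.

```python
import numpy as np
from sklearn.metrics import classification_report

# Same toy data as above; replace with your real y_true / y_pred.
y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3])
y_pred = np.array([0, 1, 2, 0, 0, 1, 4, 3, 1, 1, 2, 2, 2, 3, 2, 1, 3, 3, 3])

cr = classification_report(y_true, y_pred, output_dict=True)

# Collect labels whose f1-score exceeds the threshold,
# skipping the report's summary entries.
good_labels = [
    int(label)
    for label, metrics in cr.items()
    if label not in ("accuracy", "macro avg", "weighted avg")
    and metrics["f1-score"] > 0.7
]
print(good_labels)  # [0, 2]

# Keep only the samples whose true class is a good label;
# apply the same mask to the features before retraining.
mask = np.isin(y_true, good_labels)
y_good = y_true[mask]
```

Whether this is a good idea depends on what the new model should do with samples from the dropped classes at prediction time; a common alternative is to merge the poorly performing classes into a single "other" class instead of discarding them.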
