Select only classes with best metric (f1 score) in a multiclass classification problem
I have a multiclass classification problem with almost 50 classes. After I ran the models, some of the classes get very good scores (0.70 and higher) and others perform badly.
What I want to do, based on the metrics I obtain, is keep only the classes with good results and create a model only for them.
How can I pick the good classes out of the classification report of my model? These are the classes I want to extract and keep.
classification_report has an output_dict parameter that causes the function to return a dictionary instead of a string. If you have a threshold (e.g. 0.7) for good f1-scores, you can iterate over the results and select the labels with values higher than the threshold:
from sklearn.metrics import classification_report
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3]
y_pred = [0, 1, 2, 0, 0, 1, 4, 3, 1, 1, 2, 2, 2, 3, 2, 1, 3, 3, 3]
labels = [0, 1, 2, 3]
cr = classification_report(y_true, y_pred, output_dict=True)
for l in labels:
    if (f1_score := cr[str(l)]["f1-score"]) > 0.7:
        print(f"Label {l}, f1-score: {f1_score:.3f}")
Output:
Label 0, f1-score: 0.750
Label 2, f1-score: 0.800
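As a follow-up, here is a minimal sketch of the step the question actually asks about: collecting the well-scoring labels and retraining a model on only those samples. The features `X`, labels `y`, and the choice of `LogisticRegression` are placeholder assumptions for illustration; substitute your own data and estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for your real dataset (an assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # hypothetical features
y = rng.integers(0, 4, size=200)     # hypothetical labels for 4 classes

# Suppose these labels passed the f1-score threshold loop above.
good_labels = [0, 2]

# Keep only the rows whose label scored well, then refit on that subset.
mask = np.isin(y, good_labels)
X_good, y_good = X[mask], y[mask]

clf = LogisticRegression(max_iter=1000).fit(X_good, y_good)
print(sorted(clf.classes_))          # the model now only knows the good classes
```

Note that a model trained this way can never predict the dropped classes, so at inference time you would need a separate strategy (e.g. a first-stage filter) for samples belonging to the bad classes.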