CNN audio classifier trained with 3 classes and the sum of the predictions should be less than one
I built a CNN audio classifier with 3 classes. My problem is that there are actually more than 3 classes; for example, a fourth could be "noise". So when I call predict, the outputs for these 3 classes always sum to 1.
prediction = model.predict([X])
Is it somehow possible to extract a per-class confidence so that the sum of these confidences is less than 1?
If you use a softmax activation function you are forcing the outputs to sum to 1, thereby producing a relative confidence score between your classes. Without knowing more about your data and application, a "one vs. all" type scheme might work better for your purposes. For example, each class could have a sigmoid activation function and you could pick the highest prediction, but if that prediction doesn't score high enough on a sensitivity threshold, then none of the classes is predicted, and the result is empty or implicitly "noise."
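As a minimal sketch of this idea: in Keras you would swap the final `Dense(3, activation="softmax")` layer for `Dense(3, activation="sigmoid")` (trained with `binary_crossentropy`), so each class gets an independent score in [0, 1]. The decision rule on those scores could then look like this; the scores and the 0.5 threshold below are illustrative assumptions, not values from the question:

```python
import numpy as np

# Hypothetical sigmoid outputs for the 3 classes. Each score is
# independent, so they need not sum to 1 (here they sum to 1.3).
probs = np.array([0.40, 0.70, 0.20])

THRESHOLD = 0.5  # sensitivity cutoff; tune it on validation data

best = int(np.argmax(probs))
# Predict the top class only if it clears the threshold; otherwise
# predict no class at all, i.e. implicitly "noise".
label = best if probs[best] >= THRESHOLD else None
print(label)  # -> 1
```

With softmax the same three logits would be renormalized to sum to exactly 1, which is what makes a separate "none of the above" outcome impossible without the threshold trick.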