
Can a Neural Network model use Weighted Mean (Sum) Squared Error as its loss function?

I am a newbie in this field of study, and this is probably a pretty silly question. I want to build a normal ANN, but I am not sure whether I can use a weighted mean squared error as the loss function. Suppose we do not treat each sample equally, that is, we care about prediction precision for some categories of samples more than for others; then we want to form a weighted loss function. Say we have a categorical feature c_i, where i is the index of the sample, and for simplicity assume this feature takes a binary value, either 0 or 1. Then we can form the loss function as

(c_i + 1) * (y_hat_i - y_i)^2

# and take the sum over all i
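Written out with explicit arrays, that sum can be sketched in plain NumPy; the values below are hypothetical, chosen only to illustrate the weighting:

```python
import numpy as np

# Hypothetical example values: c is the binary categorical feature,
# y the target, and y_hat the prediction for each sample i.
c = np.array([0, 1, 1, 0])
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.5, 1.0, 3.0, 5.0])

# Samples with c_i = 1 get weight 2, samples with c_i = 0 get weight 1.
loss = np.sum((c + 1) * (y_hat - y) ** 2)
print(loss)  # 0.25*1 + 1.0*2 + 0.0*2 + 1.0*1 = 3.25
```

Errors on samples with c_i = 1 count double, which is exactly the "care more about some categories" behaviour described above.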

Are there going to be any problems with back-propagation? I don't see any issue with calculating the gradients or updating the weights between layers. And, if there is no issue, how can I program this loss function in Keras? It seems that a loss function only takes two parameters, y_true and y_pred, so how can I plug in the vector c?

There is absolutely nothing wrong with that. Functions can declare the constants within themselves, or even take the constants from an outside scope:

import keras
import keras.backend as K

# Per-sample weights, taken from the enclosing scope.
c = K.constant([c1,c2,c3,c4,...,cn])

def weighted_loss(y_true, y_pred):
    loss = keras.losses.get('mse')  # look up the built-in MSE function
    return c * loss(y_true, y_pred)

Or exactly like yours:

def weighted_loss(y_true, y_pred):
    weighted = (c + 1) * K.square(y_true - y_pred)
    return K.sum(weighted)
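On the back-propagation worry: the gradient of this loss with respect to each prediction is simply 2(c_i + 1)(y_hat_i - y_i), so it is as well-behaved as plain MSE. A quick finite-difference check, using a NumPy stand-in for the Keras function and hypothetical values, confirms this:

```python
import numpy as np

# Hypothetical per-sample weights and data.
c = np.array([0, 1, 1, 0])
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 1.0, 3.0, 5.0])

def weighted_loss(y_pred):
    # NumPy mirror of K.sum((c + 1) * K.square(y_true - y_pred))
    return np.sum((c + 1) * np.square(y_pred - y_true))

# Analytic gradient: d(loss)/d(y_pred_i) = 2 * (c_i + 1) * (y_pred_i - y_true_i)
analytic = 2 * (c + 1) * (y_pred - y_true)

# Central finite-difference approximation of the same gradient.
eps = 1e-6
numeric = np.array([
    (weighted_loss(y_pred + eps * e) - weighted_loss(y_pred - eps * e)) / (2 * eps)
    for e in np.eye(len(y_pred))
])
print(np.allclose(analytic, numeric, atol=1e-4))  # True
```

Since the gradient exists everywhere and is just the usual MSE gradient scaled per sample, standard back-propagation handles it without any special treatment.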

