
How to prevent negative predictions in a Keras custom loss function

I'm using a custom loss function:

from keras import backend as K

def ratio_loss(y, y0):
    return K.mean(K.abs(y - y0) / y)

and get negative predicted values - which in my case doesn't make sense (I use a CNN with regression as the last layer to get the length of an object). I used division in order to penalize more where the true value is relatively small compared to the prediction.
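For intuition, here is a minimal numerical sketch (made-up values, plain numpy) of how dividing by the true value weights the error:

import numpy as np

# made-up values: the same absolute error of 5 costs much more
# when the true length y is small, because the loss divides by y
y_true = np.array([10.0, 150.0])
y_pred = np.array([15.0, 155.0])

per_sample = np.abs(y_true - y_pred) / y_true
print(per_sample)         # [0.5, 0.0333...]
print(per_sample.mean())  # ~0.267, what ratio_loss would return for this batch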

How can I prevent the negative predictions?

This is the model (for now):

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, BatchNormalization, Flatten, Dense

def create_model():
    model = Sequential()
    model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same', input_shape=(128, 128, 1)))
    model.add(Dropout(0.5))

    model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same'))
    model.add(Dropout(0.25))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
    model.add(Dropout(0.25))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
    model.add(Dropout(0.25))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(512, activation='relu'))

    model.add(Dropout(0.15))
    model.add(Dense(1))
    #model.compile(loss=keras.losses.mean_squared_error, optimizer=keras.optimizers.Adadelta(), metrics=[sacc])
    # 'sacc' is a custom metric defined elsewhere in the author's code
    model.compile(loss=ratio_loss, optimizer=keras.optimizers.Adadelta(), metrics=[sacc])
    return model

Thanks, Amir

def ratio_loss(y, y0):
    return K.mean(K.abs(y - y0) / y)

But what is the range of your expected output?

You should probably use an activation function at the end, such as:

  • activation='sigmoid' - from 0 to 1
  • activation='tanh' - from -1 to +1
  • activation='softmax' - if it's a classification problem with only one correct class
  • activation='softplus' - from 0 to +inf
  • etc.

Usage in the last layer:

from keras.layers import Lambda  # needed for the scaling layer below

model.add(Dense(1, activation='sigmoid'))  # output from 0 to 1

# optional: scale from 0 to 200 after using the sigmoid above
model.add(Lambda(lambda x: 200 * x))

Hint: if you're a beginner, avoid using too much "relu"; it often gets stuck at 0 and must be used with carefully selected learning rates.
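For example, a minimal sketch (the learning rate value here is only an assumption, not from the original answer) of compiling with an explicitly chosen learning rate instead of the default:

from keras.optimizers import Adadelta

# sketch only: pass an explicit (assumed) learning rate to the optimizer
# if the 'relu' layers keep collapsing to 0
model.compile(loss=ratio_loss, optimizer=Adadelta(lr=0.5))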

You could continue training your neural network, and hopefully it will learn not to make any prediction below 0 (assuming all of the training data has outputs above 0). You could then add a post-prediction step: if the model makes any prediction below 0, just convert it to 0.
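A minimal sketch of that post-prediction step (assuming x_test holds the inputs you want to predict on):

import numpy as np

# hypothetical post-processing: clip any negative prediction to 0
predictions = model.predict(x_test)
predictions = np.maximum(predictions, 0)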

You could add an activation function as Daniel Möller answered.

That would involve changing

model.add(Dense(1))

to

model.add(Dense(1, activation='softplus'))

since you mentioned in a comment that you wanted the output to be from 0 to ~200. This would guarantee there is no output below 0.
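For reference, softplus(x) = log(1 + exp(x)) is strictly positive for any input, which is why the output can never go below 0 (a quick numerical check, not part of the original answer):

import numpy as np

# softplus is log(1 + e^x): strictly positive even for very negative inputs
x = np.array([-10.0, 0.0, 10.0])
print(np.log1p(np.exp(x)))  # [~4.5e-05, 0.693, ~10.00005]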
