
Custom loss function with gradient

I have created a custom loss function that also computes the mean squared error between the gradients of the true and predicted labels. The function is given below. However, while debugging the code I found that uxp and uxt are lists, not tensors. Am I making a mistake in computing the gradients?

import tensorflow as tf
from tensorflow.keras import backend as K

def custom_mean_squared_error(y_true, y_pred):
    # standard mean squared error
    mse = K.mean(K.square(y_pred - y_true), axis=-1)

    # gradients of the predicted and true labels
    xs = tf.ones_like(y_pred)
    uxp = tf.gradients(y_pred, xs)
    uxt = tf.gradients(y_true, xs)
    grad_mse = K.mean(K.square(uxp - uxt), axis=-1)

    mse1 = mse + grad_mse
    return mse1

Thank you.

Yes, lists cannot be subtracted in Python (try [1,2,3] - [-1,-2,-3]). And yes, tf.gradients returns a list of tensors, one entry per tensor in xs (see the documentation: https://www.tensorflow.org/api_docs/python/tf/gradients ). You can use a list comprehension to implement the subtraction, or simply index into the returned lists, as in the sketch below.
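A minimal sketch of that idea, assuming a TF1-style graph setting (eager execution disabled) and that y_true and y_pred both depend on a shared input tensor; the placeholder x and the names grads_pred / grads_true are illustrative and not part of the original code. The sketch also differentiates with respect to the actual input tensor rather than a freshly created tf.ones_like tensor, because tf.gradients returns None entries for xs that the outputs do not depend on.

import tensorflow as tf

# tf.gradients only works in graph mode, so disable eager execution for this sketch.
tf.compat.v1.disable_eager_execution()

# Illustrative input and two quantities that both depend on it,
# standing in for y_pred and y_true.
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
y_pred = tf.reduce_sum(tf.square(x), axis=-1)
y_true = tf.reduce_sum(x, axis=-1)

# tf.gradients returns a list with one entry per tensor in `xs`.
grads_pred = tf.gradients(y_pred, [x])
grads_true = tf.gradients(y_true, [x])

# Element-wise subtraction via a list comprehension ...
grad_diff = [gp - gt for gp, gt in zip(grads_pred, grads_true)]

# ... or index into the lists when there is a single input tensor.
grad_mse = tf.reduce_mean(tf.square(grads_pred[0] - grads_true[0]), axis=-1)

Inside an actual Keras loss function the input tensor would have to be obtained from the model itself (for example via model.input); that wiring is outside the scope of this sketch.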
