Error in model.fit() when using custom loss function
I have defined a custom loss function for my model:
def get_loss(y_hat, y):
    loss = tf.keras.losses.BinaryCrossentropy(y_hat, y)  # cross entropy (but no logits)
    y_hat = tf.math.sigmoid(y_hat)
    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum(y - tf.multiply(y_hat, y), [1, 2])
    fp = tf.math.reduce_sum(y_hat - tf.multiply(y_hat, y), [1, 2])
    loss = loss - ((2 * tp) / tf.math.reduce_sum(2 * tp + fp + fn + 1e-10))  # fscore
    return loss
When fitting my model to my training data, I get the following error:
TypeError: Expected float32, got <tensorflow.python.keras.losses.BinaryCrossentropy object at 0x7feca46d0d30> of type 'BinaryCrossentropy' instead.
How can I fix this? I already tried:
loss=tf.int32(tf.keras.losses.BinaryCrossentropy(y_hat,y)
but this spits out another error and does not seem to be the solution I need.
You need to call the instantiated object, rather than passing the inputs as constructor arguments. Like so:
loss = tf.keras.losses.BinaryCrossentropy()(y, y_hat)
Notice the extra set of parentheses: the first pair constructs the loss object, the second calls it on (y_true, y_pred). Or, use the functional form:
loss = tf.keras.losses.binary_crossentropy(y, y_hat)
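Applied to the original function, the fix might look like the sketch below. Two details are assumptions, not part of the answer above: `from_logits=True` (since the original code applies the sigmoid to `y_hat` only after the cross entropy, `y_hat` presumably holds raw logits), and the `(y_true, y_pred)` argument order, which is the Keras convention.

```python
import tensorflow as tf

def get_loss(y_hat, y):
    # Instantiate the loss object, then CALL it on (y_true, y_pred).
    # from_logits=True is assumed because the sigmoid is applied below,
    # after the cross entropy, suggesting y_hat holds raw logits.
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    loss = bce(y, y_hat)
    y_hat = tf.math.sigmoid(y_hat)
    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum(y - tf.multiply(y_hat, y), [1, 2])
    fp = tf.math.reduce_sum(y_hat - tf.multiply(y_hat, y), [1, 2])
    # Subtract a soft F-score (Dice-like) term: better overlap lowers the loss
    loss = loss - ((2 * tp) / tf.math.reduce_sum(2 * tp + fp + fn + 1e-10))
    return loss
```

With well-matched logits the F-score term dominates and the combined loss goes negative, which is fine for gradient descent since only relative changes matter.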