How can I express this custom loss function in TensorFlow?
I've got a loss function that fulfills my needs, but it is only in PyTorch. I need to implement it in my TensorFlow code; most of it can trivially be "translated", but I am stuck on one particular line:
y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max # to be "1" after sigmoid
You can see the whole code below; it is pretty straightforward except for that line:
import torch
import torch.nn.functional as F

def get_loss(y_hat, y):
    # No loss on diagonal
    B, N, _ = y_hat.shape
    y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid
    # calc loss
    loss = F.binary_cross_entropy_with_logits(y_hat, y)  # cross entropy
    y_hat = torch.sigmoid(y_hat)
    tp = (y_hat * y).sum(dim=(1, 2))
    fn = ((1. - y_hat) * y).sum(dim=(1, 2))
    fp = (y_hat * (1. - y)).sum(dim=(1, 2))
    loss = loss - ((2 * tp) / (2 * tp + fp + fn + 1e-10)).sum()  # fscore
    return loss
So far I have come up with the following:
import tensorflow as tf

def get_loss(y_hat, y):
    # cross entropy; note the (y_true, y_pred) argument order and from_logits
    loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)(y, y_hat)
    y_hat = tf.math.sigmoid(y_hat)
    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum(y - tf.multiply(y_hat, y), [1, 2])
    fp = tf.math.reduce_sum(y_hat - tf.multiply(y_hat, y), [1, 2])
    loss = loss - tf.math.reduce_sum((2 * tp) / (2 * tp + fp + fn + 1e-10))  # fscore
    return loss
So my questions boil down to:
- What does torch.finfo() do, and how can I express it in TensorFlow?
- Does y_hat.dtype just return the data type?

.finfo() provides a neat way to get machine limits for floating-point types. This function is available in NumPy, Torch, as well as TensorFlow (experimental). .finfo().max returns the largest number representable as that dtype.
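As a quick illustration of the NumPy variant of .finfo() (purely for demonstration, not from the original answer):

```python
import numpy as np

# Machine limits for float32: .max is the largest representable value,
# .eps the smallest increment distinguishable from 1.0.
info = np.finfo(np.float32)
print(info.max)  # 3.4028235e+38
print(info.eps)  # 1.1920929e-07
```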
NOTE: There is also a .iinfo() for integer types.
Here are a few examples of finfo and iinfo in action.
import torch

print('FLOATS')
print('float16',torch.finfo(torch.float16).max)
print('float32',torch.finfo(torch.float32).max)
print('float64',torch.finfo(torch.float64).max)
print('')
print('INTEGERS')
print('int16',torch.iinfo(torch.int16).max)
print('int32',torch.iinfo(torch.int32).max)
print('int64',torch.iinfo(torch.int64).max)
FLOATS
float16 65504.0
float32 3.4028234663852886e+38
float64 1.7976931348623157e+308
INTEGERS
int16 32767
int32 2147483647
int64 9223372036854775807
If you want to implement this in TensorFlow, you can use tf.experimental.numpy.finfo to solve this.
import tensorflow as tf

print(tf.experimental.numpy.finfo(tf.float32))
print('Max ->', tf.experimental.numpy.finfo(tf.float32).max)  # <---- THIS IS WHAT YOU WANT
Machine parameters for float32
---------------------------------------------------------------
precision = 6 resolution = 1.0000000e-06
machep = -23 eps = 1.1920929e-07
negep = -24 epsneg = 5.9604645e-08
minexp = -126 tiny = 1.1754944e-38
maxexp = 128 max = 3.4028235e+38
nexp = 8 min = -max
---------------------------------------------------------------
Max -> 3.4028235e+38
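As for the original line itself: TensorFlow tensors are immutable, so the in-place diagonal assignment from PyTorch has no direct equivalent. One way to sketch it (my own suggestion, not part of the answer above) is tf.linalg.set_diag, which returns a copy with each batch matrix's main diagonal replaced:

```python
import tensorflow as tf

# Hypothetical shapes, for illustration only
B, N = 2, 3
y_hat = tf.zeros((B, N, N), dtype=tf.float32)

# Fill every main diagonal with the float32 maximum, mirroring
# y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max
max_val = tf.experimental.numpy.finfo(y_hat.dtype).max
y_hat = tf.linalg.set_diag(y_hat, tf.fill((B, N), max_val))
print(y_hat[0, 0, 0].numpy())  # the float32 max lands on the diagonal
```

After a sigmoid, those diagonal entries saturate to 1, which is what the PyTorch comment ("to be 1 after sigmoid") is after.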
YES. In Torch it would return torch.float32 or something like that; in TensorFlow it would return tf.float32 or something like that.
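A minimal check of the .dtype attribute on the PyTorch side (a throwaway example; the tensor here is just a stand-in for y_hat):

```python
import torch

# .dtype is simply an attribute holding the tensor's data type,
# and it can be passed straight into torch.finfo
y_hat = torch.zeros(2, 3)
print(y_hat.dtype)                    # torch.float32
print(torch.finfo(y_hat.dtype).max)   # 3.4028234663852886e+38
```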