
Custom Loss Function with Keras in TF 2.0

Custom Loss Function in Tensorflow 2.0 Alpha from TF 1.13

I'm trying to use the roc_auc loss function from this library in the model.compile() of TF 2.0. While my implementation does not throw errors, the loss and accuracies don't move.

I first converted the TF 1.0 code to 2.0 using the conversion code suggested by Google.

Then I imported the function from the library and used it in the following way:

model.compile(optimizer='adam',
              loss=roc_auc_loss,
              metrics=['accuracy',acc0, acc1, acc2, acc3, acc4])
Epoch 17/100
100/100 [==============================] - 20s 197ms/step - loss: 469.7043 - accuracy: 0.0000e+00 - acc0: 0.0000e+00 - acc1: 0.0000e+00 - acc2: 0.0000e+00 - acc3: 0.0000e+00 - acc4: 0.0000e+00 - val_loss: 152.2152 - val_accuracy: 0.0000e+00 - val_acc0: 0.0000e+00 - val_acc1: 0.0000e+00 - val_acc2: 0.0000e+00 - val_acc3: 0.0000e+00 - val_acc4: 0.0000e+00
Epoch 18/100
100/100 [==============================] - 20s 198ms/step - loss: 472.0472 - accuracy: 0.0000e+00 - acc0: 0.0000e+00 - acc1: 0.0000e+00 - acc2: 0.0000e+00 - acc3: 0.0000e+00 - acc4: 0.0000e+00 - val_loss: 152.2152 - val_accuracy: 0.0000e+00 - val_acc0: 0.0000e+00 - val_acc1: 0.0000e+00 - val_acc2: 0.0000e+00 - val_acc3: 0.0000e+00 - val_acc4: 0.0000e+00
Epoch 19/100
 78/100 [======================>.......] - ETA: 4s - loss: 467.4657 - accuracy: 0.0000e+00 - acc0: 0.0000e+00 - acc1: 0.0000e+00 - acc2: 0.0000e+00 - acc3: 0.0000e+00 - acc4: 0.0000e+00

I would like to understand what is wrong with Keras in TF 2.0; it's obviously not backpropagating. Thanks.
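For reference, a custom loss only trains the network if it is built entirely from differentiable TensorFlow ops on y_true and y_pred. Below is a minimal sketch of one common differentiable AUC surrogate (a pairwise hinge over positive/negative score differences) for a single-score binary output; this is an illustrative assumption, not the library's roc_auc_loss:

import tensorflow as tf

# Sketch of a pairwise hinge surrogate for ROC AUC (illustrative only,
# not the library's roc_auc_loss). It penalises every positive/negative
# score pair that is ranked the wrong way, using only differentiable ops
# so gradients can flow back through the model.
def pairwise_auc_loss(y_true, y_pred, margin=1.0):
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    y_pred = tf.reshape(y_pred, [-1])
    pos = tf.boolean_mask(y_pred, y_true > 0.5)   # scores of positive samples
    neg = tf.boolean_mask(y_pred, y_true <= 0.5)  # scores of negative samples
    # pos[i] - neg[j] for every positive/negative pair
    diff = tf.expand_dims(pos, 1) - tf.expand_dims(neg, 0)
    # hinge penalty when a negative scores above (or within `margin` of) a positive;
    # tune `margin` to the scale of the model's scores
    return tf.reduce_mean(tf.nn.relu(margin - diff))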

@ruben Can you share standalone code to reproduce the issue? I think we need to check the function definitions. Did you add @tf.function() on top of the function definitions? Thanks!

Please check the following example (simple example from the TF website):

!pip install tensorflow==2.0.0-beta1

import tensorflow as tf
from tensorflow import keras 
import tensorflow.keras.backend as K

# load mnist data
mnist=tf.keras.datasets.mnist
(x_train,y_train),(x_test,y_test)=mnist.load_data()
x_train,x_test=x_train/255.0,x_test/255.0

# Custom Metric1 (for example)
@tf.function()
def customMetric1(yTrue,yPred):
    return tf.reduce_mean(yTrue-yPred)

# Custom Metric2 (for example)
@tf.function()
def customMetric2(yTrue, yPred):
  return tf.reduce_mean(tf.square(tf.subtract(yTrue,yPred)))

model=tf.keras.models.Sequential([
                                  tf.keras.layers.Flatten(input_shape=(28,28)),
                                  tf.keras.layers.Dense(128, activation='relu'),
                                  tf.keras.layers.Dropout(0.2),
                                  tf.keras.layers.Dense(10,activation='softmax')
])

# Compile the model with the custom metrics
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy', customMetric1, customMetric2])

# Fit and evaluate model
model.fit(x_train,y_train,epochs=5)
model.evaluate(x_test,y_test)

Output

WARNING: Logging before flag parsing goes to stderr.
W0711 23:57:16.453042 139687207184256 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support..wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
Train on 60000 samples

Epoch 1/5 60000/60000 [==============================] - 5s 87us/sample - loss: 0.2983 - accuracy: 0.9133 - customMetric1: 4.3539 - customMetric2: 27.3769
Epoch 2/5 60000/60000 [==============================] - 5s 83us/sample - loss: 0.1456 - accuracy: 0.9555 - customMetric1: 4.3539 - customMetric2: 27.3860
Epoch 3/5 60000/60000 [==============================] - 5s 82us/sample - loss: 0.1095 - accuracy: 0.9663 - customMetric1: 4.3539 - customMetric2: 27.3881
Epoch 4/5 60000/60000 [==============================] - 5s 83us/sample - loss: 0.0891 - accuracy: 0.9717 - customMetric1: 4.3539 - customMetric2: 27.3893
Epoch 5/5 60000/60000 [==============================] - 5s 87us/sample - loss: 0.0745 - accuracy: 0.9765 - customMetric1: 4.3539 - customMetric2: 27.3901
10000/10000 [==============================] - 0s 46us/sample - loss: 0.0764 - accuracy: 0.9775 - customMetric1: 4.3429 - customMetric2: 27.3301
[0.07644735965565778, 0.9775, 4.342905, 27.330126]

Edit 1

Say, for example, you want to use customMetric1 as a custom loss function. Then change a couple of things as follows and run the code (the integer labels are cast to float32 so they can be subtracted from the float predictions). Hope this helps.

# Custom Metric1 (for example)
@tf.function()
def customMetric1(yTrue,yPred,name='CustomLoss1'):
  # cast the integer labels to float32 so they can be subtracted from the float predictions
  yTrue = tf.dtypes.cast(yTrue, tf.float32)
  return tf.reduce_mean(yTrue-yPred)

model.compile(optimizer='adam',loss=customMetric1, metrics=['accuracy', customMetric2])
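Then fit and evaluate as before; a short usage sketch reusing the MNIST data from the example above:

# customMetric1 now drives backpropagation as the loss; customMetric2 is still reported as a metric
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)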
