Combining add_loss with keras.losses in multioutput models using intermediate outputs

Previously, in another post (Keras multioutput custom loss with intermediate layers output), I discussed the problem I was having. Finally, that problem was fixed this way:

def MyLoss(true1, true2, out1, out2, out3):
    loss1 = tf.keras.losses.someloss1(out1, true1)
    loss2 = tf.keras.losses.someloss2(out2, true2)
    loss3 = tf.keras.losses.someloss3(out2, out3)
    loss = loss1 + loss2 + loss3
    return loss

input1 = Input(shape=input1_shape)
input2 = Input(shape=input2_shape)

# do not take into account the notation, only the idea
output1 = Submodel1()([input1, input2])
output2 = Submodel2()(output1)
output3 = Submodel3()(output1)

true1 = Input(shape=true1shape)
true2 = Input(shape=true2shape)

model = Model([input1,input2,true1,true2], [output1,output2,output3])
model.add_loss(MyLoss(true1, true2, output1, output2, output3))
model.compile(optimizer='adam', loss=None)

model.fit(x=[input1, input2, true1, true2], y=None, epochs=n_epochs)

In that problem, all the losses I used were Keras losses (i.e. tf.keras.losses.someloss), but now I want to add a couple more losses and combine custom losses with Keras losses. That is, I now have this scheme:

[image: diagram of the loss scheme]

To add these two losses, which are SSIM losses, I have tried this:

def SSIMLoss(y_true, y_pred):
    return 1-tf.reduce_mean(tf.image.ssim(y_true, y_pred, 1.0))

def MyLoss(true1, true2, out1, out2, out3):
    loss1 = tf.keras.losses.someloss1(out1, true1)
    customloss1 = SSIMLoss(out1,true1)
    loss2 = tf.keras.losses.someloss2(out2, true2)
    loss3 = tf.keras.losses.someloss3(out2, out3)
    customloss2 = SSIMLoss(out2,out3)
    loss = loss1 + loss2 + loss3 + customloss1 + customloss2
    return loss

But I get this error:

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

I have tried decorating the function with @tf.function but I get this error:

_SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'input_43:0' shape=(None, 128, 128, 1) dtype=float32>, <tf.Tensor 'conv2d_109/Sigmoid:0' shape=(None, 128, 128, 1) dtype=float32>]

I have found this ( https://github.com/tensorflow/tensorflow/issues/32127 ) about combining Keras losses with add_loss. Maybe this is the problem, but I don't know how to fix it.

I was able to reproduce your above errors in TF 2.3. But in TF 2.4 and nightly TF 2.6 there was no such issue: the model compiled, model.summary() worked, and training with .fit ran as expected; the only problem appeared when I tried to plot the model, which gave another error. However, if eager mode is disabled, there is no issue even with TF 2.3 / 2.4.


Details

In TF 2.3, I can reproduce your issue. To resolve it, just disable eager mode as mentioned above (a minimal sketch follows).
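A minimal sketch of disabling eager execution; the call has to run at the top of the script, before any layers or models are created:

import tensorflow as tf

# Switch the program to graph mode; must be called before the model is built
tf.compat.v1.disable_eager_execution()

print(tf.executing_eagerly())  # False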

In TF 2.4 / TF nightly 2.6, I didn't need to disable eager mode. The model compiled fine and trained as expected. The only issue occurred when I tried to plot the model, which gave the following error:

tf.keras.utils.plot_model(model)
....
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no 
attribute '_keras_history'

This issue is caused by the 1-.. expression in the SSIMLoss method (something similar has been reported before). But again, disabling eager mode resolves it anyway. In general, though, it's better to upgrade to TF 2.4.
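If you want to keep eager mode, one thing worth trying (an assumption on my part, not something I have verified against plot_model) is to write the subtraction with explicit TF ops instead of the Python 1 - ... form, so the whole loss stays a single TF expression:

def SSIMLoss(y_true, y_pred):
    # Same loss as before, but the constant subtraction goes through tf.math.subtract
    ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    return tf.math.subtract(1.0, ssim)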


Code Examples

Here I will show you a dummy example that is probably similar to your training pipeline. In this example, we have one input (28, 28, 3) and three outputs (28, 28, 3).

from tensorflow.keras.layers import *
from tensorflow.keras import Model 
import tensorflow as tf 
import numpy as np

# tf.compat.v1.disable_eager_execution()
print(tf.__version__)          # 2.4.1
print(tf.executing_eagerly())  # True

Custom loss functions.

def SSIMLoss(y_true, y_pred):
    return 1 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, 1.0))

def MyLoss(true1, true2, out1, out2, out3):
    loss1 = tf.keras.losses.cosine_similarity(out1, true1)
    loss2 = tf.keras.losses.cosine_similarity(out2, true2)
    loss3 = tf.keras.losses.cosine_similarity(out2, out3)
    customloss1 = SSIMLoss(true1, out1)
    customloss2 = SSIMLoss(out2, out3)

    loss = loss1 + loss2 + loss3 + customloss1 + customloss2
    return loss

Data

imgA = tf.random.uniform([10, 28, 28, 3], minval=0, maxval=256)
tarA = np.random.randn(10, 28, 28, 3)
tarB = np.random.randn(10, 28, 28, 3)

Model

A model with one input and three outputs.

input  = Input(shape=(28, 28, 3))
middle = Conv2D(16, kernel_size=(3,3), padding='same')(input)

outputA = Dense(3, activation='relu')(middle)
outputB = Dense(3, activation='selu')(middle)
outputC = Dense(3, activation='elu')(middle)

target_inputA = Input(shape=(28, 28, 3))
target_inputB = Input(shape=(28, 28, 3))

model = Model([input, target_inputA, target_inputB], 
              [outputA, outputB, outputC])

model.add_loss(MyLoss(target_inputA, target_inputB, 
                      outputA, outputB, outputC))

# tf.keras.utils.plot_model(model) # disable eager mode 
model.summary()

Compile and Run

model.compile(optimizer='adam', loss=None)
model.fit([imgA, tarA, tarB], steps_per_epoch=5)

5/5 [==============================] - 2s 20ms/step - loss: 1.4338
<tensorflow.python.keras.callbacks.History at 0x7efde188d450>
