
Output multiple losses added by add_loss in Keras

I've investigated the Keras example of a custom loss layer demonstrated by a Variational Autoencoder (VAE). The example has only one loss layer, while the VAE's objective consists of two different parts: reconstruction and KL divergence. However, I'd like to plot/visualize how these two parts evolve during training, so I split the single custom loss into two loss layers:

Keras Example Model:

[Image: Keras example model with a single custom loss layer]

My Model:

[Image: modified model with two separate loss layers]

Unfortunately, Keras outputs only a single loss value for my multi-loss example, as can be seen in my Jupyter Notebook example where I've implemented both approaches. Does someone know how to get the value of each loss that was added by add_loss? And additionally, how does Keras calculate the single loss value, given multiple add_loss calls (mean/sum/...)?
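To make the two parts concrete, here is a minimal NumPy sketch of the two objective terms (names and shapes are illustrative, not taken from the notebook). To my understanding, Keras reduces each added loss to a scalar and reports their sum as the total loss:

```python
import numpy as np

def vae_loss_parts(x, x_hat, z_mean, z_log_var):
    # Reconstruction term: mean squared error over all elements
    reconstruction = np.mean((x - x_hat) ** 2)
    # KL term: divergence of N(z_mean, exp(z_log_var)) from a standard normal
    kl = -0.5 * np.mean(np.sum(1.0 + z_log_var - z_mean ** 2 - np.exp(z_log_var), axis=-1))
    return reconstruction, kl

# Toy batch: 4 samples, 8 features, 2 latent dimensions
x = np.zeros((4, 8))
x_hat = np.full((4, 8), 0.1)
z_mean = np.zeros((4, 2))
z_log_var = np.zeros((4, 2))

rec, kl = vae_loss_parts(x, x_hat, z_mean, z_log_var)
total = rec + kl  # the single scalar reported as "loss"
# rec is about 0.01 (0.1 squared); kl is exactly 0 for a standard-normal posterior
```

Tracking `rec` and `kl` separately is exactly what the answers below achieve inside Keras.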

I'm using Keras version 2.2.4-tf and the solution above didn't work for me. Here is the solution I found (continuing the example of dumkar):

# beta, latent_dim, inputs, outputs, z_mean and z_log_var are assumed
# to be defined elsewhere in the VAE model
from tensorflow.keras import backend as K
from tensorflow.keras.losses import mse

reconstruction_loss = mse(K.flatten(inputs), K.flatten(outputs))
kl_loss = beta * K.mean(-0.5 * 1/latent_dim * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1))

model.add_loss(reconstruction_loss)
model.add_loss(kl_loss)

model.add_metric(kl_loss, name='kl_loss', aggregation='mean')
model.add_metric(reconstruction_loss, name='mse_loss', aggregation='mean')

model.compile(optimizer='adam')

Hope it helps.

This is indeed not supported, and it is currently discussed in various places on the web. A solution is to add your losses again as separate metrics after the compile step (also discussed here).

This results in something like this (specifically for a VAE):

# beta, latent_dim, inputs, outputs, z_mean and z_log_var are assumed
# to be defined elsewhere in the VAE model
from keras import backend as K
from keras.losses import mse

reconstruction_loss = mse(K.flatten(inputs), K.flatten(outputs))
kl_loss = beta * K.mean(-0.5 * 1/latent_dim * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1))

model.add_loss(reconstruction_loss)
model.add_loss(kl_loss)
model.compile(optimizer='adam')

model.metrics_tensors.append(kl_loss)
model.metrics_names.append("kl_loss")

model.metrics_tensors.append(reconstruction_loss)
model.metrics_names.append("mse_loss")

For me this gives an output like this:

Epoch 1/1
252/252 [==============================] - 23s 92ms/step - loss: 0.4336 - kl_loss: 0.0823 - mse_loss: 0.3513 - val_loss: 0.2624 - val_kl_loss: 0.0436 - val_mse_loss: 0.2188
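A quick sanity check on these numbers also answers the second part of the question: the reported total is, up to rounding, the sum of the two parts, which suggests Keras sums the losses registered via add_loss rather than averaging them:

```python
# Values taken from the epoch log above; the totals match the sums to rounding
assert abs(0.0823 + 0.3513 - 0.4336) < 1e-3  # training loss
assert abs(0.0436 + 0.2188 - 0.2624) < 1e-3  # validation loss
```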

It turns out that the answer is not straightforward and, furthermore, Keras does not support this feature out of the box. However, I've implemented a solution where each loss layer outputs its loss and a customized callback function records it after every epoch. The solution for my multi-headed example can be found here: https://gist.github.com/tik0/7c03ad11580ae0d69c326ac70b88f395
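The bookkeeping behind such a callback can be sketched in plain Python (in real code the class would subclass keras.callbacks.Callback and be passed to model.fit; the loss names here are assumptions, not the gist's exact code):

```python
class LossHistory:
    """Records the value of each named loss at the end of every epoch."""

    def __init__(self, loss_names=("kl_loss", "mse_loss")):
        self.history = {name: [] for name in loss_names}

    def on_epoch_end(self, epoch, logs=None):
        # Keras calls this hook with a logs dict of metric name -> value
        logs = logs or {}
        for name in self.history:
            if name in logs:
                self.history[name].append(logs[name])

# Simulate two epochs of Keras invoking the hook
cb = LossHistory()
cb.on_epoch_end(0, {"loss": 0.43, "kl_loss": 0.08, "mse_loss": 0.35})
cb.on_epoch_end(1, {"loss": 0.30, "kl_loss": 0.06, "mse_loss": 0.24})
# cb.history == {"kl_loss": [0.08, 0.06], "mse_loss": [0.35, 0.24]}
```

The recorded lists can then be plotted per epoch to visualize how each objective part evolves.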
