
Keras metric based on output of an intermediate layer

Problem: I want to monitor my model better during training, because in some cases the loss suddenly turns to NaN, and I want to know what the model is doing when that happens. Besides that, I want to check whether a certain layer satisfies a specific condition (its rows and columns should each sum to one).

Approach: Defining a custom metric won't help, since a metric only operates on y_pred and y_true. Maybe there is some complex solution involving building a model within a model and somehow computing a metric on the output of the intermediate layer, but that feels a bit too complex (a rough sketch of that idea follows).
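
For reference, the "model within a model" idea the question alludes to would look roughly like this; 'intermediate_layer' and x_batch are hypothetical placeholders, not names from the question:

from keras.models import Model

# Sub-model exposing the output of one intermediate layer,
# evaluated manually on a batch outside the training loop.
inspector = Model(inputs=model.input,
                  outputs=model.get_layer('intermediate_layer').output)
intermediate_output = inspector.predict(x_batch)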

Solution: The only other thing I can think of is switching to TensorFlow itself, so that I have more control over the training process. Any other ideas?

There are several ways to do this without constructing a callback, depending on how you add your losses.

If you add the loss with model.add_loss, you need to display it through a workaround by appending the metric after the compile step (as discussed here).

This results in something like the following (specifically for a VAE, where one is interested in the kl_loss, which depends on an intermediate layer):

from keras import backend as K
from keras.losses import mse

# Reconstruction term plus the KL divergence of the latent distribution
reconstruction_loss = mse(K.flatten(inputs), K.flatten(outputs))
kl_loss = beta * K.mean(-0.5 / latent_dim * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1))

model.add_loss(reconstruction_loss)
model.add_loss(kl_loss)
model.compile(optimizer='adam')

# Expose the individual loss tensors as named metrics after compiling
# (this attribute exists in Keras with a TF 1.x backend)
model.metrics_tensors.append(kl_loss)
model.metrics_names.append("kl_loss")

model.metrics_tensors.append(reconstruction_loss)
model.metrics_names.append("mse_loss")

For me this gives output like this:

Epoch 1/1
252/252 [==============================] - 23s 92ms/step - loss: 0.4336 - kl_loss: 0.0823 - mse_loss: 0.3513 - val_loss: 0.2624 - val_kl_loss: 0.0436 - val_mse_loss: 0.2188

If you don't use model.add_loss but pass your losses directly in compile, then you need to define a custom metric (similar to a custom loss metric) and pass the metric to the compile step. For the case above:

def customMetric(kl_loss):
    # Close over the kl_loss tensor; y_true and y_pred are required
    # by the metric signature but intentionally ignored.
    def klLoss(y_true, y_pred):
        return kl_loss

    return klLoss

model.compile(..., metrics=[customMetric(kl_loss)])
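
For illustration, here is a minimal sketch of where the kl_loss tensor comes from in this setup; the encoder architecture and sizes are assumptions for the example, not from the original post. z_mean and z_log_var are intermediate layers of the same model being compiled, which is what lets the closed-over tensor be evaluated as a metric (this targets the TF 1.x-era Keras API; in TF 2.x the add_metric route below is more robust):

from keras import backend as K
from keras.layers import Dense, Input
from keras.models import Model

latent_dim = 2
inputs = Input(shape=(784,))
h = Dense(64, activation='relu')(inputs)
z_mean = Dense(latent_dim, name='z_mean')(h)        # intermediate layer
z_log_var = Dense(latent_dim, name='z_log_var')(h)  # intermediate layer
outputs = Dense(784, activation='sigmoid')(h)
model = Model(inputs, outputs)

# The tensor handed to customMetric above
kl_loss = -0.5 * K.mean(K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1))

model.compile(optimizer='adam', loss='mse', metrics=[customMetric(kl_loss)])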

The model.metrics_tensors.append workaround does not work in TensorFlow 2.x.

So if you're using the add_loss method, you can also use the model.add_metric method in Keras / TensorFlow 2.x.

For example, if we want to track the KL loss computed from z_mean and z_log_var (the outputs of an intermediate layer) in a VAE, we can do it this way:

# KL divergence computed from the intermediate-layer outputs z_mean and z_log_var
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5

Then,

# vae_loss is assumed to combine reconstruction_loss and kl_loss,
# e.g. vae_loss = K.mean(reconstruction_loss + kl_loss)
model.add_loss(vae_loss)
model.add_metric(kl_loss, name='kl_loss')
model.add_metric(reconstruction_loss, name='reconstruction_loss')
model.compile(optimizer='adam')

Training then produces output like this:

Epoch 1/50
469/469 [==============================] - 3s 6ms/step - loss: 51.4340 - kl_loss: 4.5296 - reconstruction_loss: 46.9097 - val_loss: 42.0644 - val_kl_loss: 6.0029 - val_reconstruction_loss: 36.0615
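
The same add_metric mechanism also covers the condition from the question (rows and columns of a layer's output summing to one). Below is a minimal, self-contained sketch under stated assumptions: the network, the layer name 'matrix', and the shapes are illustrative, not from the original post, and add_metric with symbolic tensors works in the TF 2.x functional API (it was removed in Keras 3):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

n = 4
inputs = keras.Input(shape=(16,))
h = layers.Dense(n * n, activation='relu')(inputs)
# Hypothetical intermediate layer producing an n-by-n matrix per sample
matrix = layers.Reshape((n, n), name='matrix')(h)
outputs = layers.Dense(1)(layers.Flatten()(matrix))

model = keras.Model(inputs, outputs)

# Mean absolute deviation of the row sums and column sums from 1,
# reported next to the loss in every epoch's progress bar
row_err = tf.reduce_mean(tf.abs(tf.reduce_sum(matrix, axis=2) - 1.0))
col_err = tf.reduce_mean(tf.abs(tf.reduce_sum(matrix, axis=1) - 1.0))
model.add_metric(row_err, name='row_sum_error')
model.add_metric(col_err, name='col_sum_error')

model.compile(optimizer='adam', loss='mse')

If either metric drifts away from zero (or turns to NaN) during training, it shows up immediately in the epoch log, which addresses the monitoring goal from the question.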
