
How to modify a variable inside the loss function in each epoch during training?

I have a custom loss function. In each epoch I would like to either keep or throw away my input matrix randomly:

import random
import tensorflow as tf
from tensorflow.python.keras import backend

def decision(probability):
    return random.random() < probability

def my_throw_loss_in1(y_true, y_pred):
    # in1 is the input matrix defined elsewhere in the model-building code
    if decision(probability=0.5):
        keep_mask = tf.ones_like(in1)
        total_loss = backend.mean(backend.square(y_true - y_pred)) * keep_mask
        print('Input1 is kept')
    else:
        throw_mask = tf.zeros_like(in1)
        total_loss = backend.mean(backend.square(y_true - y_pred)) * throw_mask
        print('Input1 is thrown away')
    return total_loss


model.compile(loss=[my_throw_loss_in1],
              optimizer='Adam',
              metrics=['mae'])

history2 = model.fit([x, y], batch_size=10, epochs=150, validation_split=0.2, shuffle=True)

but this would only set the decision value once, at compile time, rather than re-evaluating it in each epoch. How do I write a loss function whose variable can be modified in each epoch?

Here are some thoughts:

1. My first guess is to write a callback that passes an argument to the loss function, but I have not succeeded so far. Basically, it is not clear to me: when I return a value from a callback, how can I pass that value into the loss function? (See the sketch after this list for one way this is commonly done.)

OR

2. The other way around would be to write the loss function inside a callback, but then what do I pass to the callback as an argument, and how do I compile a model with a loss function defined in a callback?
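For option 1, one common pattern (a minimal sketch, not taken from the question or the linked post) is to store the per-epoch decision in a non-trainable tf.Variable that the loss function closes over, and to update that variable from a callback's on_epoch_begin. The names keep_flag and KeepOrThrowCallback below are illustrative:

import random
import tensorflow as tf

# Non-trainable variable that the loss reads and the callback updates.
keep_flag = tf.Variable(1.0, trainable=False, dtype=tf.float32)

def my_throw_loss_in1(y_true, y_pred):
    # 1.0 keeps the usual MSE for this epoch, 0.0 zeroes the loss out.
    mse = tf.keras.backend.mean(tf.keras.backend.square(y_true - y_pred))
    return mse * keep_flag

class KeepOrThrowCallback(tf.keras.callbacks.Callback):
    def __init__(self, probability=0.5):
        super().__init__()
        self.probability = probability

    def on_epoch_begin(self, epoch, logs=None):
        # Re-draw the keep/throw decision once per epoch and push it into the variable.
        keep_flag.assign(1.0 if random.random() < self.probability else 0.0)

# model.compile(loss=my_throw_loss_in1, optimizer='Adam', metrics=['mae'])
# model.fit([x, y], batch_size=10, epochs=150, callbacks=[KeepOrThrowCallback(0.5)])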

The loss function is based on this post.

Just change your loss function as follows in order for it to be evaluated when fit(*) is called:

import tensorflow as tf

def my_throw_loss_in1(y_true, y_pred):
    # Draw a random number in [0, 1) inside the graph, so it is re-sampled
    # every time the loss is evaluated.
    probability = 0.5
    random_uniform = tf.random.uniform(shape=[], minval=0., maxval=1., dtype=tf.float32)
    # True with the given probability.
    condition = tf.less(random_uniform, probability)
    # Keep (ones) or throw away (zeros) depending on the condition.
    mask = tf.cond(condition, lambda: tf.ones_like(y_true), lambda: tf.zeros_like(y_true))
    # Apply the mask to the squared error before averaging.
    total_loss = tf.keras.backend.mean(tf.keras.backend.square(y_true - y_pred) * mask)
    tf.print(mask)
    return total_loss

First, a random number is generated, and then a condition (the random number is less than the probability you defined) is built from it. Afterwards, tf.cond returns tf.ones_like if the condition is True, otherwise tf.zeros_like. Finally, the mask is applied to your loss. Because the random draw happens inside the graph, a fresh mask is sampled every time the loss is evaluated during fit(*), instead of being fixed once at compile time.
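As a quick usage sketch (assuming the same model, x and y from the question), nothing else in the compile/fit calls needs to change:

model.compile(loss=[my_throw_loss_in1],
              optimizer='Adam',
              metrics=['mae'])

history2 = model.fit([x, y], batch_size=10, epochs=150, validation_split=0.2, shuffle=True)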

