Custom Loss Function Error: ValueError: No gradients provided for any variable (TensorFlow)

I am using a binary cross-entropy model with non-binary Y values and a sigmoid activation layer.

I have created my first custom loss function, but when I execute it I get the error "ValueError: No gradients provided for any variable: [....]".

This is my loss function. It is used for cryptocurrency prediction. The y_true values are the price changes, and the y_pred values are rounded to 0/1 (sigmoid). It penalizes false positives with price_change * 3 and false negatives with price_change. I know my loss function is not like the regular loss functions, but I wouldn't know how to achieve this goal with those functions.

from tensorflow.keras import backend as K

def loss(y_true, y_pred):
    penalizer = 3
    loss_values = []
    # Convert the batch tensors to plain Python lists.
    y_true = y_true.numpy().tolist()
    y_pred = y_pred.numpy().tolist()
    for index in range(len(y_true)):
        pred = round(y_pred[index][0])   # round the sigmoid output to 0 or 1
        lookup = y_true[index][0]        # the price change for this sample
        if pred == 1:
            if lookup > 0:
                loss_values.append(0.0)                      # true positive: no penalty
            else:
                loss_values.append(lookup * penalizer * -1)  # false positive: |price_change| * 3
        else:
            if lookup == 0:
                loss_values.append(0.0)
            elif lookup > 0:
                loss_values.append(lookup)                   # false negative: price_change
            else:
                loss_values.append(0.0)                      # true negative: no penalty
    loss_values = K.constant(loss_values)
    return loss_values

And this is the tensor that the loss function returns:

tf.Tensor(
[ 0.          0.          0.          0.          2.76        0.
  2.16        3.75        0.          2.04        0.          0.03
  0.          0.          0.          2.8799999   1.41        0.
  0.          1.11        0.          2.79        1.2         0.
  0.69        1.92        0.          8.64        0.          6.2999997
  0.          0.          1.05        0.          4.08        0.84000003
  0.          0.          5.43        8.16        0.          0.
  0.6         3.87        0.          0.75        3.72        0.35999998
  1.74        8.07       13.92        1.74        4.41        0.
  1.23        0.          2.76        7.68        0.          0.63
  4.4700003   4.29        0.         10.59      ], shape=(64,), dtype=float32)

The error message:

Traceback (most recent call last):
  File "/vserver/storages///packages//trader/classes/neuralnet/numbers/categorical.py", line 1356, in <module>
    neuralnet.train(refresh="--refresh" in sys.argv)
  File "/vserver/storages///packages//trader/classes/neuralnet/numbers/categorical.py", line 783, in train
    history = self.model.model.fit(
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1184, in fit
    tmp_logs = self.train_function(iterator)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/engine/training.py", line 853, in train_function
    return step_function(self, iterator)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/engine/training.py", line 842, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/home/administrator/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1286, in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
  File "/home/administrator/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 2849, in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
  File "/home/administrator/venv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3632, in _call_for_each_replica
    return fn(*args, **kwargs)
  File "/home/administrator/venv/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 597, in wrapper
    return func(*args, **kwargs)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/engine/training.py", line 835, in run_step
    outputs = model.train_step(data)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/engine/training.py", line 791, in train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py", line 522, in minimize
    return self.apply_gradients(grads_and_vars, name=name)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py", line 622, in apply_gradients
    grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
  File "/home/administrator/venv/lib/python3.8/site-packages/keras/optimizer_v2/utils.py", line 72, in filter_empty_gradients
    raise ValueError("No gradients provided for any variable: %s." %
ValueError: No gradients provided for any variable: ['conv1d/kernel:0', 'conv1d_1/kernel:0', 'conv1d_2/kernel:0', 'conv1d_3/kernel:0', 'lstm/lstm_cell/kernel:0', 'lstm/lstm_cell/recurrent_kernel:0', 'lstm/lstm_cell/bias:0', 'lstm_1/lstm_cell_1/kernel:0', 'lstm_1/lstm_cell_1/recurrent_kernel:0', 'lstm_1/lstm_cell_1/bias:0', 'lstm_2/lstm_cell_2/kernel:0', 'lstm_2/lstm_cell_2/recurrent_kernel:0', 'lstm_2/lstm_cell_2/bias:0', 'lstm_3/lstm_cell_3/kernel:0', 'lstm_3/lstm_cell_3/recurrent_kernel:0', 'lstm_3/lstm_cell_3/bias:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0', 'dense_2/kernel:0', 'dense_2/bias:0'].

Any ideas how to fix this?

Edit: New loss function.

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def loss(y_true, y_pred):
    penalizer = 3
    batch_size = 64
    loss_values = np.zeros(batch_size)
    y_true = y_true.numpy()
    y_pred = y_pred.numpy()
    for index in range(batch_size):
        pred = y_pred[index][0]
        #if pred >= 0.5:
        #   pred = 1
        #else:
        #   pred = 0
        #pred = tf.round(pred)
        # attempted "differentiable" rounding: predictions above ~0.499 map to 1, below to 0
        differentiable_round = tf.maximum(pred - 0.499, 0)
        differentiable_round = differentiable_round * 10000
        pred = tf.minimum(differentiable_round, 1)
        lookup = y_true[index][0]
        if K.equal(pred, 1):
            if K.greater(lookup, 0):
                loss_values[index] = 0.0                      # true positive
            else:
                loss_values[index] = lookup * penalizer * -1  # false positive
        else:
            if K.equal(lookup, 0):
                loss_values[index] = 0.0
            elif K.greater(lookup, 0):
                loss_values[index] = lookup                   # false negative
            else:
                loss_values[index] = 0.0                      # true negative
    loss_values = K.constant(loss_values)
    return loss_values

I think you are running into this issue because round is non-differentiable. You might want to look into differentiable round functions (which are really just smooth approximations of rounding).
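For illustration only (not from the original answer), here is a minimal sketch of such an approximation: a steep sigmoid centred at 0.5 behaves like a soft round, staying close to 0 or 1 while keeping a non-zero gradient everywhere. The name soft_round and the steepness value are arbitrary choices.

import tensorflow as tf

def soft_round(x, steepness=50.0):
    # Smooth approximation of rounding a sigmoid output to 0/1:
    # ~0 below 0.5, ~1 above 0.5, but with a usable gradient, unlike tf.round.
    return tf.sigmoid(steepness * (x - 0.5))

print(soft_round(tf.constant([0.3, 0.49, 0.51, 0.7])))
# -> approximately [0.0000, 0.3775, 0.6225, 1.0000]

Note that an approximation like this only helps if the rest of the loss is also built from TensorFlow operations on y_pred; rebuilding the result from numpy values with K.constant, as in the functions above, disconnects the loss from the graph regardless of how the rounding is done.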

I found correct, differentiable code for the loss function I wanted to use.

import tensorflow as tf
from tensorflow import keras

def loss(y_true, y_pred):
    # Turn the signed price changes into 0/1 labels: 1 if the price went up, else 0.
    y_true_onehot = tf.where(
        tf.greater(y_true, 0.0),
        1.0,
        0.0
    )
    # Binary cross-entropy against the derived labels keeps the loss differentiable.
    loss_values = keras.losses.BinaryCrossentropy()(y_true_onehot, y_pred)
    # Mask: 0 for samples that are already classified correctly, 1 for the rest,
    # so only wrong predictions contribute to the loss.
    mask = tf.where(
        tf.logical_or(
            tf.logical_and(tf.greater(y_true, 0.0), tf.greater_equal(y_pred, 0.5)),
            tf.logical_and(tf.less(y_true, 0.0), tf.less(y_pred, 0.5)),
        ),
        0.0,
        1.0
    )[:, 0]
    loss_values = tf.multiply(loss_values, mask)
    return loss_values
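
For reference, a small usage sketch, not from the original answer; it assumes the loss function defined above and uses made-up example tensors. Because this version only uses TensorFlow ops, it can be passed directly to model.compile(loss=loss) without run_eagerly=True. One caveat: keras.losses.BinaryCrossentropy() with its default reduction returns a single averaged value rather than one value per sample, so if strict per-sample masking is wanted, it can be constructed with reduction=tf.keras.losses.Reduction.NONE.

import tensorflow as tf

# Made-up batch: y_true holds signed price changes, y_pred holds sigmoid outputs.
y_true = tf.constant([[0.9], [-1.2], [1.5], [2.3]])
y_pred = tf.constant([[0.8], [0.7], [0.9], [0.3]])

print(loss(y_true, y_pred))
# Only the misclassified samples (indices 1 and 3 here) end up with a non-zero loss.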
