While training I am getting the following error: "ValueError: No gradients provided for any variable"
I calculated the gradients, but I am still getting the "no gradients" error. I am not able to figure out what I am missing, even after looking at answers to similar errors. I am using TensorFlow 2. The link to my code is https://github.com/Gadamsetty/ML_practise/blob/master/translatron_test.py
The error is as follows:
ValueError: in converted code:
<ipython-input-15-031ef53603dd>:92 train_step *
optimizer.apply_gradients(zip(gradients,variables))
/tensorflow-2.0.0/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py:427 apply_gradients
grads_and_vars = _filter_grads(grads_and_vars)
/tensorflow-2.0.0/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py:1025 _filter_grads
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['encoder_rnn_3/layer0/forward_gru_12/kernel:0', 'encoder_rnn_3/layer0/forward_gru_12/recurrent_kernel:0', 'encoder_rnn_3/layer0/forward_gru_12/bias:0', 'encoder_rnn_3/layer0/backward_gru_12/kernel:0', 'encoder_rnn_3/layer0/backward_gru_12/recurrent_kernel:0', 'encoder_rnn_3/layer0/backward_gru_12/bias:0', 'encoder_rnn_3/layer1/forward_gru_13/kernel:0', 'encoder_rnn_3/layer1/forward_gru_13/recurrent_kernel:0', 'encoder_rnn_3/layer1/forward_gru_13/bias:0', 'encoder_rnn_3/layer1/backward_gru_13/kernel:0', 'encoder_rnn_3/layer1/backward_gru_13/recurrent_kernel:0', 'encoder_rnn_3/layer1/backward_gru_13/bias:0', 'encoder_rnn_3/layer2/forward_gru_14/kernel:0', 'encoder_rnn_3/layer2/forward_gru_14/recurrent_kernel:0', 'encoder_rnn_3/layer2/forward_gru_14/bias:0', 'encoder_rnn_3/layer2/backward_gru_14/kernel:0', 'encoder_rnn_3/layer2/backward_gru_14/recurrent_kernel:0', 'encoder_rnn_3/layer2/backward_gru_14/bias:0', 'encoder_rnn_3/layer3/forward_gru_15/kernel:0', 'encoder_rnn_3/layer3/forward_gru_15/recurrent_kernel:0', 'encoder_rnn_3/layer3/forward_gru_15/bias:0', 'encoder_rnn_3/layer3/backward_gru_15/kernel:0', 'encoder_rnn_3/layer3/backward_gru_15/recurrent_kernel:0', 'encoder_rnn_3/layer3/backward_gru_15/bias:0', 'decoder_rnn_3/bidirectional_6/forward_decoder_layer1/kernel:0', 'decoder_rnn_3/bidirectional_6/forward_decoder_layer1/recurrent_kernel:0', 'decoder_rnn_3/bidirectional_6/forward_decoder_layer1/bias:0', 'decoder_...
Since you are not doing any operation on the gradients, you can try using optimizer.minimize(loss) directly rather than first computing the gradients and then applying them separately (lines 159-161).
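To illustrate the two approaches, here is a minimal sketch in TensorFlow 2. It uses a toy Dense model and made-up data rather than the code from the linked repository; the key point is that gradients must be computed via a GradientTape that actually records the forward pass, or you can hand a zero-argument loss callable to optimizer.minimize() and let it do both steps:

```python
import tensorflow as tf

# Toy model and data, purely illustrative (not from the original repo).
model = tf.keras.layers.Dense(1)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

x = tf.constant([[1.0], [2.0]])
y = tf.constant([[2.0], [4.0]])

# Option 1: compute gradients inside a GradientTape, then apply them.
# The forward pass and the loss must both happen inside the tape context,
# otherwise tape.gradient() returns None for every variable and
# apply_gradients() raises "No gradients provided for any variable".
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))

# Option 2: let the optimizer do both steps. In TF2, minimize() takes the
# loss as a zero-argument callable plus the list of variables to update.
loss_fn = lambda: tf.reduce_mean(tf.square(model(x) - y))
optimizer.minimize(loss_fn, var_list=model.trainable_variables)
```

Note that in TF2 eager mode, minimize() needs the loss as a callable so it can re-run the forward pass under its own tape; passing an already-computed loss tensor will not work.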