Conversion from tf.gradients() to tf.GradientTape() returns None
I'm migrating some TF1 code to TF2. For the full code, you may check here, lines [155-176]. There is a line in TF1 that gets the gradients given a loss (a float value) and an (m, n) tensor.

Edit: the problem persists.

Note: the TF2 code should be compatible and should work inside a tf.function.
g = tf.gradients(-loss, f) # loss being a float and f being a (m, n) tensor
k = -f_pol / (f + eps) # f_pol another (m, n) tensor and eps a float
k_dot_g = tf.reduce_sum(k * g, axis=-1)
adj = tf.maximum(
    0.0,
    (tf.reduce_sum(k * g, axis=-1) - delta)
    / (tf.reduce_sum(tf.square(k), axis=-1) + eps),
)
g = g - tf.reshape(adj, [nenvs * nsteps, 1]) * k
grads_f = -g / (nenvs * nsteps)
grads_policy = tf.gradients(f, params, grads_f) # params being the model parameters
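From what I can tell from the docs, the grad_ys argument in that last line should map to the output_gradients argument of tape.gradient(), and calling gradient() twice requires a persistent tape. Below is a sketch of the translation I'm aiming for, with toy stand-ins (tf.tanh / tf.sigmoid, the shapes, and the scalar values are all made up; in my code these come from calculate_f() and friends):

import tensorflow as tf

# Toy stand-ins for my real model: params are the trainable parameters.
params = [tf.Variable(tf.random.normal((4, 3)))]
eps, delta, nenvs, nsteps = 1e-6, 1.0, 2, 2  # hypothetical scalars

with tf.GradientTape(persistent=True) as tape:
    f = tf.tanh(params[0])         # (m, n) tensor produced inside the tape
    f_pol = tf.sigmoid(params[0])  # another (m, n) tensor
    loss = tf.reduce_sum(tf.square(f))
    neg_loss = -loss               # keep the negation on the tape as well

g = tape.gradient(neg_loss, f)     # TF1: g = tf.gradients(-loss, f)
k = -f_pol / (f + eps)
adj = tf.maximum(
    0.0,
    (tf.reduce_sum(k * g, axis=-1) - delta)
    / (tf.reduce_sum(tf.square(k), axis=-1) + eps),
)
g = g - tf.reshape(adj, [nenvs * nsteps, 1]) * k
grads_f = -g / (nenvs * nsteps)
# TF1's grad_ys maps to output_gradients here:
grads_policy = tape.gradient(f, params, output_gradients=grads_f)
del tape  # release the persistent tape explicitly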
In the TF2 code, I'm trying:
with tf.GradientTape() as tape:
    f = calculate_f()
    f_pol = calculate_f_pol()
    others = do_further_calculations()
    loss = calculate_loss()
g = tape.gradient(-loss, f)
However, I keep getting g = [None], whether I use tape.watch(f), create a tf.Variable with the value of f, or even use tf.gradients() inside a tf.function (because otherwise it complains).
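For what it's worth, this toy version of the same pattern does return a gradient for me, and as far as I can tell, moving the neg_loss = -loss line outside the with block is enough to get None again, since the tape never records that op (the tensors here are made up, not my real calculate_* functions):

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.GradientTape() as tape:
    tape.watch(x)                  # x is a constant, so watch it explicitly
    f = tf.sin(x)                  # intermediate (m, n) tensor
    loss = tf.reduce_sum(tf.square(f))
    neg_loss = -loss               # the negation is recorded by the tape

g = tape.gradient(neg_loss, f)     # gradient w.r.t. an intermediate tensor
print(g)                           # a (2, 2) tensor (-2 * f), not None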
It is very likely one of the following cases: for example, is it even possible to differentiate a tf.Variable inside a function decorated by @tf.function?
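To test that case in isolation, I'm using this toy check of differentiating a tf.Variable inside a @tf.function (the variable and loss are made up):

import tensorflow as tf

v = tf.Variable([[1.0, 2.0], [3.0, 4.0]])

@tf.function
def grad_step():
    with tf.GradientTape() as tape:
        loss = tf.reduce_sum(tf.square(v))
    # trainable variables are watched automatically
    return tape.gradient(loss, v)

print(grad_step())  # 2 * v, not None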