
Is the grad parameter of custom gradient ops in Tensorflow always a matrix of ones?

As far as I know, there are two ways of creating custom gradient ops, as follows:

import tensorflow as tf

@tf.RegisterGradient("CustomGrad")
def _custom_grad(op, grad):
    # Pass the incoming gradient through unchanged.
    return grad
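On its own, tf.RegisterGradient only registers the function under a name; to actually route an op's backward pass through it, you combine it with gradient_override_map. A minimal sketch, assuming the TF 1.x graph-mode API (the use of tf.identity here is just illustrative):

g = tf.Graph()
with g.as_default():
    x = tf.constant([1.0, -2.0, 3.0])
    # Within this scope, the backward pass of Identity uses _custom_grad.
    with g.gradient_override_map({"Identity": "CustomGrad"}):
        y = tf.identity(x)
    dx = tf.gradients(tf.reduce_sum(y), x)[0]

with tf.Session(graph=g) as sess:
    print(sess.run(dx))  # [1. 1. 1.]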

and

from tensorflow.python.framework import function

@function.Defun(tf.float32, tf.float32)
def bprop(op, grad):
    # Straight-through: return the incoming gradient unchanged.
    return grad

@function.Defun(tf.float32, grad_func=bprop)
def fprop(W):
    W = tf.sign(W)
    return W
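Calling fprop then behaves as tf.sign on the forward pass while the gradient passes straight through, which is the usual straight-through-estimator pattern. A quick check, under the same TF 1.x assumptions:

W = tf.constant([0.5, -1.5, 2.0])
out = fprop(W)
dW = tf.gradients(tf.reduce_sum(out), W)[0]

with tf.Session() as sess:
    print(sess.run(out))  # [ 1. -1.  1.]
    print(sess.run(dW))   # [1. 1. 1.] -- not the (zero) gradient of tf.sign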

It appears to me that, regardless of what the forward pass computes, the grad parameter of the custom gradient op is always a matrix of ones. I think that kind of makes sense, because you can then use the custom gradient op to let the gradient pass through unchanged. However, I need confirmation.
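One way to check this directly is to log the incoming gradient instead of silently passing it through. A sketch under the same TF 1.x assumptions (the name "InspectGrad" is just illustrative; it would be attached to an op via gradient_override_map as above):

@tf.RegisterGradient("InspectGrad")
def _inspect_grad(op, grad):
    # tf.Print logs the tensor's value whenever the backward pass runs.
    return tf.Print(grad, [grad], message="incoming grad: ")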

Can someone confirm or correct this?

grad is the gradient coming in from the next op in the graph, i.e. whatever follows your op with the custom gradient. It is not affected by the forward pass of your op. If you change what happens after the op, grad can change.
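For instance (a minimal sketch, again assuming the TF 1.x graph-mode API and reusing the CustomGrad registration from above): grad is all ones only because the derivative of reduce_sum with respect to each element of y is 1; scaling the output downstream changes it.

x = tf.constant([1.0, 2.0])
with tf.get_default_graph().gradient_override_map({"Identity": "CustomGrad"}):
    y = tf.identity(x)

g1 = tf.gradients(tf.reduce_sum(y), x)[0]        # downstream is a plain sum
g2 = tf.gradients(tf.reduce_sum(3.0 * y), x)[0]  # downstream scales by 3

with tf.Session() as sess:
    print(sess.run(g1))  # [1. 1.]
    print(sess.run(g2))  # [3. 3.] -- grad tracks the ops *after* yours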
