
L2 matrix rowwise normalization gradient

I'm trying to implement an L2 normalization layer for a convolutional neural network, and I'm stuck on the backward pass:

def forward(self, inputs):
    x, = inputs
    # row-wise L2 norms, shaped (N, 1) so the division broadcasts over columns
    self._norm = np.expand_dims(np.linalg.norm(x, ord=2, axis=1), axis=1)
    z = np.divide(x, self._norm)
    return z,

def backward(self, inputs, grad_outputs):
    x, = inputs
    gz, = grad_outputs
    gx = None  # how to compute the gradient here?
    return gx,

How do I calculate gx? My first guess was

gx = - gz * x / self._norm**2

But this seems to be wrong.
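One way to see the right formula: write n = \lVert x \rVert_2 for a single row, so that z_j = x_j / n. The quotient rule gives

    \frac{\partial z_j}{\partial x_k} = \frac{\delta_{jk}}{n} - \frac{x_j x_k}{n^3}

and summing against the incoming gradient,

    gx_k = \sum_j gz_j \, \frac{\partial z_j}{\partial x_k} = \frac{gz_k}{n} - \frac{x_k \sum_j gz_j x_j}{n^3}

The first guess above is missing the gz_k / n term from the numerator, and the second term should involve x_k (gz \cdot x) rather than gz_k x_k.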

The correct answer keeps both quotient-rule terms, since self._norm itself depends on x:

    gx = np.divide(gz, self._norm) - x * np.sum(gz * x, axis=1, keepdims=True) / self._norm**3
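
A quick finite-difference check confirms the formula. The sketch below is self-contained NumPy; the standalone forward/backward helpers and the (4, 5) test shape are illustrative, not from the original code:

    import numpy as np

    def forward(x):
        # row-wise L2 normalization, matching the forward pass above
        norm = np.expand_dims(np.linalg.norm(x, ord=2, axis=1), axis=1)
        return x / norm, norm

    def backward(x, norm, gz):
        # both quotient-rule terms of the gradient
        return gz / norm - x * np.sum(gz * x, axis=1, keepdims=True) / norm**3

    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 5))
    gz = rng.standard_normal((4, 5))

    z, norm = forward(x)
    gx = backward(x, norm, gz)

    # central differences on the scalar loss sum(gz * z)
    eps = 1e-6
    gx_num = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            xp = x.copy(); xp[i, j] += eps
            xm = x.copy(); xm[i, j] -= eps
            gx_num[i, j] = (np.sum(gz * forward(xp)[0]) - np.sum(gz * forward(xm)[0])) / (2 * eps)

    print(np.max(np.abs(gx - gx_num)))  # should be on the order of 1e-9

The analytic and numeric gradients agreeing to within floating-point noise is a good sign the backward pass is implemented correctly.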
