
Why does a model with one GRU layer return zero gradients?

I'm trying to compare two models in order to understand how the gradients behave.

import torch
import torch.nn as nn
import torchinfo

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()        
        self.Identity = nn.Identity()
        self.GRU      = nn.GRU(input_size=3, hidden_size=32, num_layers=2, batch_first=True)
        self.fc       = nn.Linear(32, 5)
        
    def forward(self, input_series):
                
        self.Identity(input_series)
        
        output, h = self.GRU(input_series)                
        output    = output[:,  -1, :]       # get last state                        
        output    = self.fc(output) 
        output    = output.view(-1, 5, 1)   # reorganize output
                        
        return output
    
    
class SecondModel(nn.Module):
    def __init__(self):
        super(SecondModel, self).__init__()        
        self.GRU      = nn.GRU(input_size=3, hidden_size=32, num_layers=2, batch_first=True)        
        
    def forward(self, input_series):
                
        output, h = self.GRU(input_series)                                        
        return output

Checking the gradients of the first model gives True (zero gradients):

model = MyModel()
x     = torch.rand([2, 10, 3])
y     = model(x)
y.retain_grad()  
y[:, -1].sum().backward()
print(torch.allclose(y.grad[:, :-1], torch.tensor(0.)))  # gradients w.r.t previous outputs are zeroes

Checking the gradients of the second model also gives True (zero gradients):

model = SecondModel()
x     = torch.rand([2, 10, 3])
y     = model(x)
y.retain_grad()  
y[:, -1].sum().backward()
print(torch.allclose(y.grad[:, :-1], torch.tensor(0.)))  # gradients w.r.t previous outputs are zeroes

According to the answer here:

Does a linear layer after a GRU preserve the sequence output order?

the second model (with only a GRU layer) should give non-zero gradients.

  1. What am I missing?
  2. When do we get zero or non-zero gradients?

The values of y.grad[:, :-1] shouldn't be zero in theory, but here they are, because y[:, :-1] does not seem to refer to the tensors that were actually used to compute y[:, -1] in the GRU implementation. For example, a simple 1-layer GRU implementation looks like:

import torch
import torch.nn as nn

class GRU(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lin_r = nn.Linear(input_size + hidden_size, hidden_size)
        self.lin_z = nn.Linear(input_size + hidden_size, hidden_size)
        self.lin_in = nn.Linear(input_size, hidden_size)
        self.lin_hn = nn.Linear(hidden_size, hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x):
        bsz, len_, in_ = x.shape
        h = torch.zeros([bsz, self.hidden_size])
        hs = []
        for i in range(len_):
            r = self.lin_r(torch.cat([x[:, i], h], dim=-1)).sigmoid()
            z = self.lin_z(torch.cat([x[:, i], h], dim=-1)).sigmoid()
            n = (self.lin_in(x[:, i]) + r * self.lin_hn(h)).tanh()
            h = (1.-z)*n + z*h
            hs.append(h)

        # Return the output both as a single tensor and as a list of
        # tensors actually used in computing the hidden vectors
        return torch.stack(hs, dim=1), hs

Then, we have:

model = GRU(input_size=3, hidden_size=32)
x = torch.rand([2, 10, 3])
y, hs = model(x)
y.retain_grad()
for h in hs:
    h.retain_grad()
y[:, -1].sum().backward()
print(torch.allclose(y.grad[:, -1], torch.tensor(0.)))  # False, as expected (sanity check)
print(torch.allclose(y.grad[:, :-1], torch.tensor(0.)))  # True, unexpected
print(any(torch.allclose(h.grad, torch.tensor(0.)) for h in hs))  # False, as expected

It looks like PyTorch computed the gradients w.r.t. all the tensors in hs, but not the gradients w.r.t. y.
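To see why, here is a minimal self-contained sketch (the tensors a, b, y below are purely illustrative, not part of the models above) of what happens when you slice the result of torch.stack, which is exactly what the toy GRU above does with torch.stack(hs, dim=1): the slice y[0] is not an ancestor of y[1] in the autograd graph, so its retained gradient stays zero, even though the tensor it copies still receives a gradient.

import torch

a = torch.rand(3, requires_grad=True)
b = 2 * a                        # b depends on a
y = torch.stack([a, b], dim=0)   # stack copies a and b into a brand-new tensor
y.retain_grad()
y[1].sum().backward()

print(y.grad)   # row 0 is all zeros: y[0] was not used to compute y[1]
print(a.grad)   # tensor([2., 2., 2.]): a itself still receives a gradient through b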

So, to answer your questions:

  1. I don't think you're missing anything. The linked answer is not quite correct, because it wrongly assumed PyTorch would compute y.grad as expected.
  2. The theory given as a comment in the linked answer is still correct, but it isn't very complete: the gradient is always zero if the input doesn't matter (see the sanity check after this list).
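For example, one way to confirm that the earlier inputs do matter for y[:, -1] is to look at the gradient w.r.t. the input x instead of w.r.t. y (a quick sketch reusing SecondModel from the question):

model = SecondModel()
x     = torch.rand([2, 10, 3], requires_grad=True)
y     = model(x)
y[:, -1].sum().backward()
# The last output depends on every input step through the recurrence,
# so the gradients w.r.t. the earlier inputs are non-zero.
print(torch.allclose(x.grad[:, :-1], torch.tensor(0.)))  # False: earlier inputs do matter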
