
How to correctly define weights and biases in a PyTorch neural network to get output of correct shape?

I am trying to pass data of shapes [70,1], [70,1,1], and [70,1] into a neural network of linear layers to which I have assigned my own weights and biases. I expect an output of shape [70,1], but I keep getting the following error:

RuntimeError                              Traceback (most recent call last)
Input In [78], in <cell line: 1>()
----> 1 output = net(s, h, ep)

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

Input In [68], in NeuralNetHardeningModel.forward(self, s, h, ep)
    101     y = torch.stack((s_eval, h_eval, ep_eval), 1)
    103     print(y.shape, 'y')
--> 105     y1 = self.nn(y)
    107     print(y1.shape, 'y1')
    109 return y1

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/container.py:139, in Sequential.forward(self, input)
    137 def forward(self, input):
    138     for module in self:
--> 139         input = module(input)
    140     return input

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/anaconda3/lib/python3.9/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
    113 def forward(self, input: Tensor) -> Tensor:
--> 114     return F.linear(input, self.weight, self.bias)

RuntimeError: mat2 must be a matrix, got 1-D tensor

After some inspection I could not figure out how to fix this error, but I suspect it has something to do with the way I assign the weights to each layer of the neural network.
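For reference, the same error can be reproduced in isolation by giving a single nn.Linear layer a 1-D weight (a minimal sketch, not my actual model):

import torch
import torch.nn as nn

layer = nn.Linear(3, 1)
with torch.no_grad():
    # deliberately wrong: a 1-D weight of shape [3] instead of [1, 3]
    layer.weight = nn.Parameter(torch.randn(3))

x = torch.randn(70, 3)
layer(x)  # RuntimeError: mat2 must be a matrix, got 1-D tensor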

Here is the code I use to define my PyTorch neural network:

# define the NN
def __init__(self, weight, bias, weight_last, bias_last):

    # weight.shape = [3,3,3]
    # bias.shape = [3,3]
    # weight_last.shape = [3], last layer
    # bias_last.shape = [1], last layer

    super(NeuralNetHardeningModel, self).__init__()

    self.weight = weight
    self.bias = bias

    self.weight_last = weight_last
    self.bias_last = bias_last

    self.nn = nn.Sequential(
        nn.Linear(3, 3),
        nn.ReLU(),
        nn.Linear(3, 3),
        nn.ReLU(),
        nn.Linear(3, 3),
        nn.ReLU(),
        nn.Linear(3, 1)
    )

    # copy the externally supplied weights/biases into each Linear layer
    if len(weight.shape) == 3:
        with torch.no_grad():
            self.nn[0].weight = nn.Parameter(weight[0])
            self.nn[0].bias = nn.Parameter(bias[0])

            self.nn[2].weight = nn.Parameter(weight[1])
            self.nn[2].bias = nn.Parameter(bias[1])

            self.nn[4].weight = nn.Parameter(weight[2])
            self.nn[4].bias = nn.Parameter(bias[2])

            self.nn[6].weight = nn.Parameter(weight_last)
            self.nn[6].bias = nn.Parameter(bias_last)
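For comparison, these are the parameter shapes PyTorch itself creates for the layers above, which any manually assigned tensors need to match:

import torch.nn as nn

print(nn.Linear(3, 3).weight.shape)  # torch.Size([3, 3])
print(nn.Linear(3, 1).weight.shape)  # torch.Size([1, 3]) -- [out_features, in_features]
print(nn.Linear(3, 1).bias.shape)    # torch.Size([1])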

Here is the code I use to define the forward pass of the PyTorch neural network:

# forward method for the NN
def forward(self, a, b, c):

    for i in range(a.shape[0]):

        a_eval = torch.flatten(a)  # a: [70,1]   -> a_eval: [70]
        b_eval = torch.flatten(b)  # b: [70,1,1] -> b_eval: [70]
        c_eval = torch.flatten(c)  # c: [70,1]   -> c_eval: [70]

        y = torch.stack((a_eval, b_eval, c_eval), 1)  # y: [70, 3]

        y1 = self.nn(y)

    return y1
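In case it helps, this is how the shapes combine before reaching self.nn (a standalone sketch with random tensors of the same shapes):

import torch

a = torch.randn(70, 1)
b = torch.randn(70, 1, 1)
c = torch.randn(70, 1)

# flattening gives three [70] tensors; stacking along dim 1 gives [70, 3]
y = torch.stack((torch.flatten(a), torch.flatten(b), torch.flatten(c)), 1)
print(y.shape)  # torch.Size([70, 3])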

Apologies for the long post, and many thanks for any help.

The shape of weight_last must be [1,3], not just [3], to prevent the matrix multiplication error: nn.Linear(3, 1) stores its weight as [out_features, in_features], i.e. [1, 3].
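A minimal sketch of the fix, assuming weight_last and bias_last keep the shapes from the question ([3] and [1] respectively):

import torch
import torch.nn as nn

weight_last = torch.randn(3)  # 1-D, as in the question
bias_last = torch.randn(1)

layer = nn.Linear(3, 1)
with torch.no_grad():
    # reshape the weight to [out_features, in_features] = [1, 3]
    layer.weight = nn.Parameter(weight_last.reshape(1, 3))
    layer.bias = nn.Parameter(bias_last)

x = torch.randn(70, 3)
print(layer(x).shape)  # torch.Size([70, 1])

With this change the last layer of the network in the question produces the expected [70, 1] output.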
