
Getting an error in Activation function "TypeError: 'Tensor' object is not callable"

I am trying to use the ReLU activation function in a PyTorch LSTM but I am getting the error "TypeError: 'Tensor' object is not callable". Any guidance or help? Can I use a different activation function in forward propagation? I am using the same activation function in the hidden layer and in forward propagation. Your friendly comments would be very helpful.

import torch
import torch.nn as nn

class LSTM(nn.Module):
  def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
    super().__init__()

    self.hidden_layer_size = hidden_layer_size
    self.lstm = nn.LSTM(input_size, hidden_layer_size)
    self.relu = nn.functional.relu(torch.FloatTensor(hidden_layer_size), torch.FloatTensor(output_size))
    self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size),
                        torch.zeros(1, 1, self.hidden_layer_size))

  def forward(self, input_seq):
    lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
    predictions = self.relu(lstm_out.view(len(input_seq), -1))
    return predictions[-1]

model = LSTM()
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

epochs = 150

for i in range(epochs):
  for seq, labels in train_inout_seq:
    optimizer.zero_grad()
    model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
                         torch.zeros(1, 1, model.hidden_layer_size))
    y_pred = model(seq)

    single_loss = loss_function(y_pred, labels)
    single_loss.backward()
    optimizer.step()

  if i % 25 == 1:
    print(f'epoch: {i:3} loss: {single_loss.item():10.8f}')

print(f'epoch: {i:3} loss: {single_loss.item():10.10f}')

After that I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-108-5fcfb471ed9a> in <module>
      6     model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
      7                          torch.zeros(1, 1, model.hidden_layer_size))    
----> 8     y_pred = model(seq)
      9 
     10     single_loss = loss_function(y_pred, labels)

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

<ipython-input-105-221892d3f487> in forward(self, input_seq)
     12   def forward(self, input_seq):
     13     lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
---> 14     predictions = self.relu(lstm_out.view(len(input_seq), -1))
     15     return predictions[-1]

TypeError: 'Tensor' object is not callable

self.relu = nn.functional.relu(torch.FloatTensor(hidden_layer_size), torch.FloatTensor(output_size))

This line does not actually define a ReLU function for later use. Rather, it applies the ReLU function to an arbitrary tensor (namely torch.FloatTensor(hidden_layer_size)) and returns the resulting tensor! So self.relu is not a function but a tensor, hence the error.
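
A minimal sketch of the failure mode (the tensor value here is just an illustrative stand-in):

import torch
import torch.nn.functional as F

relu_result = F.relu(torch.FloatTensor(20))  # ReLU is applied immediately; the result is a Tensor
print(type(relu_result))                     # <class 'torch.Tensor'>
relu_result(torch.zeros(5))                  # TypeError: 'Tensor' object is not callable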

One remedy is to use the following instead of the line above:

self.relu = nn.ReLU()

This gives an nn.ReLU instance, which you can use as the ReLU function exactly the way you call it in your forward method.
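
Put together, a sketch of the model with this fix applied to the code above:

import torch
import torch.nn as nn

class LSTM(nn.Module):
  def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
    super().__init__()
    self.hidden_layer_size = hidden_layer_size
    self.lstm = nn.LSTM(input_size, hidden_layer_size)
    self.relu = nn.ReLU()  # a module instance, callable like a function
    self.hidden_cell = (torch.zeros(1, 1, hidden_layer_size),
                        torch.zeros(1, 1, hidden_layer_size))

  def forward(self, input_seq):
    lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
    predictions = self.relu(lstm_out.view(len(input_seq), -1))  # self.relu is now callable
    return predictions[-1]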

Another solution is not to define self.relu at all, and to use this line directly in forward:

    predictions = nn.functional.relu(lstm_out.view(len(input_seq), -1))

This is the difference between the modular approach and the functional approach.
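
A small sketch of the two styles computing the same thing (x here is just a random example input):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(3)
modular_relu = nn.ReLU()                        # module: build once, call like a layer
print(torch.equal(modular_relu(x), F.relu(x)))  # True: both compute the same ReLU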

For some reasoning about why you might prefer one over the other, see: https://discuss.pytorch.org/t/whats-the-difference-between-nn-relu-vs-f-relu/27599
