PyTorch CrossEntropyLoss Dimension Out of Range

Imports:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

I have vectorized images of size 50 x 37 = 1850, and I am trying to create a CNN to classify them - X_train contains the vectorized images and y_train contains the ground-truth labels.

data.shape
torch.Size([1850])

I created a simple CNN to test things out:

class Net(nn.Module):
    def __init__(self, num_classes):
        super(Net, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(50*37,200),
            nn.ReLU(),
            nn.Linear(200,200),
            nn.ReLU(),
            nn.Linear(200, num_classes),
            nn.ReLU(),
        )
    
    def forward(self, x):
        x = x.view(-1, 50*37) # Flatten into single dimension
        return self.model(x)

I then initialized a loss function, the network and an optimizer:

net = Net(10); # 10 == number of classes in dataset.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

My training loop is as follows:

n_epochs = 3
for epoch in range(n_epochs):
    running_loss = 0.0
    for i, data in enumerate(zip(X_train, y_train)): # (index (image, label))
        inputs, labels = torch.tensor(data[0]), torch.tensor(data[1])
        outputs = net(inputs)
        print(inputs.shape)
        
        onehot_labels = torch.tensor([(float(1) if i == labels else 0) for i in range(n_classes)])
        
        print(outputs[0])
        print(onehot_labels)
        
        loss_v = criterion(outputs[0], onehot_labels)
        
        loss_v.backward()
        
        running_loss += loss_v.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0
print("Finished training")

When running the code, I get the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Input In [76], in <cell line: 3>()
     12 print(outputs[0])
     13 print(onehot_labels)
---> 15 loss_v = criterion(outputs[0], onehot_labels)
     17 loss_v.backward()
     19 running_loss += loss_v.item()

File ~\.conda\envs\3710\lib\site-packages\torch\nn\modules\module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~\.conda\envs\3710\lib\site-packages\torch\nn\modules\loss.py:1150, in CrossEntropyLoss.forward(self, input, target)
   1149 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1150     return F.cross_entropy(input, target, weight=self.weight,
   1151                            ignore_index=self.ignore_index, reduction=self.reduction,
   1152                            label_smoothing=self.label_smoothing)

File ~\.conda\envs\3710\lib\site-packages\torch\nn\functional.py:2846, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   2844 if size_average is not None or reduce is not None:
   2845     reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
  1. This is not a CNN; a CNN is when you use (at least some) convolutional layers. You are only using linear layers. In any case, that is not relevant to the error.

  2. You probably should not use relu in the output layer.

  3. Why are you computing the loss on outputs[0]? I think the whole outputs tensor contains the logit values, so using it directly should fix the error. If you are using a batch size of 1, you should either pass outputs as-is or reshape outputs[0] to outputs[0].reshape(1,-1); see the sketch after this list.
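
To make point 3 concrete, below is a minimal sketch of the calling convention nn.CrossEntropyLoss expects: logits of shape (N, C) and integer class indices of shape (N,). The names num_classes, outputs and label here are placeholders for illustration, not the asker's variables, and the integer-index target is one assumption about how y_train is stored.

import torch
import torch.nn as nn

num_classes = 10                                           # same value as in the question
criterion = nn.CrossEntropyLoss()

# Stand-in for net(inputs): a (1, num_classes) batch of logits.
outputs = torch.randn(1, num_classes, requires_grad=True)
# Ground-truth class index of shape (1,) - no one-hot encoding needed.
label = torch.tensor([3])

loss_v = criterion(outputs, label)   # scalar loss over the batch
loss_v.backward()

Because CrossEntropyLoss applies log-softmax internally, the onehot_labels tensor is unnecessary, and dropping the trailing nn.ReLU() on the output layer (point 2) lets the logits take negative values as intended.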
