
PyTorch CrossEntropyLoss Dimension Out of Range

Imports:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

I have vectorized images of size 50 x 37 = 1850 and am trying to build a CNN to classify them - X_train contains the vectorized images and y_train contains the ground-truth labels.

data.shape
torch.Size([1850])

I put together a simple CNN to test this:

class Net(nn.Module):
    def __init__(self, num_classes):
        super(Net, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(50*37,200),
            nn.ReLU(),
            nn.Linear(200,200),
            nn.ReLU(),
            nn.Linear(200, num_classes),
            nn.ReLU(),
        )
    
    def forward(self, x):
        x = x.view(-1, 50*37) # Flatten into single dimension
        return self.model(x)

I then initialized a loss function, the network, and an optimizer:

net = Net(10); # 10 == number of classes in dataset.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

My training loop is as follows:

n_epochs = 3
for epoch in range(n_epochs):
    running_loss = 0.0
    for i, data in enumerate(zip(X_train, y_train)): # (index (image, label))
        inputs, labels = torch.tensor(data[0]), torch.tensor(data[1])
        outputs = net(inputs)
        print(inputs.shape)
        
        onehot_labels = torch.tensor([(float(1) if i == labels else 0) for i in range(n_classes)])
        
        print(outputs[0])
        print(onehot_labels)
        
        loss_v = criterion(outputs[0], onehot_labels)
        
        loss_v.backward()
        
        running_loss += loss_v.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0
print("Finished training")

When I run the code, I get the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Input In [76], in <cell line: 3>()
     12 print(outputs[0])
     13 print(onehot_labels)
---> 15 loss_v = criterion(outputs[0], onehot_labels)
     17 loss_v.backward()
     19 running_loss += loss_v.item()

File ~\.conda\envs\3710\lib\site-packages\torch\nn\modules\module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~\.conda\envs\3710\lib\site-packages\torch\nn\modules\loss.py:1150, in CrossEntropyLoss.forward(self, input, target)
   1149 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1150     return F.cross_entropy(input, target, weight=self.weight,
   1151                            ignore_index=self.ignore_index, reduction=self.reduction,
   1152                            label_smoothing=self.label_smoothing)

File ~\.conda\envs\3710\lib\site-packages\torch\nn\functional.py:2846, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   2844 if size_average is not None or reduce is not None:
   2845     reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
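
For context, the failing call can be reproduced in isolation: outputs[0] is a 1-D tensor of shape [num_classes], and onehot_labels is a 1-D float tensor of the same length. Below is a minimal sketch (not code from the original post, and version-dependent - the traceback above appears to come from PyTorch 1.10):

import torch
import torch.nn.functional as F

logits = torch.randn(10)   # 1-D tensor, analogous to outputs[0]
onehot = torch.zeros(10)
onehot[3] = 1.0            # 1-D float target, analogous to onehot_labels

# A floating-point target is treated as class probabilities, and the loss looks for the
# class dimension at dim 1, which a 1-D input does not have. On PyTorch 1.10 this raises
# the same IndexError as above; newer releases may handle the unbatched case differently.
F.cross_entropy(logits, onehot)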
  1. This is not a CNN; a CNN is when you use (at least some) convolutional layers, and you have only used linear layers. That doesn't matter for the error, though. (A convolutional sketch follows this list.)

  2. You probably shouldn't use relu in the output layer.

  3. Why are you using outputs[0] to compute the loss? I think the whole outputs tensor contains the logit values, and using it should fix the error. If you are working with a batch size of 1, either pass outputs directly or reshape outputs[0] with outputs[0].reshape(1,-1) (see the sketch after this list).
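
To illustrate point 3 (and point 2), a minimal sketch of the corrected loss call is shown below. The values are made up; the final ReLU is assumed to be removed so the model emits raw logits, and the one-hot vector is replaced by an integer class index, which is the target format CrossEntropyLoss expects by default:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
num_classes = 10

outputs = torch.randn(1, num_classes)  # what net(inputs) returns for one image: shape [1, C]
label = torch.tensor([3])              # ground-truth class index as a LongTensor of shape [1]

# CrossEntropyLoss applies log-softmax internally and takes integer class indices,
# so the full outputs tensor is passed (not outputs[0]) and no one-hot encoding is needed.
loss_v = criterion(outputs, label)
print(loss_v.item())

For point 1, a convolutional variant could look like the following sketch (an illustration, not code from the answer); it reshapes each 1850-element vector back into a single-channel 50 x 37 image before the convolution:

import torch
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self, num_classes):
        super(ConvNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # [N, 1, 50, 37] -> [N, 8, 50, 37]
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> [N, 8, 25, 18]
        )
        self.classifier = nn.Linear(8 * 25 * 18, num_classes)

    def forward(self, x):
        x = x.view(-1, 1, 50, 37)  # un-flatten each vectorized image
        x = self.features(x)
        x = x.flatten(1)
        return self.classifier(x)  # raw logits, no final ReLU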

