PyTorch CrossEntropyLoss Dimension Out of Range

Imports:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

I have vectorised images of size 50 x 37 = 1850 and am trying to create a CNN to classify those images - X_train contains the vectorised images and y_train contains the ground-truth labels.

data.shape
torch.Size([1850])

I have created a simple CNN to test things out:

class Net(nn.Module):
    def __init__(self, num_classes):
        super(Net, self).__init__()  # refer to this class (Net) in the super() call
        self.model = nn.Sequential(
            nn.Linear(50*37,200),
            nn.ReLU(),
            nn.Linear(200,200),
            nn.ReLU(),
            nn.Linear(200, num_classes),
            nn.ReLU(),
        )
    
    def forward(self, x):
        x = x.view(-1, 50*37) # Reshape to (batch_size, 50*37)
        return self.model(x)

I then initialised a loss function, the net and optimiser:

net = Net(10)  # 10 == number of classes in dataset.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

My training loop is as follows:

n_epochs = 3
for epoch in range(n_epochs):
    running_loss = 0.0
    for i, data in enumerate(zip(X_train, y_train)): # (index (image, label))
        inputs, labels = torch.tensor(data[0]), torch.tensor(data[1])
        outputs = net(inputs)
        print(inputs.shape)
        
        onehot_labels = torch.tensor([(float(1) if i == labels else 0) for i in range(n_classes)])
        
        print(outputs[0])
        print(onehot_labels)
        
        loss_v = criterion(outputs[0], onehot_labels)
        
        loss_v.backward()
        
        running_loss += loss_v.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0
print("Finished training")

And in running the code, I get the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Input In [76], in <cell line: 3>()
     12 print(outputs[0])
     13 print(onehot_labels)
---> 15 loss_v = criterion(outputs[0], onehot_labels)
     17 loss_v.backward()
     19 running_loss += loss_v.item()

File ~\.conda\envs\3710\lib\site-packages\torch\nn\modules\module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~\.conda\envs\3710\lib\site-packages\torch\nn\modules\loss.py:1150, in CrossEntropyLoss.forward(self, input, target)
   1149 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1150     return F.cross_entropy(input, target, weight=self.weight,
   1151                            ignore_index=self.ignore_index, reduction=self.reduction,
   1152                            label_smoothing=self.label_smoothing)

File ~\.conda\envs\3710\lib\site-packages\torch\nn\functional.py:2846, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
   2844 if size_average is not None or reduce is not None:
   2845     reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
  1. This is not a CNN; a CNN is one that uses (at least some) convolutional layers. You have used linear layers only. Anyway, this is not important to the error (a rough sketch of a convolutional version is included after this list).

  2. You should probably not use relu in the output layer. nn.CrossEntropyLoss applies log-softmax to raw logits, and a final ReLU forces every logit to be non-negative.

  3. Why are you using outputs[0] to compute the loss? I think the whole outputs tensor contains the logit values; using it should fix the error. If you are using batch size 1, you should either pass outputs directly or reshape outputs[0] as outputs[0].reshape(1, -1) (a minimal sketch of the corrected loss call follows this list).
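Following point 3, here is a minimal sketch of a working loss call, assuming a batch size of 1 and that the labels are integer class indices; the sample values are dummy stand-ins, and Net is the class defined in the question:

import torch
import torch.nn as nn

# Dummy stand-ins for one training sample: a flattened 50 x 37 image
# and a hypothetical integer class label.
image = torch.randn(50 * 37)                 # shape (1850,)
label = torch.tensor([3], dtype=torch.long)  # shape (1,): class index, not a one-hot vector

net = Net(10)                                # Net as defined in the question
criterion = nn.CrossEntropyLoss()

outputs = net(image)                         # shape (1, 10) because forward() does x.view(-1, 50*37)

# nn.CrossEntropyLoss applies log-softmax internally, so it expects raw logits
# of shape (N, C) and integer class-index targets of shape (N).
# Pass the whole `outputs` tensor (or outputs[0].reshape(1, -1)), not outputs[0].
loss_v = criterion(outputs, label)
loss_v.backward()

With this convention there is no need to build onehot_labels by hand: the criterion takes the class index directly and computes the softmax over the num_classes logits itself.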
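And on point 1, purely for illustration, a version with actual convolutional layers for the 50 x 37 single-channel images might look roughly like this; the layer sizes and kernel choices here are hypothetical, not taken from your code:

import torch.nn as nn

class ConvNet(nn.Module):
    """Illustrative CNN for flattened 50 x 37 grayscale images."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 50 x 37 -> 25 x 18
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 25 x 18 -> 12 x 9
        )
        # Final layer emits raw logits -- no ReLU here (see point 2).
        self.classifier = nn.Linear(32 * 12 * 9, num_classes)

    def forward(self, x):
        x = x.view(-1, 1, 50, 37)   # un-flatten the 1850-vector into a 1-channel image
        x = self.features(x)
        return self.classifier(x.flatten(1))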
