
Pytorch: multi-target error with CrossEntropyLoss

So I was training a convolutional neural network. Following are the essential details:

  • original label dim = torch.Size([64, 1])
  • output from the net dim = torch.Size([64, 2])
  • loss type = nn.CrossEntropyLoss()
  • error = RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

WHERE AM I WRONG..?
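
For reference, the shapes above are enough to reproduce the error in isolation (a minimal sketch with stand-in tensors, not the actual network; the exact message below comes from older PyTorch builds like the one in the traceback, while newer releases raise a similar shape error):

import torch
import torch.nn as nn

loss_func = nn.CrossEntropyLoss()
output = torch.randn(64, 2)            # stand-in for the network output, shape [64, 2]
target = torch.randint(0, 2, (64, 1))  # integer labels left with shape [64, 1]
loss = loss_func(output, target)       # RuntimeError: multi-target not supported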

training:

EPOCHS        = 5
LEARNING_RATE = 0.0001
BATCH_SIZE    = 64

net = Net().to(device)
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)

loss_log = []
loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)

train function:

def train(net, train_set, loss_log=[], EPOCHS=5, LEARNING_RATE=0.001, BATCH_SIZE=32):
  print('Initiating Training..')  
  loss_func = nn.CrossEntropyLoss()

  # Iteration Begins
  for epoch in tqdm(range(EPOCHS)):
    # Iterate over every sample in the batch
    for data in tqdm(trainSet, desc=f'Iteration > {epoch+1}/{EPOCHS} : ', leave=False):
        x, y = data
        net.zero_grad()

        #Compute the output
        output, sm = net(x)

        # Compute Train Loss
        loss = loss_func(output, y.to(device))

        # Backpropagate
        loss.backward()

        # Update Parameters
        optimizer.step()

        # LEARNING_RATE -= LEARNING_RATE*0.0005

    loss_log.append(loss)
    lr_log.append(LEARNING_RATE)

  return loss_log, lr_log

FULL ERROR:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-20-8deb9a27d3b4> in <module>()
     13 
     14 total_epochs += EPOCHS
---> 15 loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
     16 
     17 plt.plot(loss_log)

4 frames

<ipython-input-9-59e1d2cf0c84> in train(net, train_set, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
     21         # Compute Train Loss
     22         # print(output, y.to(device))
---> 23         loss = loss_func(output, y.to(device))
     24 
     25         # Backpropagate

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
    917 
    918 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2019     if size_average is not None or reduce is not None:
   2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2022 
   2023 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

The problem is that your target tensor is 2-dimensional ([64, 1] instead of [64]), which makes PyTorch think you have more than one ground-truth label per sample. This is easily fixed via loss_func(output, y.flatten().to(device)). Hope this helps!
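
Applied to the training loop, that is a one-line change (a minimal sketch of the suggested fix; y.squeeze(1) would work equally well here):

output, sm = net(x)                               # output shape: [64, 2]
loss = loss_func(output, y.flatten().to(device))  # target reshaped from [64, 1] to [64]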

You wrote the problem yourself:

original label dim = torch.Size([64, 1]) <-- [0] or [1]
output from the net dim = torch.Size([64, 2]) <-- [0,1] or [1,0]

You need to change your target into one-hot encoding. Moreover, if you're doing binary classification, I would suggest changing the model to return a single output unit and using binary_cross_entropy as the loss function.
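
A minimal sketch of that binary-classification variant (the single-output layer and the names below are assumptions, not the asker's actual Net; nn.BCEWithLogitsLoss is used instead of a separate sigmoid plus binary_cross_entropy because it folds the sigmoid into the loss):

import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy in one call

# inside the training loop, assuming a hypothetical net with one output unit:
logits = net(x)                      # shape [64, 1]: one raw score per sample
target = y.float().to(device)        # BCE expects float targets with the same shape as logits
loss = criterion(logits, target)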
