nn.CrossEntropyLoss() function results in torch.FloatTensor has no 'requires_gradient' attribute error
I am using a pretrained Resnet18 model. Basically, I take the model's output and call `CrossEntropyLoss()` on it. Let the model's output be `outputs` and the class labels be `labels`; my call is then `CrossEntropyLoss(outputs, labels)`. I checked the type of `outputs` and it is `<torch.autograd.variable.Variable>`. I have also tried different combinations for `labels`: first a numpy array, then a Variable, but nothing seems to work. I am using PyTorch 0.3.1; please do not suggest upgrading PyTorch, as that may not be possible in my current situation (the code does, however, appear to work on 0.4.0). I have attached the error stack below. The criterion function is `CrossEntropyLoss`.
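For reference, the call looks roughly like this (a minimal sketch; the shapes and class index are illustrative, not my real data):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

criterion = nn.CrossEntropyLoss()
outputs = Variable(torch.randn(1, 1000))  # (N, C) scores, as resnet18 returns for one image
labels = torch.LongTensor([3])            # (N,) class indices, built as in my loop below
loss = criterion(outputs, labels)         # this call raises the AttributeError below
```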
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-76-6d5f48373efd> in <module>()
      5
      6 # Train and evaluate
----> 7 model_ft, hist = train_model(model_ft, data, criterion, optimizer_ft, num_epochs=num_epochs, is_inception=(model_name=="inception"))

<ipython-input-70-bfd03f976e97> in train_model(model, dataloaders, criterion, optimizer, num_epochs, is_inception)
     47             labels=(torch.from_numpy(np.array([labels])))
     48             #print(((outputs.requires_gradient)))
---> 49             loss = criterion(outputs, labels) ##calculate entropy loss
     50
     51             _, preds = torch.max(outputs, 1)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    675
    676     def forward(self, input, target):
--> 677         _assert_no_grad(target)
    678         return F.cross_entropy(input, target, self.weight, self.size_average,
    679                                self.ignore_index, self.reduce)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in _assert_no_grad(variable)
      8
      9 def _assert_no_grad(variable):
---> 10     assert not variable.requires_grad, \
     11         "nn criterions don't compute the gradient w.r.t. targets - please " \
     12         "mark these variables as volatile or not requiring gradients"

AttributeError: 'torch.LongTensor' object has no attribute 'requires_grad'
```
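Reading the traceback, the assertion in `_assert_no_grad` accesses `variable.requires_grad` on the target. A quick check (assuming 0.3.x semantics) shows that attribute exists on `Variable` objects but not on plain tensors:

```python
import torch
from torch.autograd import Variable

target = torch.LongTensor([3])
print(hasattr(target, 'requires_grad'))  # False on 0.3.x -> the assert raises AttributeError
print(Variable(target).requires_grad)    # False, so a Variable target would pass the assert
```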
My code (the signature of `train_model` is as shown in the traceback above; I have added the imports it needs):

```python
import copy
import time

import numpy as np
import torch


def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False):
    since = time.time()
    val_acc_history = []
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss = 0.0
            running_corrects = 0
            count = 0
            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                # zero the parameter gradients
                optimizer.zero_grad()
                outputs = model(inputs.unsqueeze(0))  # input to the model and output produced
                labels = torch.from_numpy(np.array([labels]))
                loss = criterion(outputs, labels)  # calculate entropy loss
                _, preds = torch.max(outputs, 1)
                # backward + optimize only if in training phase
                if phase == 'train':
                    loss.backward()   # loss gradient going backward
                    optimizer.step()  # optimizer performs parameter update based on current gradient
                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                count = count + 1
            epoch_loss = running_loss / count
            epoch_acc = running_corrects.double() / count
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model, val_acc_history
```
This appears to be because, in v0.3.1, plain tensors do not support the `requires_grad` flag (I tried looking for it). If at all possible, I would suggest upgrading PyTorch; it should then work (see `requires_grad` on tensors in the latest versions).
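If upgrading really is impossible, the assertion message itself points at a workaround: pass the target as a `Variable`, which defaults to `requires_grad=False`. A minimal sketch of the changed lines in your training loop (untested against your exact setup):

```python
from torch.autograd import Variable

# Inside the loop, wrap the raw LongTensor target before calling the criterion:
labels = Variable(torch.from_numpy(np.array([labels])))  # requires_grad defaults to False
loss = criterion(outputs, labels)
# As far as I remember, 0.3.x also lacks loss.item(); loss.data[0] is the 0.3-era idiom.
```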