
How to freeze parameters when using transfer learning in python-pytorch

I want to use transfer learning to train only the last layer and fix (freeze) the parameters of all the other layers.

But I get an error demanding **requires_grad = True**. How can I solve this? What I tried and the error I got are described below.

import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from efficientnet_pytorch import EfficientNet

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
num_ftrs = model_b0._fc.in_features
model_b0._fc = nn.Linear(num_ftrs, 10)

for param in model_b0.parameters():
    param.requires_grad = False

last_layer = list(model_b0.children())[-1]

print(f'except last layer: {last_layer}')
for param in last_layer.parameters():
    param.requires_grad = True



criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_b0.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

model_b0 = train_model(model_b0, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=3)

If I change requires_grad to True, the code above runs.

The error is:

      4 optimizer_ft = optim.SGD(model_b7.parameters(), lr=0.001, momentum=0.9)
      5 exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
----> 7 model_b0 = train_model(model_b7, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=15)

Cell In [69], line 43, in train_model(model, criterion, optimizer, scheduler, num_epochs)
     41 loss = criterion(outputs, labels)
---> 43 loss.backward()
     44 optimizer.step()

\site-packages\torch\_tensor.py:396, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    394         create_graph=create_graph,
    395         inputs=inputs)
--> 396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)

\site-packages\torch\autograd\__init__.py:173, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    172 # calls in the traceback and some print out the last line
--> 173 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    174     tensors, grad_tensors_, retain_graph, create_graph, inputs,

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Thank you for reading!

There are several possible causes for this problem:

  1. The input: RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

The tensor you are passing in does not have requires_grad=True.

Make sure your new Variable is created with requires_grad=True:

var_xs_h = Variable(xs_h.data, requires_grad=True)
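Note that Variable is deprecated in modern PyTorch; a minimal equivalent sketch using the plain tensor API (reusing the xs_h tensor from the line above):

# Variable is deprecated; tensors carry requires_grad directly.
# detach() cuts the old graph; requires_grad_(True) re-enables autograd tracking.
var_xs_h = xs_h.detach().requires_grad_(True)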

  2. Freezing the last layer of the model with requires_grad

As the PyTorch forum moderator ptrblck stated:

If you set requires_grad = False for all parameters, you will get the error message, since Autograd won't be able to calculate any gradients, as no parameters require them.

I think your situation is similar to this latter case; you can read the second post. A sketch of the fix follows.
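Applied to the code in the question, a minimal sketch of that fix (it reuses the names from the question; note that list(model_b0.children())[-1] may select a parameter-less activation module rather than the _fc classifier, so it is safer to unfreeze _fc by name and to hand the optimizer only the trainable parameters):

import torch.nn as nn
import torch.optim as optim
from efficientnet_pytorch import EfficientNet

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
model_b0._fc = nn.Linear(model_b0._fc.in_features, 10)

# Freeze everything first ...
for param in model_b0.parameters():
    param.requires_grad = False

# ... then unfreeze the classifier by attribute, not by position.
for param in model_b0._fc.parameters():
    param.requires_grad = True

# Give the optimizer only the parameters that still require gradients.
optimizer_ft = optim.SGD(
    [p for p in model_b0.parameters() if p.requires_grad],
    lr=0.001, momentum=0.9)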

Another debugging suggestion from ptrblck:

import torch
import torch.nn as nn

# standard use case
x = torch.randn(1, 1)
print(x.requires_grad)
# > False

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08c5610>
out.backward()
print(lin.weight.grad)
# > tensor([[-0.9785]])
print(x.grad)
# > None

# input requires grad
x = torch.randn(1, 1, requires_grad=True)
print(x.requires_grad)
# > True

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08d4640>
out.backward()
print(lin.weight.grad)
# > tensor([[1.6739]])
print(x.grad)
# > tensor([[0.0300]])
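As a quick sanity check before calling train_model, you can count how many parameters are still trainable (a small sketch; if it prints 0, loss.backward() will raise exactly the RuntimeError shown above):

n_trainable = sum(p.numel() for p in model_b0.parameters() if p.requires_grad)
print(f'trainable parameters: {n_trainable}')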
