
How to freeze parameters when using transfer learning in python-pytorch

I want to use transfer learning to train only the last layer of the model and fix (freeze) the parameters of all the other layers.

However, the error tells me that requires_grad = True is required. How can I solve this? Below is what I tried and the error I encountered.

from efficientnet_pytorch import EfficientNet
import torch.nn as nn
from torch import optim
from torch.optim import lr_scheduler

# load a pretrained EfficientNet-B0 and replace the classifier head
model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
num_ftrs = model_b0._fc.in_features
model_b0._fc = nn.Linear(num_ftrs, 10)

# freeze all parameters
for param in model_b0.parameters():
    param.requires_grad = False

# try to unfreeze only the last layer
last_layer = list(model_b0.children())[-1]

print(f'except last layer: {last_layer}')
for param in last_layer.parameters():
    param.requires_grad = True


criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_b0.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

model_b0 = train_model(model_b0, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=3)

The code above runs if I set requires_grad = True instead.

The error is:

      4 optimizer_ft = optim.SGD(model_b7.parameters(), lr=0.001, momentum=0.9)
      5 exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
----> 7 model_b0 = train_model(model_b7, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=15)

Cell In [69], line 43, in train_model(model, criterion, optimizer, scheduler, num_epochs)
     41 loss = criterion(outputs, labels)
---> 43 loss.backward()
     44 optimizer.step()

\site-packages\torch\_tensor.py:396, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    394         create_graph=create_graph,
    395         inputs=inputs)
--> 396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)

\site-packages\torch\autograd\__init__.py:173, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    172 # calls in the traceback and some print out the last line
--> 173 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    174     tensors, grad_tensors_, retain_graph, create_graph, inputs,

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Thank you for reading!

There are several possible causes of this problem:

  1. The input tensor: RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

The tensor you are passing in does not have requires_grad=True.

Make sure your new Variable is created with requires_grad = True:

var_xs_h = Variable(xs_h.data, requires_grad=True)
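Note that Variable has been deprecated since PyTorch 0.4; a rough modern equivalent (a sketch, assuming xs_h is an ordinary tensor) is:

# Variable is deprecated; plain tensors carry requires_grad directly
var_xs_h = xs_h.detach().requires_grad_(True)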

  2. Freezing layers of the model with requires_grad

As PyTorch forum moderator ptrblck stated:

If you set requires_grad = False for all parameters, the error message is expected, as Autograd won't be able to calculate any gradients, since no parameter requires them.

I think your case is similar to this latter one, so read the second post.
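In your code, the unfreezing step probably never takes effect: list(model_b0.children())[-1] is most likely the final Swish activation of efficientnet_pytorch's EfficientNet, which has no parameters, so every parameter stays frozen. A minimal sketch of the fix, assuming that model and addressing the classifier through its _fc attribute instead:

import torch.nn as nn
from torch import optim
from efficientnet_pytorch import EfficientNet

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
model_b0._fc = nn.Linear(model_b0._fc.in_features, 10)

# freeze everything, then unfreeze the classifier by name
for param in model_b0.parameters():
    param.requires_grad = False
for param in model_b0._fc.parameters():
    param.requires_grad = True

# sanity check: at least one parameter must require grad before training
trainable = [p for p in model_b0.parameters() if p.requires_grad]
print(f'trainable tensors: {len(trainable)}')  # expect 2: _fc weight and bias

# passing only the trainable parameters to the optimizer is optional but tidy
optimizer_ft = optim.SGD(trainable, lr=0.001, momentum=0.9)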

Another debugging suggestion from ptrblck:

import torch
import torch.nn as nn

# standard use case: input does not require grad, parameters do
x = torch.randn(1, 1)
print(x.requires_grad)
# > False

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08c5610>
out.backward()
print(lin.weight.grad)
# > tensor([[-0.9785]])
print(x.grad)
# > None

# input requires grad
x = torch.randn(1, 1, requires_grad=True)
print(x.requires_grad)
# > True

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08d4640>
out.backward()
print(lin.weight.grad)
# > tensor([[1.6739]])
print(x.grad)
# > tensor([[0.0300]])
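To tie this back to your error: when every parameter is frozen and the input does not require grad either, the output has no grad_fn, and backward() raises exactly the RuntimeError you saw. A small sketch of my own (not from ptrblck's post):

# all parameters frozen, input does not require grad
lin = nn.Linear(1, 1)
for param in lin.parameters():
    param.requires_grad = False

x = torch.randn(1, 1)
out = lin(x)
print(out.grad_fn)
# > None
out.backward()
# > RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn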
