
How can I extract the weight and bias of Linear layers in PyTorch?

In model.state_dict(), model.parameters() and model.named_parameters(), the weights and biases of nn.Linear() modules are stored separately, e.g. fc1.weight and fc1.bias. Is there a simple Pythonic way to get both of them?

The expected usage would look similar to this:

layer = model['fc1']
print(layer.weight)
print(layer.bias)

From the full model, no, there isn't. But you can get the state_dict() of that particular Module, and then you'd have a single dict with the weight and bias:

import torch

m = torch.nn.Linear(3, 5)  # arbitrary values
l = m.state_dict()

print(l['weight'])
print(l['bias'])

The equivalent in your code would be:

layer = model.fc1.state_dict()
print(layer['weight'])
print(layer['bias'])
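For instance, with a hypothetical model that registers an fc1 submodule (the Net class below is an assumption for illustration, not from the question):

```python
import torch
from torch import nn

# hypothetical model with an fc1 layer, for illustration only
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 5)

model = Net()
layer = model.fc1.state_dict()  # dict holding both 'weight' and 'bias'
print(layer['weight'].shape)  # torch.Size([5, 3])
print(layer['bias'].shape)    # torch.Size([5])
```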

To extract the values from a layer:

layer = model.fc1  # access the submodule as an attribute
print(layer.weight.data[0])
print(layer.bias.data[0])

Instead of index 0, you can pass the index of whichever neuron's values you want to extract.

>>> nn.Linear(2, 3).weight.data
tensor([[-0.4304,  0.4926],
        [ 0.0541,  0.2832],
        [-0.4530, -0.3752]])
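To make the layout concrete: for an nn.Linear(in_features, out_features), the weight tensor has shape (out_features, in_features), so row i holds the incoming weights of output neuron i. A minimal sketch:

```python
import torch
from torch import nn

layer = nn.Linear(2, 3)

# weight is (out_features, in_features); row i belongs to output neuron i
print(layer.weight.data.shape)  # torch.Size([3, 2])
print(layer.weight.data[1])     # incoming weights of the second neuron
print(layer.bias.data[1])       # its bias
```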

You can recover the named parameters for each linear layer in your model like so:

from torch import nn

for layer in model.children():
    if isinstance(layer, nn.Linear):
        print(layer.state_dict()['weight'])
        print(layer.state_dict()['bias'])
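Put together as a runnable sketch (the toy nn.Sequential model below is an assumption for illustration):

```python
import torch
from torch import nn

# toy model: two linear layers with a non-linearity in between
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# iterate over direct children, keeping only the Linear layers
for layer in model.children():
    if isinstance(layer, nn.Linear):
        sd = layer.state_dict()
        print(sd['weight'].shape, sd['bias'].shape)
```

This prints torch.Size([8, 4]) torch.Size([8]) for the first layer and torch.Size([2, 8]) torch.Size([2]) for the second; drop .shape to see the actual tensors.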
