In model.state_dict(), model.parameters(), and model.named_parameters(), the weights and biases of nn.Linear() modules are stored separately, e.g. as fc1.weight and fc1.bias. Is there a simple, Pythonic way to get both of them together?
The expected usage would look something like this:
layer = model['fc1']
print(layer.weight)
print(layer.bias)
From the full model, no, there isn't. But you can get the state_dict() of that particular Module, which gives you a single dict containing both the weight and the bias:
import torch

m = torch.nn.Linear(3, 5)  # arbitrary sizes
sd = m.state_dict()        # a single dict holding both tensors
print(sd['weight'])
print(sd['bias'])
The equivalent in your code would be:
layer = model.fc1.state_dict()
print(layer['weight'])
print(layer['bias'])
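If you specifically want the dictionary-style lookup shown in the question, you can build a name-to-module dict yourself from named_modules(). A minimal sketch (the Net class and layer sizes here are assumptions for illustration):

```python
import torch
from torch import nn

class Net(nn.Module):  # toy model mirroring the question's fc1 naming
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 5)

model = Net()

# named_modules() yields (name, module) pairs, so a dict of it
# supports the string lookup the question asks for.
layers = dict(model.named_modules())
layer = layers['fc1']
print(layer.weight.shape)  # torch.Size([5, 3])
print(layer.bias.shape)    # torch.Size([5])
```

This keeps weight and bias attached to the same module object, rather than splitting them into separate dict entries.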
To extract the values from a layer:
layer = model.fc1  # indexing like model['fc1'] only works for nn.ModuleDict / nn.Sequential
print(layer.weight.data[0])
print(layer.bias.data[0])
Instead of index 0, use the index of whichever neuron's values you want to extract.
>>> nn.Linear(2, 3).weight.data
tensor([[-0.4304, 0.4926],
[ 0.0541, 0.2832],
[-0.4530, -0.3752]])
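For example, row i of the weight matrix and element i of the bias both belong to output neuron i. A quick sketch (the layer sizes and index are arbitrary):

```python
import torch
from torch import nn

layer = nn.Linear(2, 3)  # 2 inputs, 3 output neurons

i = 1                        # pick output neuron 1
print(layer.weight.data[i])  # the 2 weights feeding neuron 1
print(layer.bias.data[i])    # the scalar bias of neuron 1
```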
You can recover the named parameters for each linear layer in your model like so:
from torch import nn

for layer in model.children():
    if isinstance(layer, nn.Linear):
        print(layer.state_dict()['weight'])
        print(layer.state_dict()['bias'])
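Note that model.children() only iterates over top-level submodules. If your linear layers may be nested inside containers, named_modules() recurses into them for you. A sketch, assuming a small nested model built for illustration:

```python
import torch
from torch import nn

# named_modules() walks the full module tree, so the Linear inside
# the inner Sequential is found as well.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.Sequential(nn.ReLU(), nn.Linear(8, 2)),
)

for name, layer in model.named_modules():
    if isinstance(layer, nn.Linear):
        print(name, layer.weight.shape, layer.bias.shape)
```

The name returned alongside each module (e.g. '1.1' for the nested layer) is the same key prefix used in model.state_dict().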