
PyTorch access of weights and biases of a specific neuron

e.g.:

from torch import nn  # needed for nn.Sequential / nn.Linear below

input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))

I want to access all the weights and the bias of the N-th neuron in a specific layer. I know that model[1].weight gives access to all the weights in a layer, but I also want to know which neuron each weight belongs to.

Assume the layer has n neurons. The rows of the weight matrix are ordered from neuron[0] to neuron[n-1]. For example, accessing the weights of a fully connected layer prints:

Parameter containing:
tensor([[-7.3584e-03, -2.3753e-02, -2.2565e-02,  ...,  2.1965e-02,
          1.0699e-02, -2.8968e-02],  # 1st neuron weights
        [ 2.2930e-02, -2.4317e-02,  2.9939e-02,  ...,  1.1536e-02,
          1.9830e-02, -1.4294e-02],  # 2nd neuron weights
        [ 3.0891e-02,  2.5781e-02, -2.5248e-02,  ..., -1.5813e-02,
          6.1708e-03, -1.8673e-02],  # 3rd neuron weights
        ...,
        [-1.2596e-03, -1.2320e-05,  1.9106e-02,  ...,  2.1987e-02,
         -3.3817e-02, -9.4880e-03],  # (n-2)th neuron weights
        [ 1.4234e-02,  2.1246e-02, -1.0369e-02,  ..., -1.2366e-02,
         -4.7024e-04, -2.5259e-02],  # (n-1)th neuron weights
        [ 7.5356e-03,  3.4400e-02, -1.0673e-02,  ...,  2.8880e-02,
         -1.0365e-02, -1.2916e-02]],  # nth neuron weights
       requires_grad=True)
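That layout can be checked with a short sketch (it rebuilds the question's model with its sizes written out literally): each row of an nn.Linear weight matrix is one neuron, and the row length equals the layer's input size.

```python
import torch
from torch import nn

# Same architecture as the question, with literal sizes.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 10), nn.Softmax(dim=1))

# weight has shape [out_features, in_features]; bias has shape [out_features].
print(model[0].weight.shape)  # torch.Size([128, 784]) -> 128 neurons, 784 inputs each
print(model[0].bias.shape)    # torch.Size([128])      -> one bias per neuron
```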

For instance:

[-7.3584e-03, -2.3753e-02, -2.2565e-02, ..., 2.1965e-02, 1.0699e-02, -2.8968e-02] are all the weights of the 1st neuron:

-7.3584e-03 is its weight from the 1st neuron in the previous layer

-2.3753e-02 is its weight from the 2nd neuron in the previous layer

-2.2565e-02 is its weight from the 3rd neuron in the previous layer

[ 2.2930e-02, -2.4317e-02, 2.9939e-02, ..., 1.1536e-02, 1.9830e-02, -1.4294e-02] are all the weights of the 2nd neuron:

2.2930e-02 is its weight from the 1st neuron in the previous layer

-2.4317e-02 is its weight from the 2nd neuron in the previous layer

2.9939e-02 is its weight from the 3rd neuron in the previous layer
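As a minimal sketch of that indexing (the sizes mirror the question's first hidden layer): row k of the weight matrix is the full weight vector of neuron k, and element [k, j] is that neuron's weight from input j.

```python
import torch
from torch import nn

layer = nn.Linear(784, 128)   # first linear layer of the question's model

row = layer.weight[0]         # all 784 incoming weights of the 1st neuron
elem = layer.weight[0, 2]     # its weight from the 3rd neuron of the previous layer

print(row.shape)              # torch.Size([784])
```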

A weight w_ij^L connects two neurons: the i-th neuron (in layer L+1) and the j-th neuron (in layer L):

model[2*L].weight[i, j]  # w_ij^L

where L = 0, 1, 2. Note: 2*L is used because the linear layers of the model sit at indices 0, 2 and 4 of the Sequential container (the ReLU and Softmax layers occupy indices 1, 3 and 5).
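Putting it together, a minimal sketch that pulls out every parameter belonging to one neuron (the layer index L and neuron index N below are illustrative choices; the model mirrors the question's):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 10), nn.Softmax(dim=1))

L, N = 1, 5                        # e.g. the 6th neuron of the 2nd linear layer
weights = model[2 * L].weight[N]   # shape [128]: one weight per previous-layer neuron
bias = model[2 * L].bias[N]        # that neuron's scalar bias
```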
