
PyTorch nn.Linear different output for same input

For learning purposes, I'm trying to build a simple perceptron with PyTorch that should not be trained but simply produce the output for fixed weights. Here's the code:

import torch.nn
from torch import tensor

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3,1)
        self.relu = torch.nn.ReLU()
        # force weights to equal one
        with torch.no_grad():
            self.fc1.weight = torch.nn.Parameter(torch.ones_like(self.fc1.weight))

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

net = Net()
test_tensor = tensor([1, 1, 1])
print(net(test_tensor.float()).item())

I expect this single-layer neural network to output 3. And that is roughly (!) the output on every execution, but it ranges from about 2.5 to 3.5. Where does randomness enter the model?
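The spread is easy to reproduce; here is a minimal sketch (reusing the Net class above) that builds several fresh networks and evaluates them on the same input:

x = tensor([1., 1., 1.])
# each Net() gets a freshly initialized fc1, so the output differs per instance
for _ in range(5):
    print(Net()(x).item())
# prints a different value near 3 on each line, roughly between 2.4 and 3.6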

Q: Where does randomness enter the model?

It comes from the bias initialization. As you can see here, the bias is not initialized to zero as you expected: by default, nn.Linear draws its bias from a uniform distribution U(-1/√in_features, +1/√in_features). With in_features=3, the bias lands somewhere in roughly (-0.577, 0.577), which is exactly why your output varies around 3.
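You can check this directly by sampling a few fresh layers; a minimal sketch (the bound 1/sqrt(in_features) is the documented default for nn.Linear):

import math
import torch

# nn.Linear draws the bias from U(-1/sqrt(in_features), +1/sqrt(in_features));
# for in_features=3 that is roughly U(-0.577, +0.577)
print("bias bound:", 1 / math.sqrt(3))
for _ in range(3):
    layer = torch.nn.Linear(3, 1)
    print(layer.bias.item())  # a different value inside that interval each time

With the weights forced to one and an all-ones input, the output is 3 plus this random bias, which matches the observed 2.5 to 3.5 range.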

You can fix it this way:

import torch
from torch import nn

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3,1)
        self.relu = torch.nn.ReLU()
        # force the weights to one and the bias to zero
        with torch.no_grad():
            torch.nn.init.ones_(self.fc1.weight)
            torch.nn.init.zeros_(self.fc1.bias)

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

x = torch.tensor([1., 1., 1.])
Net()(x)
# >>> tensor([3.], grad_fn=<ReluBackward0>)
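Alternatively, if the layer should never have a bias at all, you can drop the parameter entirely via nn.Linear's bias argument instead of zeroing it. A short variant (the class name NetNoBias is just for illustration):

import torch

class NetNoBias(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # bias=False removes the bias parameter, so only the weights
        # need to be forced to one
        self.fc1 = torch.nn.Linear(3, 1, bias=False)
        self.relu = torch.nn.ReLU()
        with torch.no_grad():
            torch.nn.init.ones_(self.fc1.weight)

    def forward(self, x):
        return self.relu(self.fc1(x))

print(NetNoBias()(torch.tensor([1., 1., 1.])))
# >>> tensor([3.], grad_fn=<ReluBackward0>)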
