I am very new to PyTorch, so I apologise if the question is very straightforward. My problem is that I have defined a class net1 and initialised its parameters randomly with a fixed manual seed:
import random
import torch
import torch.nn as nn

# opt.manualSeed is defined elsewhere in the script
random.seed(opt.manualSeed)
torch.manual_seed(opt.manualSeed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(opt.manualSeed)

class net1(nn.Module):
    def __init__(self):
        super(net1, self).__init__()
        self.main_body = nn.Sequential(
            # Define the layers...
        )

    def forward(self, x):
        return self.main_body(x)

# custom weights initialization called on net1
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

net1_ = net1()
net1_.apply(weights_init)
However, when I add another class net2 to the code:
class net2(nn.Module):
    def __init__(self):
        super(net2, self).__init__()
        self.main_body = nn.Sequential(
            # Define the layers
        )

    def forward(self, x):
        return self.main_body(x)

net2_ = net2()
and instantiate it, I get different outputs from my graph, even though net2_ is not used anywhere else and is not connected to the main graph (which is built on net1_). Is this a reasonable outcome?
I assume the order of execution is:
random.seed(opt.manualSeed)
torch.manual_seed(opt.manualSeed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(opt.manualSeed)

if with_net2:
    net2_ = net2()

net1_ = net1()
net1_.apply(weights_init)
If so, this is expected.
This is because when net2.__init__ is called (during net2_ = net2()), torch's global random number generator is used to randomly initialise the weights in net2_. The state of the generator when net1_ is constructed and when net1_.apply(weights_init) runs therefore differs between with_net2 = True and with_net2 = False, so net1_ ends up with different initial weights.
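You can see the generator advancing directly: construct a throwaway layer between two identical seedings and compare the next draws. This is a minimal sketch of my own (the nn.Linear and its size are arbitrary, chosen only for illustration):

import torch
import torch.nn as nn

torch.manual_seed(0)
_ = nn.Linear(4, 4)       # constructing a layer draws from the global RNG
a = torch.randn(1)

torch.manual_seed(0)
b = torch.randn(1)        # same seed, but no layer was constructed first

print(torch.equal(a, b))  # False: nn.Linear consumed random numbers

If you want net1_ to be initialised identically whether or not net2_ exists, one option is to construct net2_ under torch.random.fork_rng(), which restores the RNG state on exit (a workaround I am suggesting, not something from the original post):

with torch.random.fork_rng():
    net2_ = net2()        # draws in here do not affect the outer RNG state
net1_ = net1()
net1_.apply(weights_init)

Re-seeding with torch.manual_seed(opt.manualSeed) just before net1_ = net1() would achieve the same effect.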