Jupyter kernel crashes/dies when using a large neural network layer — any ideas?
I am experimenting with an autoencoder in PyTorch. When I use a relatively large first layer, for instance nn.Linear(250*250, 40*40), the Jupyter kernel keeps crashing. When I use a smaller layer size, e.g. nn.Linear(250*250, 20*20), the kernel is fine. Any idea how to fix this so I can run larger networks? Thank you. The entire network is as below.
# model:
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(250*250, 20*20),
            nn.BatchNorm1d(20*20, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(20*20, 20*20),
            nn.BatchNorm1d(20*20, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(20*20, 20*20),
            nn.BatchNorm1d(20*20, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(20*20, 15*15),
            nn.BatchNorm1d(15*15, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(15*15, 3),
            nn.BatchNorm1d(3, momentum=0.5),
            #nn.Dropout(0.5),
            #nn.Tanh(),
            #nn.Linear(5*5, 5),
        )
        self.decoder = nn.Sequential(
            #nn.Linear(5, 5*5),
            #nn.BatchNorm1d(5*5, momentum=0.5),
            #nn.Dropout(0.5),
            #nn.Tanh(),
            nn.Linear(3, 15*15),
            nn.BatchNorm1d(15*15, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(15*15, 20*20),
            nn.BatchNorm1d(20*20, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(20*20, 20*20),
            nn.BatchNorm1d(20*20, momentum=0.5),
            nn.Dropout(0.5),
            nn.LeakyReLU(),
            nn.Linear(20*20, 250*250),
            nn.BatchNorm1d(250*250, momentum=0.5),
            nn.Dropout(0.5),
            nn.Sigmoid(),
        )

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return encoded, decoded
I have found the root cause. I am running a Docker Ubuntu image on Windows, and Docker's memory limit was set too low. When I increased the memory setting in Docker, my Ubuntu environment got more memory, and I could then run larger matrix operations.