Pytorch RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120]
In a simple CNN that classifies 5 objects, I get a size-mismatch error when the flattened convolutional output reaches the first fully connected layer:
"RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120]"
my model.py file:
import torch.nn as nn
import torch.nn.functional as F


class FNet(nn.Module):
    def __init__(self, device):
        # make your convolutional neural network here
        # use regularization
        # batch normalization
        super(FNet, self).__init__()
        num_classes = 5
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 5)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


if __name__ == "__main__":
    net = FNet()
Complete Error:
Traceback (most recent call last):
File "main.py", line 98, in <module>
train_model('../Data/fruits/', save=True, destination_path='/home/mitesh/E yantra/task1#hc/Task 1/Task 1B/Data/fruits')
File "main.py", line 66, in train_model
outputs = model(images)
File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/mitesh/E yantra/task1#hc/Task 1/Task 1B/Code/model.py", line 28, in forward
x = F.relu(self.fc1(x))
File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/functional.py", line 1024, in linear
return torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120] at /opt/conda/conda-bld/pytorch-cpu_1532576596369/work/aten/src/TH/generic/THTensorMath.cpp:2070
If you have an nn.Linear layer in your net, you cannot decide "on-the-fly" what the input size for this layer will be. In your net you compute num_flat_features for every x and expect self.fc1 to handle whatever size of x you feed the net. However, self.fc1 has a fixed-size weight matrix of shape 400x120 (it expects a 400-dimensional input, since 16*5*5=400, and outputs a 120-dimensional feature). In your case the size of x translates to a 7744-dimensional feature vector that self.fc1 simply cannot handle.
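As a sanity check on where 7744 comes from: each 5x5 conv (stride 1, no padding) shrinks the spatial size by 4, and each 2x2 max-pool halves it. The fc1 size of 16*5*5 = 400 corresponds to a 32x32 input; 7744 = 16*22*22 is exactly what a 100x100 input would produce. A minimal sketch of that shape arithmetic in plain Python (the 100x100 input size is an inference from the error message, not stated in the question):

```python
def conv_out(size, kernel):
    # valid convolution, stride 1: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size, window=2):
    # non-overlapping pooling: floor-divide the spatial size
    return size // window

def flat_features(input_size):
    """Spatial size after conv1(5x5) -> pool(2) -> conv2(5x5) -> pool(2),
    multiplied by conv2's 16 output channels."""
    s = pool_out(conv_out(input_size, 5))  # after conv1 + pool
    s = pool_out(conv_out(s, 5))           # after conv2 + pool
    return 16 * s * s

print(flat_features(32))   # 400  -> matches nn.Linear(16 * 5 * 5, 120)
print(flat_features(100))  # 7744 -> the m1 size in the error message
```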
If you do want your network to handle x of any size, you can add a parameter-free interpolation layer that resizes every x to the size self.fc1 expects, just before self.fc1:
x = F.max_pool2d(F.relu(self.conv2(x)), 2)         # output of conv layers
x = F.interpolate(x, size=(5, 5), mode='bilinear') # resize to the size expected by the linear unit
x = x.view(x.size(0), 16 * 5 * 5)
x = F.relu(self.fc1(x))                            # you can go on from here...
See torch.nn.functional.interpolate for more information.