
PyTorch RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120]

In a simple CNN that classifies 5 objects, I get a size mismatch error when the flattened output of the convolutional layers reaches the first fully connected layer:

"RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120]"

My model.py file:

import torch.nn as nn
import torch.nn.functional as F

class FNet(nn.Module):

    def __init__(self, device):
        # make your convolutional neural network here
        # use regularization
        # batch normalization
        super(FNet, self).__init__()
        num_classes = 5
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 5)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

if __name__ == "__main__":
    net = FNet(device=None)  # __init__ requires a device argument (unused), so pass a placeholder

Complete error traceback:

Traceback (most recent call last):
  File "main.py", line 98, in <module>
    train_model('../Data/fruits/', save=True, destination_path='/home/mitesh/E yantra/task1#hc/Task 1/Task 1B/Data/fruits')
  File "main.py", line 66, in train_model
    outputs = model(images)
  File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mitesh/E yantra/task1#hc/Task 1/Task 1B/Code/model.py", line 28, in forward
    x = F.relu(self.fc1(x))
  File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/mitesh/anaconda3/envs/HC#850_stage1/lib/python3.6/site-packages/torch/nn/functional.py", line 1024, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [1 x 7744], m2: [400 x 120] at /opt/conda/conda-bld/pytorch-cpu_1532576596369/work/aten/src/TH/generic/THTensorMath.cpp:2070

If you have an nn.Linear layer in your net, you cannot decide "on the fly" what the input size of that layer will be.
In your net you compute num_flat_features for every x and expect self.fc1 to handle whatever size x happens to have. However, self.fc1 has a fixed-size weight matrix of shape 400x120: it expects a 400-dimensional input (16*5*5 = 400) and produces a 120-dimensional output. In your case x flattens to a 7744-dimensional feature vector: 7744 = 16*22*22, so the conv stack is producing 22x22 feature maps, which means your input images are 100x100 rather than the 32x32 that the 16*5*5 figure assumes. self.fc1 simply cannot handle that.
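
To see where 7744 comes from, you can push a dummy tensor through the conv stack and print the shapes (a minimal sketch, not part of your model; the 100x100 input size is inferred from the error message):

import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(3, 6, 5)
conv2 = nn.Conv2d(6, 16, 5)

x = torch.randn(1, 3, 100, 100)         # dummy batch; 100x100 inferred from the 7744 figure
x = F.max_pool2d(F.relu(conv1(x)), 2)   # -> [1, 6, 48, 48]
x = F.max_pool2d(F.relu(conv2(x)), 2)   # -> [1, 16, 22, 22]
print(x.shape, x.numel())               # torch.Size([1, 16, 22, 22]) 7744

With that number in hand you can either resize your input images to 32x32 (so the conv stack outputs 16*5*5 = 400 features) or change self.fc1 to nn.Linear(16 * 22 * 22, 120).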

If you do want your network to handle x of any size, you can insert a parameter-free interpolation layer that resizes x to the expected spatial size before self.fc1:

x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # output of the conv layers; spatial size depends on the input
x = F.interpolate(x, size=(5, 5), mode='bilinear', align_corners=False)  # resize to the 5x5 maps fc1 expects
x = x.view(x.size(0), 16 * 5 * 5)
x = F.relu(self.fc1(x))  # you can go on from here...

See torch.nn.functional.interpolate for more information.
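
A common parameter-free alternative (not shown above, but a standard PyTorch operation) is adaptive average pooling, which reduces feature maps of any spatial size to a fixed 5x5 grid:

x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # conv output, spatial size depends on the input
x = F.adaptive_avg_pool2d(x, (5, 5))        # average-pool any HxW down to 5x5
x = x.view(x.size(0), 16 * 5 * 5)
x = F.relu(self.fc1(x))

Either way, self.fc1 always receives the 400-dimensional input its 400x120 weight matrix was built for.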
