
RuntimeError: size mismatch, m1: [4 x 784], m2: [4 x 784] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:136

I have executed the following code:

import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils import data as t_data
import torchvision.datasets as datasets
from torchvision import transforms

data_transforms = transforms.Compose([transforms.ToTensor()])
mnist_trainset = datasets.MNIST(root='./data', train=True,
                                download=True, transform=data_transforms)

batch_size = 4
dataloader_mnist_train = t_data.DataLoader(mnist_trainset,
                                           batch_size=batch_size,
                                           shuffle=True)

def make_some_noise():
    return torch.rand(batch_size,100)


class generator(nn.Module):

    def __init__(self, inp, out):

        super(generator, self).__init__()

        self.net = nn.Sequential(
                                 nn.Linear(inp,784),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(784,1000),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(1000,800),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(800,out)
                                    )

    def forward(self, x):
        x = self.net(x)
        return x

class discriminator(nn.Module):

    def __init__(self, inp, out):

        super(discriminator, self).__init__()

        self.net = nn.Sequential(
                                 nn.Linear(inp,784),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(784,784),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(784,200),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(200,out),
                                 nn.Sigmoid()
                                    )

    def forward(self, x):
        x = self.net(x)
        return x

def plot_img(array,number=None):
    array = array.detach()
    array = array.reshape(28,28)

    plt.imshow(array,cmap='binary')
    plt.xticks([])
    plt.yticks([])
    if number:
        plt.xlabel(number,fontsize='x-large')
    plt.show()

d_steps = 100
g_steps = 100

gen=generator(4,4)
dis=discriminator(4,4)

criteriond1 = nn.BCELoss()
optimizerd1 = optim.SGD(dis.parameters(), lr=0.001, momentum=0.9)

criteriond2 = nn.BCELoss()
optimizerd2 = optim.SGD(gen.parameters(), lr=0.001, momentum=0.9)

printing_steps = 20

epochs = 5

for epoch in range(epochs):

    print (epoch)

    # training discriminator
    for d_step in range(d_steps):
        dis.zero_grad()

        # training discriminator on real data
        for inp_real,_ in dataloader_mnist_train:
            inp_real_x = inp_real
            break

        inp_real_x = inp_real_x.reshape(batch_size,784)
        dis_real_out = dis(inp_real_x)
        dis_real_loss = criteriond1(dis_real_out,
                              Variable(torch.ones(batch_size,1)))
        dis_real_loss.backward()

        # training discriminator on data produced by generator
        inp_fake_x_gen = make_some_noise()
        #output from generator is generated        
        dis_inp_fake_x = gen(inp_fake_x_gen).detach()
        dis_fake_out = dis(dis_inp_fake_x)
        dis_fake_loss = criteriond1(dis_fake_out,
                                Variable(torch.zeros(batch_size,1)))
        dis_fake_loss.backward()

        optimizerd1.step()



    # training generator
    for g_step in range(g_steps):
        gen.zero_grad()

        #generating data for input for generator
        gen_inp = make_some_noise()

        gen_out = gen(gen_inp)
        dis_out_gen_training = dis(gen_out)
        gen_loss = criteriond2(dis_out_gen_training,
                               Variable(torch.ones(batch_size,1)))
        gen_loss.backward()

        optimizerd2.step()

    if epoch%printing_steps==0:
        plot_img(gen_out[0])
        plot_img(gen_out[1])
        plot_img(gen_out[2])
        plot_img(gen_out[3])
        print("\n\n")

On running the code, the following error is shown:

 File "mygan.py", line 105, in <module>
    dis_real_out = dis(inp_real_x)
    RuntimeError: size mismatch, m1: [4 x 784], m2: [4 x 784] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:136

How can I resolve this?

I got the code from https://blog.usejournal.com/train-your-first-gan-model-from-scratch-using-pytorch-9b72987fd2c0

The error hints that the tensor you fed into the discriminator has an incorrect shape. Now let's try to find out what the shape of the tensor is, and what shape is expected.

The tensor itself has a shape of [batch_size x 784] because of the reshape operation above. The discriminator network, on the other hand, expects a tensor whose last dimension is 4. This is because the first layer in the discriminator network is nn.Linear(inp, 784), where inp = 4.

A linear layer nn.Linear(input_size, output_size) expects the final dimension of the input tensor to be equal to input_size, and produces output with the final dimension projected to output_size. In this case, it expects an input tensor of shape [batch_size x 4] and outputs a tensor of shape [batch_size x 784].
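To make the shape rule concrete, here is a minimal sketch (not from the original post) showing how nn.Linear(4, 784) behaves with both input shapes; the exact error wording varies between PyTorch versions, but the failure is the same:

import torch
import torch.nn as nn

layer = nn.Linear(4, 784)          # expects the last dimension to be 4

x_ok = torch.rand(4, 4)            # [batch_size x 4]
print(layer(x_ok).shape)           # torch.Size([4, 784])

x_bad = torch.rand(4, 784)         # [batch_size x 784], like your reshaped MNIST batch
try:
    layer(x_bad)
except RuntimeError as e:
    print(e)                       # size / shape mismatch error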


And now to the real issue: the generator and discriminator that you defined have incorrect sizes. You seem to have changed the 300-dimensional size from the blog post to 784, which I assume is the size of your image (28 x 28 for MNIST). However, 300 is not the input size, but rather a "hidden state size" -- the model uses a 300-dimensional vector to encode your input image.

What you should do here is set the discriminator's input size to 784 and its output size to 1, because the discriminator makes a binary judgment: fake (0) or real (1). For the generator, the input size should equal the size of the "input noise" that you randomly generate, in this case 100, and the output size should be 784, because its output is a generated image, which should have the same size as the real data.

So, you only need to make the following changes to your code, and it should run smoothly:

gen = generator(100, 784)
dis = discriminator(784, 1)
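If you want a quick sanity check before rerunning the full training loop, this sketch (using the classes and make_some_noise from your script; the random real_batch just stands in for a reshaped MNIST batch) confirms the shapes now line up:

gen = generator(100, 784)
dis = discriminator(784, 1)

noise = make_some_noise()                 # [4 x 100]
fake_images = gen(noise)                  # generator maps 100 -> 784
print(fake_images.shape)                  # torch.Size([4, 784])

real_batch = torch.rand(batch_size, 784)  # stand-in for inp_real_x.reshape(batch_size, 784)
print(dis(real_batch).shape)              # torch.Size([4, 1]) -- matches torch.ones(batch_size, 1)
print(dis(fake_images).shape)             # torch.Size([4, 1])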
