
PyTorch AutoEncoder - Decoded output dimension not the same as input

I am building a custom autoencoder to train on a dataset. My model is as follows:

import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder,self).__init__()

        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=5, stride=2),
            nn.ReLU(inplace=True)
        )

        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=32, out_channels=3, kernel_size=3, stride=1),
            nn.ReLU(inplace=True)
        )


    def forward(self,x):
        x = self.encoder(x)
        print(x.shape)
        x = self.decoder(x)
        return x



def unit_test():
    num_minibatch = 16
    img = torch.randn(num_minibatch, 3, 512, 640).cuda(0)
    model = AutoEncoder().cuda()
    model = nn.DataParallel(model)
    output = model(img)
    print(output.shape)

if __name__ == '__main__':
    unit_test()

As you can see, my input dimension is (3, 512, 640), but the output after passing it through the decoder is (3, 507, 635). Am I missing something when adding the ConvTranspose2d layers?

Any help would be appreciated. Thanks

The mismatch is caused by the different output shapes of the ConvTranspose2d layers. You can add output_padding=1 to the first and third transposed convolution layers to solve this problem.

i.e. nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=2, output_padding=1) and nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=2, output_padding=1)
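
Putting that into the model, the decoder in __init__ becomes (only the two output_padding arguments change):

self.decoder = nn.Sequential(
    nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=2, output_padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=2),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=2, output_padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=3, stride=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(in_channels=32, out_channels=3, kernel_size=3, stride=1),
    nn.ReLU(inplace=True)
)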

As per the documentation:

When stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side.
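
To see why only the first and third transposed convolutions need it, you can work through the shape formulas (with padding=0 and dilation=1, Conv2d gives floor((in - k) / stride) + 1 and ConvTranspose2d gives (in - 1) * stride + k + output_padding). A quick sanity check in plain Python:

def conv_out(n, k, s):          # Conv2d output size, padding=0, dilation=1
    return (n - k) // s + 1

def deconv_out(n, k, s, op=0):  # ConvTranspose2d output size, padding=0, dilation=1
    return (n - 1) * s + k + op

# Encoder heights for a 512-pixel input: 512 -> 510 -> 508 -> 506 -> 251 -> 124 -> 60
h = 512
for k, s in [(3, 1), (3, 1), (3, 1), (5, 2), (5, 2), (5, 2)]:
    h = conv_out(h, k, s)

# The strided convs 506 -> 251 and 124 -> 60 each drop a fractional row to the
# floor; output_padding=1 on the transposed convs that undo them (the first and
# third in the decoder) puts that row back.
print(deconv_out(60, 5, 2))          # 123, but the encoder saw 124
print(deconv_out(60, 5, 2, op=1))    # 124
print(deconv_out(251, 5, 2))         # 505, but the encoder saw 506
print(deconv_out(251, 5, 2, op=1))   # 506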


Decoder layer shapes before adding output_padding:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
   ConvTranspose2d-1        [-1, 512, 123, 155]      13,107,712
              ReLU-2        [-1, 512, 123, 155]               0
   ConvTranspose2d-3        [-1, 256, 249, 313]       3,277,056
              ReLU-4        [-1, 256, 249, 313]               0
   ConvTranspose2d-5        [-1, 128, 501, 629]         819,328
              ReLU-6        [-1, 128, 501, 629]               0
   ConvTranspose2d-7         [-1, 64, 503, 631]          73,792
              ReLU-8         [-1, 64, 503, 631]               0
   ConvTranspose2d-9         [-1, 32, 505, 633]          18,464
             ReLU-10         [-1, 32, 505, 633]               0
  ConvTranspose2d-11          [-1, 3, 507, 635]             867
             ReLU-12          [-1, 3, 507, 635]               0

After adding output_padding:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
   ConvTranspose2d-1        [-1, 512, 124, 156]      13,107,712
              ReLU-2        [-1, 512, 124, 156]               0
   ConvTranspose2d-3        [-1, 256, 251, 315]       3,277,056
              ReLU-4        [-1, 256, 251, 315]               0
   ConvTranspose2d-5        [-1, 128, 506, 634]         819,328
              ReLU-6        [-1, 128, 506, 634]               0
   ConvTranspose2d-7         [-1, 64, 508, 636]          73,792
              ReLU-8         [-1, 64, 508, 636]               0
   ConvTranspose2d-9         [-1, 32, 510, 638]          18,464
             ReLU-10         [-1, 32, 510, 638]               0
  ConvTranspose2d-11          [-1, 3, 512, 640]             867
             ReLU-12          [-1, 3, 512, 640]               0
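
The tables above look like output from the torchsummary package. If you want to reproduce them yourself, something along these lines should work (this is a sketch assuming torchsummary is installed; the (1024, 60, 76) input size is the encoder's output shape for a 512x640 image):

from torchsummary import summary  # pip install torchsummary

model = AutoEncoder()
# Summarise only the decoder, feeding it the shape the encoder produces
# for a (3, 512, 640) input, i.e. (1024, 60, 76).
summary(model.decoder, input_size=(1024, 60, 76), device="cpu")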
