RuntimeError: Given input size: (512x1x1). Calculated output size: (512x0x0). Output size is too small

I am working on cardiac CT images and implementing a 2D CNN model. I am trying to train a VGG model, but I keep running into this error. I know it is related to the layers, but as I am new to PyTorch I cannot figure out what is wrong. I would appreciate it if someone could guide me.

Here are the error and the code. My input shape is torch.Size([2, 1, 65, 65]).

Traceback (most recent call last):
  File "ct_pretrained.py", line 199, in <module>
    loss, metric = train(model, train_loader, optimizer)
  File "ct_pretrained.py", line 57, in train
    output = model(axial, sagittal, coronal, emr)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/heart_ct/torch/models/vgg.py", line 53, in forward
    sagittal_feature = self.sa_co_model(sagittal)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/miniconda/lib/python3.8/site-packages/torchvision/models/vgg.py", line 43, in forward
    x = self.features(x)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/modules/pooling.py", line 153, in forward
    return F.max_pool2d(input, self.kernel_size, self.stride,
  File "/root/miniconda/lib/python3.8/site-packages/torch/_jit_internal.py", line 267, in fn
    return if_false(*args, **kwargs)
  File "/root/miniconda/lib/python3.8/site-packages/torch/nn/functional.py", line 585, in _max_pool2d
    return torch.max_pool2d(
RuntimeError: Given input size: (512x1x3). Calculated output size: (512x0x1). Output size is too small


import torch
import torch.nn as nn
from torchvision import models

__all__ = ['Vgg']

class Vgg(nn.Module):

    def __init__(self, is_emr=False, mode='sum'):
        super().__init__()
        self.is_emr = is_emr
        self.mode = mode
        in_dim = 45

        self.axial_model = models.vgg13(pretrained=True)
        out_channels = self.axial_model.features[0].out_channels
        self.axial_model.features[0] = nn.Conv2d(1, out_channels, kernel_size=7, stride=1, padding=0, bias=False)
        #self.axial_model.features[3] = nn.MaxPool2d(kernel_size=2, stride=2)
        self.axial_model.features[3] = nn.MaxPool2d(1)

        #self.axial_model.features[0] = nn.Conv2d(1, out_channels, kernel_size=3, stride =3, padding=0, bias=False)
        #self.axial_model.features[3] = nn.MaxPool2d(2)
        #self.axial_model.features[3] = nn.MaxPool2d(2, stride=3)
        #self.axial_model.features[3] = nn.Conv2d(1, out_channels, kernel_size=7, stride=1, padding=0, bias=False)

        num_ftrs = self.axial_model.classifier[6].in_features
        self.axial_model.classifier[6] = nn.Linear(num_ftrs, 15)

        self.sa_co_model = models.vgg13(pretrained=True)
        self.sa_co_model.features[0] = nn.Conv2d(1, out_channels, kernel_size=7, stride=1, padding=(3,0), bias=False)
        self.sa_co_model.features[3] = nn.MaxPool2d(1)
        #self.axial_model.features[3] = nn.MaxPool2d(kernel_size=2, stride=2)
        #self.sa_co_model.features[3] = nn.MaxPool2d(2, stride=(3,1))
        #self.sa_co_model.features[3] = nn.Conv2d(1, out_channels, kernel_size=7, stride=1, padding=(3,0), bias=False)

        self.sa_co_model.classifier[6] = nn.Linear(num_ftrs, 15)

        if self.is_emr:
            self.emr_model = EMRModel()
            if self.mode == 'concat': in_dim = 90

        self.classifier = Classifier(in_dim)

        #print(self.classifier)
  
    def forward(self, axial, sagittal, coronal, emr=None):
        #print(axial.shape)
        axial = axial[:,:,:-3,:-3]
        sagittal = sagittal[:,:,:,:-3]
        coronal = coronal[:,:,:,:-3]

        
        axial_feature = self.axial_model(axial)
        sagittal_feature = self.sa_co_model(sagittal)
        coronal_feature = self.sa_co_model(coronal)
        out = torch.cat([axial_feature, sagittal_feature, coronal_feature], dim=1)
        out = self.classifier(out)

        if self.is_emr:
            emr_feature = self.emr_model(emr)
            out += emr_feature

        return axial_feature 

class EMRModel(nn.Module):

    def __init__(self):
        super().__init__()

        self.layer = nn.Sequential(
            nn.Linear(7, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(negative_slope=0.2),
            nn.Dropout(p=0.2, inplace=True),
            nn.Linear(256, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(negative_slope=0.2),
            nn.Dropout(p=0.2, inplace=True),
            nn.Linear(256, 5),
        )

    def forward(self, x):
        return self.layer(x)

class Classifier(nn.Module):

    def __init__(self, in_dim):
        super().__init__()

        self.layer = nn.Sequential(
            nn.Linear(in_dim, 5)
        )

    def forward(self, x):
        return self.layer(x)

class ConvBN(nn.Module):

    def __init__(self, in_dim, out_dim, **kwargs):
        super().__init__()

        self.layer = nn.Sequential(
            nn.Conv2d(in_dim, out_dim, bias=False, **kwargs),
            nn.BatchNorm2d(out_dim),
            nn.LeakyReLU(negative_slope=0.2))

    def forward(self, x):
        return self.layer(x) 

'''
if __name__ == "__main__":
     images = torch.randn(2,1,65,65)
     model = Vgg()
     out = model(images,images,images)

     model = models.vgg16(pretrained=True)
     for k, v in model.state_dict().items():
         print(k)
'''

Your input images (65x65 px) are too small for a VGG model, which usually expects inputs of 224x224 px. VGG13's feature extractor halves the spatial size at each of its five max-pool stages, and your replaced 7x7 first convolution with little or no padding, plus the crops in forward(), shrink it further. By the last pooling layer one spatial dimension has already collapsed to 1 pixel, so the next max-pool computes an output size of 0, which is exactly the error you see.
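
To see exactly where the feature map collapses, here is a minimal debugging sketch (assuming the Vgg class defined above is available) that pushes a cropped sagittal-sized tensor through the modified feature stack layer by layer and prints the spatial size:

import torch

# Debugging sketch: trace the spatial size through the modified VGG13
# feature stack to find the layer whose input is already too small.
# Assumes the Vgg class from the question is defined/importable.
model = Vgg()
x = torch.randn(2, 1, 65, 65)[:, :, :, :-3]  # the same crop forward() applies
for i, layer in enumerate(model.sa_co_model.features):
    try:
        x = layer(x)
    except RuntimeError as err:
        print(f"layer {i} ({layer.__class__.__name__}) failed: {err}")
        break
    print(f"layer {i:2d} {layer.__class__.__name__:11s} -> {tuple(x.shape[2:])}")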
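
The simplest fix is usually to bring the inputs up to the resolution the network was designed for. A sketch of that approach, assuming bilinear interpolation is acceptable for your CT slices:

import torch
import torch.nn.functional as F

# Sketch: upsample each view to the 224x224 resolution VGG expects before
# it enters the network. Bilinear mode is an assumption; bicubic or nearest
# would work as well.
def to_vgg_size(x, size=224):
    return F.interpolate(x, size=(size, size), mode='bilinear', align_corners=False)

axial = to_vgg_size(torch.randn(2, 1, 65, 65))
print(axial.shape)  # torch.Size([2, 1, 224, 224])

Alternatively, keep the 65x65 inputs but remove one or two of the stride-2 max-pool stages (and use symmetric padding in the replaced first convolution instead of cropping) so the feature map never shrinks below the pooling kernel.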
