Conv3D model input tensor
I am new to PyTorch and I want to make a classifier for 3D DICOM MRIs. I want to use the pretrained resnet18 from the monai library, but I am confused about the input dimensions of the tensor. The shape of the images in my dataloader is [2, 160, 256, 256], where 2 is the batch_size, 160 is the number of DICOM slices for each patient, and 256x256 is the dimension of each slice. When I try to run the model I get this error: Expected 5-dimensional input for 5-dimensional weight [64, 3, 7, 7, 7], but got 4-dimensional input of size [2, 160, 256, 256] instead
If I unsqueeze the tensor before feeding it to the model I get: Given groups=1, weight of size [64, 3, 7, 7, 7], expected input[1, 2, 160, 256, 256] to have 3 channels, but got 2 channels instead. Can anybody help me figure this out?
You need to add a channel dimension for each volume (which is one for MRIs), and it must go at dim=1, not dim=0. Your second error comes from unsqueezing at dim 0, which turned the batch dimension into the channel dimension. Your input should have shape (2, 1, 160, 256, 256), i.e. (batch, channels, depth, height, width). Note that the weight [64, 3, 7, 7, 7] in the error message means the pretrained network was built for 3 input channels, so you also need to configure the model for 1 channel (or repeat your single channel three times).
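A minimal sketch of the fix, using a plain `nn.Conv3d` in place of the full monai resnet18 to keep it self-contained (the first layer of a 3D ResNet18 is a 7x7x7, stride-2 convolution like this one; with monai you would typically pass `n_input_channels=1` when constructing the network, or `x.repeat(1, 3, 1, 1, 1)` to match the 3-channel pretrained weights):

```python
import torch
import torch.nn as nn

# Batch from the dataloader: (batch, depth, height, width) = (2, 160, 256, 256)
x = torch.randn(2, 160, 256, 256)

# Insert the channel dimension at dim=1, NOT dim=0.
# unsqueeze(0) would give (1, 2, 160, 256, 256) and trigger the
# "expected ... 3 channels, but got 2 channels" error from the question.
x = x.unsqueeze(1)  # -> (2, 1, 160, 256, 256)

# A first conv layer configured for 1 input channel now accepts the tensor.
conv = nn.Conv3d(in_channels=1, out_channels=64,
                 kernel_size=7, stride=2, padding=3)
out = conv(x)
print(out.shape)  # torch.Size([2, 64, 80, 128, 128])
```

If you want to reuse the 3-channel pretrained weights instead, repeat the single channel with `x = x.repeat(1, 3, 1, 1, 1)` after the unsqueeze, giving shape (2, 3, 160, 256, 256).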