
pytorch Why is Dataloader loaded dimensionally?

# Imports needed for the snippet
import numpy as np
import torchvision
from skimage import io
from torch.utils.data import DataLoader

img_path = 'G:/tiff/NC_H08_20220419_0600.tif'
img = io.imread(img_path).astype(np.float32)
print(img.shape)
data_tf = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
train_data = data_tf(img)   # ToTensor moves the channel axis to the front
print(train_data.shape)
train_loader = DataLoader(dataset=train_data, batch_size=1)
print(len(train_loader))

result: (2486, 2755, 16) torch.Size([16, 2486, 2755]) 16

I think len(train_loader) should be 1, but it is 16. I wonder why.

The DataLoader assumes you pass in a dataset, which is usually not a single piece of data. Therefore, it interprets the first dimension as the batch dimension. So, in your case, it assumes you have 16 pieces of 2D data.
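A minimal sketch of this behavior, using a small stand-in tensor instead of the real (16, 2486, 2755) image: a bare tensor already supports len() and indexing along its first dimension, which is all DataLoader needs, so each slice along dim 0 becomes one sample.

```python
import torch
from torch.utils.data import DataLoader

# Small stand-in for the (16, 2486, 2755) image tensor.
t = torch.zeros(16, 4, 5)

# A tensor indexes along dim 0, so DataLoader treats
# each t[i] (shape (4, 5)) as one sample.
loader = DataLoader(dataset=t, batch_size=1)
print(len(loader))      # 16 batches, one 2D slice each
batch = next(iter(loader))
print(batch.shape)      # torch.Size([1, 4, 5])
```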

To solve it, add a batch dimension to your train_data. (Or make a Dataset, but that seems like a hassle for your simple use case.)
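A sketch of the fix (again with a small stand-in tensor): unsqueeze a leading batch dimension so the loader sees the whole image as a single sample.

```python
import torch
from torch.utils.data import DataLoader

# Stand-in for the transformed image of shape (16, 2486, 2755).
img_tensor = torch.zeros(16, 4, 5)

# Add a batch dimension in front: shape becomes (1, 16, 4, 5).
train_data = img_tensor.unsqueeze(0)

train_loader = DataLoader(dataset=train_data, batch_size=1)
print(len(train_loader))    # 1 -- the whole image is one sample
```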

