
pytorch vgg model test on one image

I have trained a vgg model, and this is how I transform the test data:

from torchvision import datasets, transforms

test_transform_2 = transforms.Compose([transforms.RandomResizedCrop(224),
                                       transforms.ToTensor()])

test_data = datasets.ImageFolder(test_dir, transform=test_transform_2)

The model has finished training; now I want to test it on a single image:

from scipy import misc

test_image = misc.imread('flower_data/valid/1/image_06739.jpg')
vgg16(torch.from_numpy(test_image))

The error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-60-b83587325fea> in <module>
----> 1 vgg16(torch.from_numpy(test_image))

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torchvision\models\vgg.py in forward(self, x)
     40 
     41     def forward(self, x):
---> 42         x = self.features(x)
     43         x = x.view(x.size(0), -1)
     44         x = self.classifier(x)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
     89     def forward(self, input):
     90         for module in self._modules.values():
---> 91             input = module(input)
     92         return input
     93 

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

c:\users\sam\mydocu~1\code\envs\data-science\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    299     def forward(self, input):
    300         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 301                         self.padding, self.dilation, self.groups)
    302 
    303 

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got input of size [628, 500, 3] instead

I can tell that I need to reshape the input, but I don't know how, given that it seems to want the input to include the batch dimension.

Your image is [h, w, 3], where 3 is the rgb channels, but pytorch expects [b, 3, h, w], where b is the batch size. So you can reshape it by calling reshaped = img.permute(2, 0, 1).unsqueeze(0). I think there is also a utility function for this somewhere, but I can't find it at the moment.

So in your case:

tensor = torch.from_numpy(test_image).float()    # conv layers expect a float tensor, not uint8
reshaped = tensor.permute(2, 0, 1).unsqueeze(0)  # [h, w, 3] -> [1, 3, h, w]
your_result = vgg16(reshaped)
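Note also that misc.imread returns a uint8 array at the image's original resolution, which won't match the 224x224 crops the network was trained on. A minimal sketch (assuming vgg16 and test_transform_2 are defined as in the question) that reuses the same preprocessing on a single image:

from PIL import Image
import torch

img = Image.open('flower_data/valid/1/image_06739.jpg').convert('RGB')
batch = test_transform_2(img).unsqueeze(0)  # [3, 224, 224] -> [1, 3, 224, 224]

vgg16.eval()                   # inference mode: disables dropout
with torch.no_grad():          # no gradients needed for a forward pass
    output = vgg16(batch)
predicted_class = output.argmax(dim=1)

Since RandomResizedCrop takes a random crop on every call, for deterministic evaluation you would typically swap it for Resize followed by CenterCrop(224), plus the same normalization used during training, if any.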

