
Get Predictions from Trained Pytorch Model

I am using transfer learning to fine-tune an inception_v3 model. After training the model and saving the best version, I am trying to use it to generate predictions for my test set. Below is an example of my attempt on one image.

import torch
from PIL import Image
from torch.autograd import Variable
from torchvision import transforms

img_test = Image.open("img.png")

# Perform the same transformations on the image that the model used
transform_pipeline = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
img_test = transform_pipeline(img_test)

# I believe this adds a batch dimension of 1; from looking around online it seemed I needed it
img = img_test.unsqueeze(0)
img = Variable(img)

model_ft(img)

When I do the above I get:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

This seems to imply that my model weights are on the GPU while the input variable is on the CPU. How do I move one or the other over so I can run the model, or reference a tensor that lives on the other device?

As the error says, the input to the model (your img_test) is on the CPU.

Try moving the image to CUDA before sending it through your pre-trained model:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
img_test = img_test.to(device)
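
For the forward pass to work, the input tensor and the model weights have to be on the same device. Below is a minimal sketch of the full inference step, assuming model_ft is the fine-tuned inception_v3 from the question and img is the batched tensor built above; the eval()/no_grad() calls and the softmax/argmax at the end are standard inference practice added here for illustration, not something from the original answer.

import torch

# Put the model and the input on the same device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_ft = model_ft.to(device)
img = img.to(device)

# Switch to evaluation mode and disable gradient tracking for inference
model_ft.eval()
with torch.no_grad():
    output = model_ft(img)              # shape: [1, num_classes]
    probs = torch.softmax(output, dim=1)
    pred_class = probs.argmax(dim=1).item()

print(pred_class, probs[0, pred_class].item())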
