
Inputting numpy array images into a PyTorch neural net

I have a numpy array representation of an image, and I want to turn it into a tensor so I can feed it through my PyTorch neural network.

From what I understand, the network expects a tensor shaped [3,100,100] rather than [100,100,3], the pixel values must be rescaled, and the images must be batched.
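The channel-ordering requirement can be checked with a quick sketch (a zero-filled toy array stands in for a real image):

```python
import numpy as np

img = np.zeros((100, 100, 3), dtype=np.uint8)  # HWC layout, as cv2.imread returns
chw = np.transpose(img, (2, 0, 1))             # reorder to CHW
print(chw.shape)  # (3, 100, 100)
```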

So I did the following:

import cv2
import numpy as np
import torch

my_img = cv2.imread('testset/img0.png')
my_img.shape  # returns (100, 100, 3): a 3-channel image at 100x100 resolution
my_img = np.transpose(my_img, (2, 0, 1))
my_img.shape  # returns (3, 100, 100)
# convert the numpy array to a tensor
my_img_tensor = torch.from_numpy(my_img)
# rescale to [0, 1], like the data the network was trained on by default
my_img_tensor *= (1/255)
# turn the tensor into a batch of size 1
my_img_tensor = my_img_tensor.unsqueeze(0)
# send the image to the GPU (note: .to() is not in-place, so reassign)
my_img_tensor = my_img_tensor.to(device)
# forward pass through my neural network
net(my_img_tensor)

But this returns the error:

RuntimeError: _thnn_conv2d_forward is not implemented for type torch.ByteTensor

The problem is that the input you are feeding the network is of type ByteTensor, while conv operations are implemented only for floating-point types. Try the following:

my_img_tensor = my_img_tensor.type('torch.DoubleTensor')
# for converting to double tensor

Source: PyTorch forums

Thanks to AlbanD
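Putting the fix together, here is a minimal sketch; a random array and a toy `Conv2d` layer stand in for the real image and `net`. Since model weights default to float32, converting with `.float()` is usually a better match than a DoubleTensor (which would require a double-precision model):

```python
import numpy as np
import torch

my_img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)  # stand-in for cv2.imread
my_img = np.transpose(my_img, (2, 0, 1))                           # HWC -> CHW
my_img_tensor = torch.from_numpy(my_img).float() / 255.0           # float dtype avoids the ByteTensor error
my_img_tensor = my_img_tensor.unsqueeze(0)                         # batch of size 1
conv = torch.nn.Conv2d(3, 8, kernel_size=3)                        # toy layer standing in for `net`
out = conv(my_img_tensor)
print(out.shape)  # torch.Size([1, 8, 98, 98])
```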

