pack_padded_sequence gives error when used with GPU
I am trying to set up an RNN capable of utilizing a GPU, but pack_padded_sequence gives me a
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor
Here is how I set up GPU computation:
import argparse

import torch

parser = argparse.ArgumentParser(description='Trainer')
parser.add_argument('--disable-cuda', action='store_true',
                    help='Disable CUDA')
args = parser.parse_args()
args.device = None
if not args.disable_cuda and torch.cuda.is_available():
    args.device = torch.device('cuda')
    torch.set_default_tensor_type(torch.cuda.FloatTensor)
else:
    args.device = torch.device('cpu')
Here is the relevant part of the code:
def Tensor_length(track):
    """Finds the length of the non-zero part of a padded track tensor."""
    return int(torch.nonzero(track).shape[0] / track.shape[1])
.
.
.
def forward(self, tracks, leptons):
    self.rnn.flatten_parameters()
    # list of event lengths
    n_tracks = torch.tensor([Tensor_length(tracks[i])
                             for i in range(len(tracks))])
    sorted_n, indices = torch.sort(n_tracks, descending=True)
    sorted_tracks = tracks[indices].to(args.device)
    sorted_leptons = leptons[indices].to(args.device)
    # import pdb; pdb.set_trace()
    output, hidden = self.rnn(pack_padded_sequence(sorted_tracks,
                                                   lengths=sorted_n.cpu().numpy(),
                                                   batch_first=True))  # this gives the error
    combined_out = torch.cat((sorted_leptons, hidden[-1]), dim=1)
    out = self.fc(combined_out)  # add lepton data to the matrix
    out = self.softmax(out)
    return out, indices  # passing indices for reorganizing truth
I have tried everything from casting sorted_n to a long tensor to passing it as a plain list, but I always seem to get the same error. I have not worked with PyTorch on a GPU before, and any advice would be greatly appreciated.
Thanks!
I assume you are using the GPU, probably on Google Colab. Check your device:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
You may be able to solve this error by downgrading the torch version; if you are using Colab, the following command will help you:
!pip install torch==1.6.0 torchvision==0.7.0
Once you downgrade torch, this padded-lengths error should go away.
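If downgrading is not an option: on recent PyTorch versions the lengths passed to pack_padded_sequence must be a 1-D int64 tensor on the CPU, even when the padded input itself lives on the GPU. A minimal sketch of a call that satisfies that requirement (the tensors here are made up for illustration and are not from the original code):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Padded batch of two sequences (batch_first=True), feature size 1.
padded = torch.tensor([[[1.0], [2.0], [3.0]],
                       [[4.0], [5.0], [0.0]]])

# lengths must be 1-D, int64, and on the CPU -- regardless of where
# the padded input tensor is placed.
lengths = torch.tensor([3, 2], dtype=torch.int64).cpu()

packed = pack_padded_sequence(padded, lengths, batch_first=True)
print(packed.data.shape)  # torch.Size([5, 1]) -- 3 + 2 valid timesteps
```

In the question's forward method, the same idea would mean keeping sorted_n on the CPU (e.g. sorted_n.cpu()) rather than letting torch.set_default_tensor_type(torch.cuda.FloatTensor) place it on the GPU.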