
pack_padded_sequence gives error when used with GPU

I am trying to set up an RNN capable of utilizing a GPU, but pack_padded_sequence gives me a

RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor

Here is how I set up GPU computation:

parser = argparse.ArgumentParser(description='Trainer')
parser.add_argument('--disable-cuda', action='store_true',
                    help='Disable CUDA')
args = parser.parse_args()
args.device = None
if not args.disable_cuda and torch.cuda.is_available():
    args.device = torch.device('cuda')
    torch.set_default_tensor_type(torch.cuda.FloatTensor)
else:
    args.device = torch.device('cpu')

Here is the relevant part of the code:

def Tensor_length(track):
    """Finds the length of the non zero tensor"""
    return int(torch.nonzero(track).shape[0] / track.shape[1])
.
.
.
def forward(self, tracks, leptons):
        self.rnn.flatten_parameters()
        # list of event lengths
        n_tracks = torch.tensor([Tensor_length(tracks[i])
                                 for i in range(len(tracks))])
        sorted_n, indices = torch.sort(n_tracks, descending=True)
        sorted_tracks = tracks[indices].to(args.device)
        sorted_leptons = leptons[indices].to(args.device)
        # import pdb; pdb.set_trace()
        output, hidden = self.rnn(pack_padded_sequence(sorted_tracks,
                                                       lengths=sorted_n.cpu().numpy(),
                                                       batch_first=True)) # this gives the error

        combined_out = torch.cat((sorted_leptons, hidden[-1]), dim=1)
        out = self.fc(combined_out)  # add lepton data to the matrix
        out = self.softmax(out)
        return out, indices  # passing indices for reorganizing truth
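For context, `Tensor_length` assumes that padded timesteps are all-zero rows, so counting the non-zero entries and dividing by the feature width recovers the true sequence length. A minimal sketch with illustrative shapes (not the actual data dimensions):

```python
import torch

def Tensor_length(track):
    """Finds the length of the non zero tensor"""
    return int(torch.nonzero(track).shape[0] / track.shape[1])

track = torch.zeros(5, 3)    # 5 padded timesteps, 3 features per step
track[:2] = 1.0              # only the first 2 timesteps hold real data
print(Tensor_length(track))  # 2
```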

I have tried everything from casting sorted_n to a long tensor to passing it as a plain list, but I always seem to get the same error. I have not worked with PyTorch on a GPU before, and any advice will be greatly appreciated.

Thanks!

I assume you are using the GPU, probably on Google Colab. Check your device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device

You may solve this error by downgrading the torch version. If you are using Colab, the following command will help you:

!pip install torch==1.6.0 torchvision==0.7.0

Once you downgrade torch, this padded-lengths error will go away.
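As an alternative to downgrading: in recent PyTorch versions the `lengths` argument must be a 1D int64 tensor on the CPU (e.g. `sorted_n.cpu()` in the question's code), even when the data tensor lives on the GPU. A minimal sketch of a call that satisfies this requirement, with illustrative shapes:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Padded batch: 2 sequences, max length 3, 4 features each (batch_first layout).
padded = torch.zeros(2, 3, 4)
padded[0, :3] = 1.0  # first sequence: 3 real timesteps
padded[1, :2] = 1.0  # second sequence: 2 real timesteps

# lengths stays a 1D CPU int64 tensor, regardless of where `padded` lives
lengths = torch.tensor([3, 2], dtype=torch.int64)
packed = pack_padded_sequence(padded, lengths=lengths, batch_first=True)
print(packed.data.shape)  # torch.Size([5, 4]): 3 + 2 timesteps total
```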
