
pytorch - how to troubleshoot device (cpu \ gpu) settings of tensors \ models

I have a torch model that I'm trying to port from CPU to be device independent.

Setting the device parameter when creating tensors, or calling model.to(device) to move a full model to the target device, solves part of the problem, but there are some "left behind" tensors (like variables created during the forward call).

Is there a way to detect these without using an interactive debugger? Something like tracing tensor creation to allow discovery of tensors that are created on the wrong device?
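For illustration, a minimal sketch of the kind of "left behind" tensor described above (the model and shapes are hypothetical; the point is the torch.zeros call inside forward that defaults to the CPU):

import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical example model
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # created on the CPU on every call, even after the module
        # itself has been moved with .to('cuda')
        mask = torch.zeros(x.shape[0], 4)
        return self.linear(x) + mask

model = MyModel().to('cuda')
x = torch.randn(2, 4, device='cuda')
model(x)  # raises a device-mismatch RuntimeError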

You could check the garbage collector:

import gc
import torch

s = torch.tensor([2], device='cuda:0')
t = torch.tensor([1])

# walk every object the garbage collector is tracking and
# print those that are torch tensors (their repr shows the device)
for obj in gc.get_objects():
    if torch.is_tensor(obj):
        print(obj)

Output:

tensor([2], device='cuda:0')
tensor([1])
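Building on that, a sketch of how you might flag tensors that ended up on the wrong device (the helper name find_stray_tensors and the hard-coded target device are assumptions, not part of the original answer):

import gc
import torch

def find_stray_tensors(target_device):
    # hypothetical helper: report any tensor tracked by the garbage
    # collector whose device differs from the intended one
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) and obj.device != target_device:
                print(type(obj).__name__, obj.device, tuple(obj.shape))
        except Exception:
            pass  # some tracked objects raise on attribute access

# compare against a fully specified device ('cuda:0', not just 'cuda'),
# since torch.device('cuda') != torch.device('cuda:0')
find_stray_tensors(torch.device('cuda:0'))

Running this right after a forward pass would surface, for example, CPU tensors created inside forward while the rest of the model lives on the GPU.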
