pytorch - how to troubleshoot device (cpu / gpu) settings of tensors / models
I have a torch model that I'm trying to port from being CPU-only to device independent.
Setting the device parameter when creating tensors, or calling model.to(device) to move a full model to the target device, solves part of the problem, but some tensors get "left behind" (such as intermediate tensors created during the forward call).
Is there a way to detect these without using an interactive debugger? Something like tracing tensor creation, to allow discovering tensors that are created on the wrong device?
You could check the garbage collector:
import gc
import torch

s = torch.tensor([2], device='cuda:0')
t = torch.tensor([1])

for obj in gc.get_objects():
    if torch.is_tensor(obj):
        print(obj)
Output:
tensor([2], device='cuda:0')
tensor([1])
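Beyond inspecting live objects after the fact, recent PyTorch versions (1.12+) also expose torch.overrides.TorchFunctionMode, which intercepts every torch API call and lets you inspect the device of each result the moment it is created, which is closer to the "tracing tensor creation" you asked about. A minimal sketch, assuming a CPU-only run (DeviceTracer is a made-up helper name, not part of the torch API):

```python
import torch
from torch.overrides import TorchFunctionMode

class DeviceTracer(TorchFunctionMode):
    """Record every torch call whose result lands on an unexpected device."""

    def __init__(self, expected='cpu'):
        super().__init__()
        self.expected = torch.device(expected)
        self.strays = []  # (function, device) pairs for offending calls

    def __torch_function__(self, func, types, args=(), kwargs=None):
        # The mode is popped while this handler runs, so calling func
        # here does not recurse back into __torch_function__.
        out = func(*args, **(kwargs or {}))
        results = out if isinstance(out, (tuple, list)) else (out,)
        for r in results:
            if torch.is_tensor(r) and r.device != self.expected:
                self.strays.append((func, r.device))
        return out

# Everything below is created on the CPU, so nothing is flagged.
with DeviceTracer('cpu') as tracer:
    x = torch.ones(2)   # creation is intercepted: device matches, no report
    y = x + 1           # arithmetic is intercepted too

print(len(tracer.strays))
```

Running your forward pass inside the `with` block would record which call first produced a tensor on the wrong device, instead of only showing you the stray tensors afterwards.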