How to conditionally construct a tensor from two other tensors in PyTorch on the GPU?
An example:
import torch
pred = torch.tensor([1,2,1,0,0], device='cuda:0')
correct = torch.tensor([1,0,1,1,0], device='cuda:0')
assigned = torch.tensor([1,2,2,1,0], device='cuda:0')
I want result = tensor([1,2,1,1,0], device='cuda:0').
Basically, where pred equals correct, the result should take the value from correct; otherwise it should take the value from assigned.
Additionally, I want to exclude this computation from the gradient calculation.
Is there a way to do this without iterating over the tensors?
torch.where does exactly what you want:
import torch
pred = torch.tensor([1,2,1,0,0], device='cuda:0')
correct = torch.tensor([1,0,1,1,0], device='cuda:0')
assigned = torch.tensor([1,2,2,1,0], device='cuda:0')
result = torch.where(pred == correct, correct, assigned)
print(result)
# >>> tensor([1, 2, 1, 1, 0], device='cuda:0')
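As a side note (my addition, not part of the original answer): torch.where is device-agnostic, so the same call works on CPU tensors, which is handy for quick checks without a GPU. The condition just has to be a boolean tensor broadcastable to the value tensors:

```python
import torch

# Same data as above, but on CPU.
pred = torch.tensor([1, 2, 1, 0, 0])
correct = torch.tensor([1, 0, 1, 1, 0])
assigned = torch.tensor([1, 2, 2, 1, 0])

# (pred == correct) is a boolean mask; where() picks elements from
# `correct` where the mask is True and from `assigned` where it is False.
result = torch.where(pred == correct, correct, assigned)
print(result)  # tensor([1, 2, 1, 1, 0])
```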
Since none of these tensors has requires_grad=True, nothing needs to be done to avoid gradient computation. Otherwise, you can do the following:
import torch
pred = torch.tensor([1.,2.,1.,0.,0.], device='cuda:0')
correct = torch.tensor([1.,0.,1.,1.,0.], device='cuda:0', requires_grad=True)
assigned = torch.tensor([1.,2.,2.,1.,0.], device='cuda:0', requires_grad=True)
with torch.no_grad():
    result = torch.where(pred == correct, correct, assigned)
print(result)
# >>> tensor([1., 2., 1., 1., 0.], device='cuda:0')
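To convince yourself that torch.no_grad() really keeps the operation out of the graph, you can inspect the result afterwards (a small CPU sketch, my addition, not from the original answer):

```python
import torch

pred = torch.tensor([1., 2., 1., 0., 0.])
correct = torch.tensor([1., 0., 1., 1., 0.], requires_grad=True)
assigned = torch.tensor([1., 2., 2., 1., 0.], requires_grad=True)

with torch.no_grad():
    result = torch.where(pred == correct, correct, assigned)

# Inside no_grad() no graph is recorded, so the result has no grad_fn
# and requires_grad is False, even though the inputs require grad.
print(result.requires_grad)  # False
print(result.grad_fn)        # None
```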
If you do not use torch.no_grad(), you will get:
result = torch.where(pred == correct, correct, assigned)
print(result)
# >>> tensor([1., 2., 1., 1., 0.], device='cuda:0', grad_fn=<SWhereBackward>)
You can then detach it from the computation graph with:
result = result.detach()
print(result)
# >>> tensor([1., 2., 1., 1., 0.], device='cuda:0')
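One caveat worth knowing (my addition): detach() does not copy the data. The detached tensor shares storage with the original, so in-place edits on one are visible through the other:

```python
import torch

a = torch.tensor([1., 2., 3.], requires_grad=True)
b = a.detach()          # same underlying storage, but cut from the graph

b[0] = 99.              # in-place edit on the detached tensor...
print(a)                # ...is visible through the original tensor as well
print(b.requires_grad)  # False
```

If you need an independent copy, use a.detach().clone() instead.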