
PyTorch autograd — grad can be implicitly created only for scalar outputs

I am using the autograd tool in PyTorch and have found myself in a situation where I need to access the values in a 1D tensor by means of an integer index. Something like this:

import torch
from torch.autograd import Variable


def basic_fun(x_cloned):
    res = []
    for i in range(len(x)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    return Variable(torch.FloatTensor(res))


def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))

I am getting the following error message:

[tensor(1., grad_fn=<ThMulBackward>), tensor(4., grad_fn=<ThMulBackward>), tensor(9., grad_fn=<ThMulBackward>), tensor(16., grad_fn=<ThMulBackward>), tensor(25., grad_fn=<ThMulBackward>)]
Traceback (most recent call last):
  File "/home/mhy/projects/pytorch-optim/predict.py", line 74, in <module>
    print(get_grad(x_cloned, x))
  File "/home/mhy/projects/pytorch-optim/predict.py", line 68, in get_grad
    A.backward()
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

In general, I am a bit skeptical about how using the cloned version of a variable is supposed to keep that variable in the gradient computation. The variable itself is effectively not used in the computation of A, and so when you call A.backward(), it should not be part of that operation.
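For context, clone() is itself recorded in the autograd graph, so gradients computed through a clone do flow back to the original tensor. A minimal, self-contained sketch (not part of the code above) illustrating this:

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x.clone() ** 2).sum()   # the clone participates in the computation
y.backward()
print(x.grad)                # tensor([2., 4., 6.]) -- gradients reached x through the clone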

I would appreciate your help with this approach, or a better way to avoid losing the gradient history while still indexing through a 1D tensor with requires_grad=True!

**Edit (September 15):**

res is a list of zero-dimensional tensors containing the squared values of 1 to 5. To concatenate them into a single tensor containing [1.0, 4.0, ..., 25.0], I changed return Variable(torch.FloatTensor(res)) to return torch.stack(res, dim=0), which produces tensor([ 1., 4., 9., 16., 25.], grad_fn=<StackBackward>).
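For reference, a minimal sketch of the modified function as described (iterating over x_cloned directly; the torch.stack return is the only substantive change):

def basic_fun(x_cloned):
    res = []
    for i in range(len(x_cloned)):
        res.append(x_cloned[i] * x_cloned[i])
    # stacking preserves each element's grad_fn, unlike rebuilding a FloatTensor from the list
    return torch.stack(res, dim=0)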

However, I am getting this new error, caused by the A.backward() line.

Traceback (most recent call last):
  File "<project_path>/playground.py", line 22, in <module>
    print(get_grad(x_cloned, x))
  File "<project_path>/playground.py", line 16, in get_grad
    A.backward()
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 84, in backward
    grad_tensors = _make_grads(tensors, grad_tensors)
  File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 28, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs

In the basic_fun function, res already holds torch autograd Variables, so you don't need to convert it again, IMHO:

def basic_fun(x_cloned):
    res = []
    for i in range(len(x_cloned)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    # no need to wrap res again -- each element already carries a grad_fn
    # return Variable(torch.FloatTensor(res))
    return res[0]

def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad


x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))

I changed my basic_fun to the following, which resolved my problem:

def basic_fun(x_cloned):
    # accumulate into a single-element tensor so the output works with backward()
    res = torch.FloatTensor([0])
    for i in range(len(x_cloned)):
        res += x_cloned[i] * x_cloned[i]
    return res

This version returns a scalar value, so A.backward() works without an explicit gradient argument.
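Alternatively (not from the answers above), the stacked 1D output could be kept and reduced to a scalar at backward() time, or given an explicit gradient; a minimal sketch, assuming A is the output of the torch.stack version:

A = basic_fun(x_cloned)          # 1D tensor from the torch.stack version

# option (a): reduce to a scalar before backpropagating
A.sum().backward()

# option (b): or pass an explicit gradient with the same shape as A
# A.backward(torch.ones_like(A))

print(x.grad)                    # gradients flow back to x through the clone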
