

How can we use PyTorch Autograd for sequence optimization (in a for loop)?

I want to optimize a sequence in a for loop using PyTorch Autograd. I am using LBFGS.

loss = 0.0
for i in range(10):
    x = f(x, z[i])                   # x is updated at every step
    loss = loss + mse_loss(x, x_GT)  # accumulate the loss over the sequence

Say the sequence length is 10. I want to optimize x as well as z (z is a tensor array); these are learnable parameters. Note that x is updated in the loop.

x_GT is the ground truth data.

To run this, I have to call:

loss.backward(retain_graph=True)
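For reference, a minimal self-contained version of this setup might look like the following; the function f, the tensor shapes, and the initial values are made-up stand-ins, used here only so the snippet runs:

import torch
from torch import nn

# Hypothetical stand-in for the real transition function f
def f(x, z_i):
    return torch.tanh(x + z_i)

x_GT = torch.randn(5)                    # ground-truth target (made up)
x0   = nn.Parameter(torch.zeros(5))      # learnable initial state x
z    = nn.Parameter(torch.zeros(10, 5))  # learnable per-step inputs z
mse_loss = nn.MSELoss()

loss = 0.0
x = x0
for i in range(10):
    x = f(x, z[i])                       # x is updated at every step
    loss = loss + mse_loss(x, x_GT)

loss.backward(retain_graph=True)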

Is there a better way to do this (to make it run faster)?

The code you provided is actually perfectly fine:

loss = torch.zeros(1)
for i in range(10):
    x = f(x, z[i])
    loss += mse_loss(x, x_GT)

It will accumulate the loss over the loop steps. The backward pass only needs to be called once, though, so you are not required to retain the graph:

>>> loss.backward()

I don't believe that retaining the graph will make your code run any faster. It only adds to the memory load, since all activations on the graph have to be kept in expectation of a second backward pass.
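As a sketch of how this fits into the LBFGS setup from the question (again with f and the shapes assumed purely for illustration): each closure call rebuilds the graph from the leaf parameters, so a single backward() without retain_graph is enough:

import torch
from torch import nn

def f(x, z_i):                            # stand-in transition function
    return torch.tanh(x + z_i)

x_GT = torch.randn(5)                     # ground-truth target (made up)
x0   = nn.Parameter(torch.zeros(5))       # learnable initial state
z    = nn.Parameter(torch.zeros(10, 5))   # learnable per-step inputs
mse_loss = nn.MSELoss()
optimizer = torch.optim.LBFGS([x0, z], max_iter=20)

def closure():
    optimizer.zero_grad()
    x = x0                                # restart from the leaf parameter
    loss = torch.zeros(1)
    for i in range(10):
        x = f(x, z[i])
        loss += mse_loss(x, x_GT)
    loss.backward()                       # one backward per closure, no retain_graph
    return loss

optimizer.step(closure)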
