Can autograd handle repeated use of the same layer in the same depth of the computation graph?

I have a network which works as follows: the input is split in half; the first half is put through some convolutional layers l1, then the second half is put through the same layers l1 (after the output for the first half has been computed). The two output representations are then concatenated and put through additional layers l2 at once. My question is similar to Can autograd in pytorch handle a repeated use of a layer within the same module?, but the setting is not quite the same: there, the same layer was reused at different depths of the computation graph, whereas here the same layer is used twice at the same depth. Does autograd handle this properly? That is, is the backpropagation error for l1 computed with respect to both of its forward passes, and are its weights updated with respect to both passes at once?
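For concreteness, here is a minimal sketch of the architecture described above. The module name, the use of 1-D convolutions, and all layer sizes are illustrative assumptions, not taken from the question:

```python
import torch
import torch.nn as nn

class SplitSharedNet(nn.Module):
    """Hypothetical module matching the description: a shared block l1 is
    applied to each half of the input, and the outputs go through l2."""
    def __init__(self):
        super().__init__()
        # l1 is a single module instance, so both halves share its weights
        self.l1 = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU())
        self.l2 = nn.Conv1d(16, 1, 3, padding=1)

    def forward(self, x):                      # x: (N, 1, L)
        first, second = x.chunk(2, dim=-1)     # split the input in half
        out1 = self.l1(first)                  # first pass through l1
        out2 = self.l1(second)                 # second pass through the same l1
        return self.l2(torch.cat([out1, out2], dim=1))  # concat channel-wise
```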

Autograd does not care how many times you "use" a layer; that is not how it works. It just builds a graph of the dependencies behind the scenes: using a module twice simply produces a graph that is not a straight line, with two nodes sharing the same parameters, and this does not affect execution. During the backward pass, the gradients flowing back from both uses are accumulated (summed) into the shared parameters' .grad, so the weights are indeed updated with respect to both forward passes at once.
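A quick way to verify the accumulation: the gradient stored in the shared layer's weights after one backward pass through both uses equals the sum of the gradients from each use taken separately. A minimal check, with an illustrative linear layer standing in for l1:

```python
import torch
import torch.nn as nn

l1 = nn.Linear(4, 4, bias=False)  # shared layer, used twice at the same depth

x1 = torch.randn(1, 4)
x2 = torch.randn(1, 4)

# Two forward passes through the same layer, combined into one loss
loss = torch.cat([l1(x1), l1(x2)], dim=1).sum()
loss.backward()
grad_both = l1.weight.grad.clone()

# Gradient from each pass alone
l1.weight.grad = None
l1(x1).sum().backward()
grad_a = l1.weight.grad.clone()

l1.weight.grad = None
l1(x2).sum().backward()
grad_b = l1.weight.grad.clone()

print(torch.allclose(grad_both, grad_a + grad_b))  # True: gradients accumulate
```

The same accumulation happens for the convolutional l1 in the network above, so a single optimizer step adapts its weights with respect to both halves of the input.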
