
Can autograd handle repeated use of the same layer in the same depth of the computation graph?

I have a network that works as follows: the input is split in half; the first half is put through some convolutional layers l1, then the second half is put through the same layers l1 (after the output for the first half has been computed). The two output representations are then concatenated and passed through additional layers l2 at once. My question is similar to Can autograd in pytorch handle a repeated use of a layer within the same module?, but the setting is not quite the same: in the other question the same layer was reused at different depths of the computation graph, whereas here the same layer is used twice at the same depth. Does autograd handle this properly? That is, is the backpropagation error for l1 computed with respect to both of its forward passes, and are the weights updated with respect to both at once?
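For reference, here is a minimal sketch of the architecture described above; the channel sizes, kernel sizes, and the split along the width dimension are assumptions for illustration, not details from the question:

```python
import torch
import torch.nn as nn

class SharedHalvesNet(nn.Module):
    """Sketch: both halves of the input go through the same l1, then l2."""
    def __init__(self):
        super().__init__()
        # l1 is applied to each half of the input in turn (shared weights)
        self.l1 = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # l2 processes the concatenated representations
        self.l2 = nn.Sequential(
            nn.Conv2d(16, 4, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        first, second = x.chunk(2, dim=-1)           # split the input in half along width
        r1 = self.l1(first)                          # first pass through l1
        r2 = self.l1(second)                         # second pass through the same l1
        return self.l2(torch.cat([r1, r2], dim=1))   # concatenate along channels, then l2
```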

Autograd does not care how many times you "use" something; that is not how it works. It simply builds a graph of the dependencies behind the scenes. Using a layer twice just produces a graph that is not a straight line, but it does not affect execution: during the backward pass, the gradient contributions from every use of the layer are accumulated into its parameters.
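A small check of this behavior (a sketch, using an arbitrary linear layer rather than the asker's network): the gradient a shared layer receives when it is used twice in one forward pass equals the sum of the gradients it would receive from each use separately.

```python
import torch
import torch.nn as nn

# A single layer used twice in the same forward pass.
layer = nn.Linear(4, 4, bias=False)
a = torch.randn(1, 4)
b = torch.randn(1, 4)

# Use the layer on both inputs, then combine the results and backprop once.
(layer(a).sum() + layer(b).sum()).backward()
grad_both = layer.weight.grad.clone()

# Backprop through each use separately for comparison.
layer.weight.grad = None
layer(a).sum().backward()
grad_a = layer.weight.grad.clone()

layer.weight.grad = None
layer(b).sum().backward()
grad_b = layer.weight.grad.clone()

# The accumulated gradient matches the sum of the per-use gradients.
print(torch.allclose(grad_both, grad_a + grad_b))  # True
```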
