
PyTorch autograd prevents script from terminating

Whenever I call autograd's backward, my script never terminates. backward is not blocking per se — all lines after it are still executed — but the script just does not terminate. It appears that some sort of worker thread hangs in the background, but I was not able to find any information about it.

I originally encountered the problem while training neural networks, however I eventually found a very short example with the same behavior:

import torch

# Leaf tensor that tracks gradients
x = torch.randn(3, requires_grad=True)
y = x * 2
print(y)

# Vector passed to backward() for the vector-Jacobian product
# (required because y is non-scalar)
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)

print("all done")

When I remove the backward line, the script finishes as expected. Otherwise I still see a process called python in the task manager; if I terminate it by hand, the script execution also terminates.
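One way to check the worker-thread hypothesis is to enumerate live threads just before the script ends: a live non-daemon thread keeps the interpreter from exiting. This is a stdlib-only diagnostic sketch (`live_threads` is a name introduced here for illustration):

```python
import threading

def live_threads():
    """Return (name, daemon) for every thread currently alive.
    Any non-daemon entry besides MainThread can block interpreter exit."""
    return [(t.name, t.daemon) for t in threading.enumerate()]

print(live_threads())
```

If the output after backward shows extra non-daemon threads that are absent when backward is removed, that would confirm a lingering background worker.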

I installed PyTorch on Windows 7 using conda ( conda create --name grad_test pytorch -c pytorch ) in the most recent stable version (Python 3.7, PyTorch 1.2.0).

The problem still persists; it appears to be specific to Windows 7.
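As a last-resort workaround (not a fix for the underlying hang), the interpreter can be forced to exit with os._exit once all work is done. os._exit bypasses atexit handlers and thread joins, so output buffers should be flushed first. The sketch below demonstrates the pattern in a child process so it can be run safely:

```python
import subprocess
import sys

# Child script: prints, flushes stdout, then hard-exits even if
# background threads are still alive.
child = (
    "import os, sys;"
    "print('all done');"
    "sys.stdout.flush();"
    "os._exit(0)"
)
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.returncode, result.stdout.strip())  # → 0 all done
```

In the real script, the equivalent would be calling sys.stdout.flush() followed by os._exit(0) as the final lines, at the cost of skipping normal interpreter cleanup.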
