I am putting together a library for my beginner students, and I'm using the multiprocessing module in Python. The problem I ran into: how do I import and use a module that uses multiprocessing without causing an infinite loop of process creation on Windows?
As an example, suppose I have a module mylibrary.py:
# mylibrary.py
from multiprocessing import Process

class MyProcess(Process):
    def run(self):
        print "Hello from the new process"

def foo():
    p = MyProcess()
    p.start()
And a main program that calls this library:
# main.py
import mylibrary
mylibrary.foo()
If I run main.py on Windows, it tries to import main.py into the new process, meaning the code is executed again, which results in an infinite loop of process generation. I can fix it like so:
import mylibrary

if __name__ == "__main__":
    mylibrary.foo()
But this is pretty confusing for beginners, and moreover it seems like it shouldn't be necessary. The new process is being created in mylibrary, so why doesn't the new process just import mylibrary? Is there a way to work around this issue without having to change main.py?
I am using Python 2.7, by the way.
Windows doesn't have fork, so there's no way to make a new process that is an exact copy of the existing one. The child process therefore has to run your code again from the start, which means you need a way to distinguish the parent process from the child process, and the __name__ == "__main__" check is how that is done.
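To make that concrete, here is a minimal sketch of what happens on Windows (the file name demo.py and the work function are made up for illustration, not part of your code): the top-level code runs in both the parent and the re-imported child, but only the parent sees __name__ set to "__main__".

# demo.py  (hypothetical example file)
from multiprocessing import Process

# Top-level code runs in the parent and runs again when the child
# process re-imports this module, but __name__ differs between them.
print "top-level code, __name__ = %s" % __name__

def work():
    print "Hello from the child process"

if __name__ == "__main__":
    # Only the original (parent) process enters this block, so only one
    # child is ever started and the infinite loop is avoided.
    p = Process(target=work)
    p.start()
    p.join()

Running this on Windows should print the top-level line twice (once per process), while only one child process is ever started.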
This is covered in the docs here: http://docs.python.org/2/library/multiprocessing.html#windows
I don't know of another way to structure the code to avoid the fork bomb effect.