
Using join() in python multiprocessing

My question pertains to the code below:

import multiprocessing
import math
import time


def do_work():
    # Trivial CPU-bound work; each worker finishes almost instantly
    for i in range(1, 10):
        math.cos(i)


workers = [multiprocessing.Process(target=do_work) for _ in range(20)]

for t in workers:
    t.daemon = True
    t.start()

time.sleep(100)  # put here simply to indicate that main is busy doing something

for t in workers:
    print(t.name + " joining")
    t.join()

As you can see, my parent process sleeps for a long time before joining the child processes, while the child processes themselves finish very quickly.

My questions are:

Is it OK for the main process to wait a long time before joining the child processes, as in the example above? Is there a danger that a child process will become a zombie by the time the main process gets around to joining it? Is there a problem with this code? Is it bad in some way? How can I improve it?

My attempt:

I tried to study the behavior and it seemed OK to me, but I think I did once see a child process turn into a zombie; at least the ps output suggested that.

A process becomes a zombie when it has exited but its parent has not yet read (reaped) its exit status, which is what join() does under the hood. It stays a zombie until the parent reaps it or until the parent itself terminates, at which point init adopts the orphan and reaps it.
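As a minimal sketch of how to observe this (assuming a Unix-like system where the ps command is available; the function name quick_exit is just an illustration):

import multiprocessing
import subprocess
import time


def quick_exit():
    pass  # the child exits immediately


if __name__ == "__main__":
    p = multiprocessing.Process(target=quick_exit)
    p.start()
    time.sleep(1)  # the child has exited, but we haven't joined it yet

    # While the parent is alive and hasn't reaped the child, ps reports
    # the child's state as Z / <defunct> on Unix-like systems.
    subprocess.call(["ps", "-o", "pid,stat,comm", "-p", str(p.pid)])

    p.join()  # reaping the child removes the zombie entry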

It is totally fine if your parent process is busy doing something else and cannot call join() immediately after a child exits.

The issue arises when you keep creating processes without ever joining the old ones: their zombie entries accumulate and you will eventually run out of OS resources (process table slots, for example).
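If you do need to spawn workers continuously, one way to keep reaping them (a rough sketch, not the only pattern) is to call multiprocessing.active_children() periodically; per the standard library docs, it joins any already-finished children as a side effect:

import math
import multiprocessing
import time


def do_work():
    for i in range(1, 10):
        math.cos(i)


if __name__ == "__main__":
    for _ in range(5):  # imagine this loop running indefinitely
        for _ in range(20):
            multiprocessing.Process(target=do_work).start()

        # Side effect: joins children that have already finished,
        # so zombie entries don't accumulate.
        multiprocessing.active_children()
        time.sleep(1)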

In your specific example there's no harm at all: the child processes are set as daemons and you join them at a certain point. Keep in mind that join() blocks until the given process terminates, so you don't need to sleep in order to wait for it.
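Applied to your example, a sketch of the same script without the sleep (the daemon flag is dropped here because every worker is joined explicitly anyway):

import math
import multiprocessing


def do_work():
    for i in range(1, 10):
        math.cos(i)


if __name__ == "__main__":
    workers = [multiprocessing.Process(target=do_work) for _ in range(20)]

    for t in workers:
        t.start()

    # join() blocks until each worker exits; no sleep needed
    for t in workers:
        print(t.name + " joining")
        t.join()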
