
Argument bottleneck in Python multiprocessing

I am using pool.apply_async to parallelize my code. One of the arguments I am passing in is a set that I have stored in memory. Given that I am passing in a reference to the set (rather than the set itself), does multiprocessing make a copy of the set for each CPU, or is the same reference passed to each CPU? I assume that since the set is stored in memory, each CPU receives its own copy. If that is not the case, would this be a bottleneck, since each CPU would request the same object?

In principle, when you create tasks for new processes, the data has to be copied over to the new processes, because processes (unlike threads) do not share memory. The details vary a bit from operating system to operating system (i.e. fork vs. spawn), but in general the data has to be copied over.
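A minimal sketch of this: apply_async pickles its arguments and sends them to the worker over a pipe, so the child receives its own copy of the set, and mutating it there does not affect the parent's set (the names mutate and data are made up for illustration):

    import multiprocessing as mp

    def mutate(s):
        # Runs in the child process; `s` is a pickled copy of the parent's set.
        s.add("added-in-child")
        return len(s)

    if __name__ == "__main__":
        data = {"a", "b", "c"}
        with mp.Pool(processes=2) as pool:
            result = pool.apply_async(mutate, (data,))
            print(result.get())  # 4 -- the child saw (and changed) its own copy
        print(data)              # {'a', 'b', 'c'} -- the parent's set is unchanged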

Whether that is the bottleneck depends on how much computation you are actually doing in the processes versus the amount of data that has to be transferred to the child processes. I suggest measuring three times: (1) when the parent submits the task, (2) when the computation actually starts inside the child process, and (3) when the computation ends. That gives you roughly the ramp-up (2 − 1) and the computation time (3 − 2). With these numbers you can best judge whether the data transfer or the computation is the bottleneck.
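A minimal sketch of that measurement, assuming wall-clock timestamps (time.time()) are comparable across processes on the same machine; work and its dummy computation are placeholders for your actual task:

    import multiprocessing as mp
    import time

    def work(data, submitted_at):
        started_at = time.time()                  # (2) computation starts in the child
        total = sum(hash(x) % 97 for x in data)   # stand-in for the real work
        finished_at = time.time()                 # (3) computation ends
        return submitted_at, started_at, finished_at, total

    if __name__ == "__main__":
        data = set(range(1_000_000))
        with mp.Pool(processes=2) as pool:
            submitted_at = time.time()            # (1) task handed to the pool
            res = pool.apply_async(work, (data, submitted_at))
            t1, t2, t3, _ = res.get()
        print(f"ramp-up (transfer + start): {t2 - t1:.3f}s")
        print(f"computation:                {t3 - t2:.3f}s")

If the ramp-up dominates, the set transfer is your bottleneck and you may want to look at shared-memory alternatives or larger work units per task.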

