
Python Multiprocessing: pool.map vs using queues

I am trying to use the multiprocessing package for Python. Looking at tutorials, the clearest and most straightforward technique seems to be using pool.map, which lets the user specify the number of processes and pass pool.map a function and a list of values for that function to distribute across the CPUs. The other technique I have come across is using queues to manage a pool of workers. This answer does an excellent job explaining the difference between pool.map, pool.apply, and pool.apply_async, but what are the advantages and disadvantages of using pool.map versus using queues like in this example?
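For reference, a minimal sketch of the pool.map pattern described above (the worker function square and the input list are made up for illustration):

```python
from multiprocessing import Pool

def square(x):
    # Toy worker function; each call runs in a worker process.
    return x * x

if __name__ == "__main__":
    # Create a pool of 4 worker processes and distribute the inputs across them.
    with Pool(processes=4) as pool:
        results = pool.map(square, [1, 2, 3, 4, 5])
    print(results)  # [1, 4, 9, 16, 25]
```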

The pool.map technique is a "subset" of the technique with queues. That is, even if pool.map did not exist, you could easily implement it yourself using Pool and Queue. That said, using queues gives you much more flexibility in controlling your pool processes, i.e. you can make it so that particular types of messages are read only once per process's lifetime, control the pool processes' shutdown behaviour, etc.
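A rough sketch of the queue-based pattern, assuming a simple None sentinel to signal shutdown; the names worker, task_queue, and result_queue are purely illustrative:

```python
from multiprocessing import Process, Queue

def worker(task_queue, result_queue):
    # Each worker pulls tasks until it sees the None sentinel,
    # which gives explicit control over when it shuts down.
    for x in iter(task_queue.get, None):
        result_queue.put(x * x)

if __name__ == "__main__":
    task_queue, result_queue = Queue(), Queue()
    processes = [Process(target=worker, args=(task_queue, result_queue))
                 for _ in range(4)]
    for p in processes:
        p.start()

    for x in [1, 2, 3, 4, 5]:
        task_queue.put(x)
    for _ in processes:
        task_queue.put(None)  # one sentinel per worker

    results = [result_queue.get() for _ in range(5)]
    for p in processes:
        p.join()
    print(sorted(results))  # [1, 4, 9, 16, 25]
```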

If you're really looking for the "clearest and most straightforward technique", using concurrent.futures.ProcessPoolExecutor is probably the easiest way. It has a map method as well as some other primitives that make it very usable. It is also compatible with Queues.
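For comparison, a minimal sketch of the ProcessPoolExecutor equivalent (again using a made-up square function):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    # max_workers plays the same role as the process count in Pool.
    with ProcessPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(square, [1, 2, 3, 4, 5]))
    print(results)  # [1, 4, 9, 16, 25]
```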
