Is there a way to limit how much gets submitted to a Pool of workers?
I have a pool of workers and use apply_async to submit work to them. I don't care about the result of the function applied to each item. The pool seems to accept any number of apply_async calls, no matter how much data there is or how far behind the workers fall.

Is there a way to make apply_async block as soon as a certain number of items are waiting to be processed? I'm sure that internally the pool is using a queue, so wouldn't it be trivial to just put a maximum size on that queue?

If this isn't supported, would it make sense to submit a bug report, since it seems very basic and should be trivial to add? It would be a shame to have to essentially re-implement Pool's entire logic just to get this to work.

Here is some very basic code:
from multiprocessing import Pool

def dowork(item):
    # process the item (for side effects, no return value needed)
    pass

pool = Pool(nprocesses)
for work in getmorework():
    # this should block if we already have too much work waiting!
    pool.apply_async(dowork, (work,))
pool.close()
pool.join()
Something like this?
import multiprocessing
import time

worker_count = 4
mp = multiprocessing.Pool(processes=worker_count)
workers = [None] * worker_count
work = getmorework()          # keep a single iterator over the incoming work

while True:
    try:
        for i in range(worker_count):
            # reuse a slot as soon as its previous task has finished
            if workers[i] is None or workers[i].ready():
                workers[i] = mp.apply_async(dowork, args=(next(work),))
    except StopIteration:
        break
    time.sleep(1)
I don't know how quickly you expect each worker to finish, so the time.sleep may or may not be necessary, or it might need to be a different length of time, or whatever.
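If the polling loop and the sleep feel too ad hoc, here is a rough sketch of another way to get blocking submission while keeping the Pool: throttle apply_async with a semaphore that is released from the result callback. It reuses dowork, getmorework and nprocesses from the question, and the limit of 8 pending items is just an arbitrary choice:

from multiprocessing import Pool
from threading import BoundedSemaphore

max_pending = 8                       # at most this many items submitted but not yet finished
pending = BoundedSemaphore(max_pending)

def release(result_or_error):
    # runs in the pool's result-handler thread whenever a task finishes (or fails)
    pending.release()

pool = Pool(nprocesses)
for work in getmorework():
    pending.acquire()                 # blocks once max_pending items are in flight
    pool.apply_async(dowork, (work,), callback=release, error_callback=release)
pool.close()
pool.join()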
An alternative might be to use a Queue directly:
from multiprocessing import Process, JoinableQueue
from time import sleep
from random import random

def do_work(i):
    print(f"worker {i}")
    sleep(random())
    print(f"done {i}")

def worker():
    while True:
        item = q.get()
        if item is None:
            break
        do_work(item)
        q.task_done()

def generator(n):
    for i in range(n):
        print(f"gen {i}")
        yield i

# 1 = allow generator to get this far ahead of the workers
q = JoinableQueue(1)

# 2 = maximum amount of parallelism
procs = [Process(target=worker) for _ in range(2)]
# and get them running
for p in procs:
    p.daemon = True
    p.start()

# schedule 10 items for processing
for item in generator(10):
    q.put(item)

# wait for jobs to finish executing
q.join()

# signal workers to finish up
for p in procs:
    q.put(None)

# wait for workers to actually finish
for p in procs:
    p.join()
Mostly stolen from the example in the documentation for Python's queue module:
https://docs.python.org/3/library/queue.html#queue.Queue.join
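The important part is the maxsize argument passed to JoinableQueue: once the queue is full, q.put blocks, so the generator never gets more than that many items ahead of the workers, which is exactly the blocking behaviour the question asks for. Raising the queue size and the number of Process workers trades memory for throughput.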