Python multiprocessing.Pool: how to join the results in a parallel way?
I have read Python multiprocessing.Pool: When to use apply, apply_async or map? It was helpful, but I still have a question of my own. In the following code, I want result_list.append(result) to happen in parallel: I would like the 4 worker processes to append results concurrently and then merge the 4 lists into a single list.
import multiprocessing as mp
import time

def foo_pool(x):
    time.sleep(2)
    return x*x

result_list = []

def log_result(result):
    # This is called whenever foo_pool(i) returns a result.
    # result_list is modified only by the main process, not the pool workers.
    result_list.append(result)

def apply_async_with_callback():
    pool = mp.Pool(4)
    for i in range(10):
        pool.apply_async(foo_pool, args=(i,), callback=log_result)
    pool.close()
    pool.join()
    print(result_list)

if __name__ == '__main__':
    apply_async_with_callback()
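As an aside on the code above: instead of a callback, you can keep the AsyncResult handles that apply_async returns and call .get() on each one. This gathers all worker results into a single list in the main process, in submission order. A minimal sketch (collect_results is a name chosen here for illustration):

```python
import multiprocessing as mp
import time

def foo_pool(x):
    time.sleep(0.1)
    return x * x

def collect_results():
    with mp.Pool(4) as pool:
        # Each apply_async call returns an AsyncResult immediately;
        # .get() blocks until that task's result is ready.
        async_results = [pool.apply_async(foo_pool, (i,)) for i in range(10)]
        return [r.get() for r in async_results]

if __name__ == '__main__':
    print(collect_results())
```

Because .get() is called in submission order, the merged list comes back as [0, 1, 4, ...] regardless of which worker finished first.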
A multiprocessing pool is the way to go here.
Below is some sample code that I hope helps. You can also look at another answer of mine for more details: How can I make my Python code run faster?
from multiprocessing import Pool
import time

def foo_pool(x):
    return x*x

def main():
    pool = Pool(4)
    sampleData = [x for x in range(9)]
    results = pool.map(foo_pool, sampleData)
    pool.close()
    pool.join()
    print(results)

if __name__ == '__main__':
    main()
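If you prefer the callback style from the question but still want a single ordered list, map_async is a middle ground: it invokes the callback exactly once, in the main process, with the complete result list in input order. A small sketch of that approach:

```python
from multiprocessing import Pool

def foo_pool(x):
    return x * x

def main():
    results = []
    pool = Pool(4)
    # map_async calls the callback once with the full ordered result
    # list, so results.extend receives [0, 1, 4, ...] in one shot.
    pool.map_async(foo_pool, range(9), callback=results.extend)
    pool.close()
    pool.join()
    return results

if __name__ == '__main__':
    print(main())
```

Like pool.map, this preserves input order, so no manual merging of per-worker lists is needed.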