
How to log to a single file with multiprocessing.Pool.apply_async

I can't get logging to a single file to work when using multiprocessing.Pool.apply_async. I'm trying to adapt this example from the Logging Cookbook, but it only works with multiprocessing.Process; passing the logging queue into apply_async doesn't seem to have any effect. I'd like to use a pool so that I can easily manage the number of simultaneous processes.

The following adapted example using multiprocessing.Process works fine for me, except that I don't get log messages from the main process, and I don't think it will scale well when I have 100 large jobs.

import logging
import logging.handlers
import numpy as np
import time
import multiprocessing
import pandas as pd
log_file = 'PATH_TO_FILE/log_file.log'

def listener_configurer():
    root = logging.getLogger()
    h = logging.FileHandler(log_file)
    f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
    h.setFormatter(f)
    root.addHandler(h)

# This is the listener process top-level loop: wait for logging events
# (LogRecords)on the queue and handle them, quit when you get a None for a
# LogRecord.
def listener_process(queue, configurer):
    configurer()
    while True:
        try:
            record = queue.get()
            if record is None:  # We send this as a sentinel to tell the listener to quit.
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)  # No level or filter logic applied - just do it!
        except Exception:
            import sys, traceback
            print('Whoops! Problem:', file=sys.stderr)
            traceback.print_exc(file=sys.stderr)


def worker_configurer(queue):
    h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
    root = logging.getLogger()
    root.addHandler(h)
    # send all messages, for demo; no other level or filter logic applied.
    root.setLevel(logging.DEBUG)


# This is the worker process top-level loop, which just logs ten events with
# random intervening delays before terminating.
# The print messages are just so you know it's doing something!
def worker_function(sleep_time, name, queue, configurer):
    configurer(queue)
    start_message = 'Worker {} started and will now sleep for {}s'.format(name, sleep_time)
    logging.info(start_message)
    time.sleep(sleep_time)
    success_message = 'Worker {} has finished sleeping for {}s'.format(name, sleep_time)
    logging.info(success_message)

def main_with_process():
    start_time = time.time()
    single_thread_time = 0.
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    workers = []
    for i in range(10):
        name = str(i)
        sleep_time = np.random.randint(10) / 2
        single_thread_time += sleep_time
        worker = multiprocessing.Process(target=worker_function,
                                         args=(sleep_time, name, queue, worker_configurer))
        workers.append(worker)
        worker.start()
    for w in workers:
        w.join()
    queue.put_nowait(None)
    listener.join()
    end_time = time.time()
    final_message = "Script execution time was {}s, but single-thread time was {}s".format(
        (end_time - start_time),
        single_thread_time
    )
    print(final_message)

if __name__ == "__main__":
    main_with_process()

But I can't get the following adaptation to work:

def main_with_pool():
    start_time = time.time()
    queue = multiprocessing.Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    pool = multiprocessing.Pool(processes=3)
    job_list = [np.random.randint(10) / 2 for i in range(10)]
    single_thread_time = np.sum(job_list)
    for i, sleep_time in enumerate(job_list):
        name = str(i)
        pool.apply_async(worker_function,
                         args=(sleep_time, name, queue, worker_configurer))

    queue.put_nowait(None)
    listener.join()
    end_time = time.time()
    print("Script execution time was {}s, but single-thread time was {}s".format(
        (end_time - start_time),
        single_thread_time
    ))

if __name__ == "__main__":
    main_with_pool()

I've tried many minor variations, using multiprocessing.Manager, multiprocessing.Queue, multiprocessing.get_logger, and apply_async.get(), but nothing has worked.

I would think there would be an off-the-shelf solution for this. Should I try Celery?

Thanks

Consider using two queues. The first queue is where you put the data for the workers. After a job finishes, each worker pushes its results onto the second queue. Then consume this second queue to write the log to the file.
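As a rough sketch of that two-queue idea (all names below are illustrative, not from the question), workers could drain a task queue and push messages onto a results queue, which the parent process then writes to the file:

import multiprocessing
import time

def worker(task_queue, result_queue):
    # Pull tasks until the None sentinel arrives, then exit.
    while True:
        task = task_queue.get()
        if task is None:
            break
        name, sleep_time = task
        time.sleep(sleep_time)
        result_queue.put('Worker {} slept for {}s'.format(name, sleep_time))

if __name__ == '__main__':
    task_queue = multiprocessing.Queue()
    result_queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(task_queue, result_queue))
             for _ in range(3)]
    for p in procs:
        p.start()
    n_jobs = 10
    for i in range(n_jobs):
        task_queue.put((str(i), 0.5))
    for _ in procs:
        task_queue.put(None)  # one sentinel per worker
    for p in procs:
        p.join()
    # Only the parent process touches the log file.
    with open('log_file.log', 'a') as fh:
        for _ in range(n_jobs):
            fh.write(result_queue.get() + '\n')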

There are actually two separate problems here:

  • You cannot pass a multiprocessing.Queue() object as an argument to a Pool-based function (you can pass it to a worker that you start directly, but not any "further in" than that).
  • You must wait for all the asynchronous workers to complete before you send the None through to your listener process.

To fix the first one, replace:

queue = multiprocessing.Queue(-1)

with:

queue = multiprocessing.Manager().Queue(-1)

since a manager-managed Queue() instance can be passed.

To fix the second one, either collect each result from each asynchronous call, or close the pool and wait for it, e.g.:

pool.close()
pool.join()
queue.put_nowait(None)

或更復雜的:

getters = []
for i, sleep_time in enumerate(job_list):
    name = str(i)
    getters.append(
        pool.apply_async(worker_function,
                     args=(sleep_time, name, queue, worker_configurer))
    )
while len(getters):
    getters.pop().get()
# optionally, close and join pool here (generally a good idea anyway)
queue.put_nowait(None)

(Note: you should probably also consider replacing put_nowait with the blocking version of put, and not using unbounded-length queues.)
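Putting those fixes together, a corrected main_with_pool might look like the sketch below (it reuses listener_process, listener_configurer, worker_function, and worker_configurer from the question, and uses the blocking put as suggested):

def main_with_pool():
    start_time = time.time()
    # A manager-backed queue can safely be passed through Pool.apply_async.
    queue = multiprocessing.Manager().Queue(-1)
    listener = multiprocessing.Process(target=listener_process,
                                       args=(queue, listener_configurer))
    listener.start()
    pool = multiprocessing.Pool(processes=3)
    job_list = [np.random.randint(10) / 2 for i in range(10)]
    single_thread_time = np.sum(job_list)
    for i, sleep_time in enumerate(job_list):
        name = str(i)
        pool.apply_async(worker_function,
                         args=(sleep_time, name, queue, worker_configurer))
    pool.close()
    pool.join()      # wait for every worker to finish before stopping the listener
    queue.put(None)  # blocking put of the sentinel
    listener.join()
    end_time = time.time()
    print("Script execution time was {}s, but single-thread time was {}s".format(
        (end_time - start_time),
        single_thread_time
    ))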

[Addendum] Regarding maxtasksperchild=1:
You don't really need it. The reason for the repeated messages is that you keep adding a QueueHandler to the child process's root logger: because pool processes are reused across tasks, worker_configurer runs more than once in the same process. The following code checks whether any handlers already exist before adding another one:

def worker_configurer(queue):
    root = logging.getLogger()
    # print(f'{root.handlers=}')
    if len(root.handlers) == 0:
        h = logging.handlers.QueueHandler(queue)
        root.addHandler(h)
        root.setLevel(logging.DEBUG)

