Python Multiprocessing - terminate / restart worker process

I have a bunch of long-running tasks that I want to split across multiple worker processes. That part I can do without a problem. The problem I run into is that sometimes these processes go into a hung state. To address this, I want to be able to set a time threshold for each task that a process is working on. When that threshold is exceeded, I want to restart the process or terminate the task.

Originally my code was very simple and used a process pool, but with a pool I could not figure out how to retrieve the processes inside it, let alone how to restart or terminate a process in the pool.

I have resorted to using a queue and Process objects, as illustrated in this example (https://pymotw.com/2/multiprocessing/communication.html#passing-messages-to-processes), with some changes.

My attempt to figure this out is in the code below. In its current state, the process does not actually get terminated. Beyond that, I cannot figure out how to get the process to move on to the next task once the current task is terminated. Any suggestions or help are appreciated; perhaps I am going about this the wrong way.

Thanks

    import multiprocess
    import time

    class Consumer(multiprocess.Process):
        def __init__(self, task_queue, result_queue, startTimes, name=None):
            multiprocess.Process.__init__(self)
            if name:
                self.name = name
            print('created process: {0}'.format(self.name))
            self.task_queue = task_queue
            self.result_queue = result_queue
            self.startTimes = startTimes

        def stopProcess(self):
            elapseTime = time.time() - self.startTimes[self.name]
            print('killing process {0} {1}'.format(self.name, elapseTime))
            self.task_queue.cancel_join_thread()
            self.terminate()
            # now want to get the process to start processing another job

        def run(self):
            '''
            start() invokes this method in a separate process.
            '''
            proc_name = self.name
            print(proc_name)
            while True:
                # pulling the next task off the queue and starting it
                # on the current process.
                task = self.task_queue.get()
                self.task_queue.cancel_join_thread()

                if task is None:
                    # Poison pill means shutdown
                    # print('%s: Exiting' % proc_name)
                    self.task_queue.task_done()
                    break
                self.startTimes[proc_name] = time.time()
                answer = task()
                self.task_queue.task_done()
                self.result_queue.put(answer)
            return

    class Task(object):
        def __init__(self, a, b, startTimes):
            self.a = a
            self.b = b
            self.startTimes = startTimes
            self.taskName = 'taskName_{0}_{1}'.format(self.a, self.b)

        def __call__(self):
            import time
            import os

            print('new job in process pid:', os.getpid(), self.taskName)

            if self.a == 2:
                time.sleep(20000) # simulate a hung process
            else:
                time.sleep(3) # pretend to take some time to do the work
            return '%s * %s = %s' % (self.a, self.b, self.a * self.b)

        def __str__(self):
            return '%s * %s' % (self.a, self.b)

    if __name__ == '__main__':
        # Establish communication queues:
        # tasks is the work queue; results receives completed work
        tasks = multiprocess.JoinableQueue()
        results = multiprocess.Queue()

        #parentPipe, childPipe = multiprocess.Pipe(duplex=True)
        mgr = multiprocess.Manager()
        startTimes = mgr.dict()

        # Start consumers
        numberOfProcesses = 4
        processObjs = []
        for processNumber in range(numberOfProcesses):
            processObj = Consumer(tasks, results, startTimes)
            processObjs.append(processObj)

        for process in processObjs:
            process.start()

        # Enqueue jobs
        num_jobs = 30
        for i in range(num_jobs):
            tasks.put(Task(i, i + 1, startTimes))

        # Add a poison pill for each process object
        for i in range(numberOfProcesses):
            tasks.put(None)

        # process monitor loop
        killProcesses = {}
        executing = True
        while executing:
            allDead = True
            for process in processObjs:
                name = process.name
                #status = consumer.status.getStatusString()
                status = process.is_alive()
                pid = process.ident
                elapsedTime = 0
                if name in startTimes:
                    elapsedTime = time.time() - startTimes[name]
                if elapsedTime > 10:
                    process.stopProcess()

                print "{0} - {1} - {2} - {3}".format(name, status, pid, elapsedTime)
                if  allDead and status:
                    allDead = False
            if allDead:
                executing = False
            time.sleep(3)

        # Wait for all of the tasks to finish
        #tasks.join()

        # Start printing results
        while num_jobs:
            result = results.get()
            print('Result:', result)
            num_jobs -= 1

A simpler solution than re-implementing Pool is to keep using a Pool and design a mechanism that times out the function you are running. For example:

from time import sleep
import signal

class TimeoutError(Exception):
    pass    

def handler(signum, frame):
    raise TimeoutError()

def run_with_timeout(func, *args, timeout=10, **kwargs):
    signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout)
    try:
        res = func(*args, **kwargs)
    except TimeoutError as exc:
        print("Timeout")
        res = exc
    finally:
        signal.alarm(0)
    return res


def test():
    sleep(4)
    print("ok")

if __name__ == "__main__":
    import multiprocessing as mp

    p = mp.Pool()
    print(p.apply_async(run_with_timeout, args=(test,),
                        kwds={"timeout":1}).get())

signal.alarm sets a timeout; when the timeout expires, it runs the handler, which stops the execution of your function.

Edit: if you are on a Windows system, it is a bit more complicated since signal does not implement SIGALRM. Another solution is to use the C-level Python API. This code was adapted from this SO answer for 64-bit systems. I have only tested it on Linux, but it should work the same on Windows.

import threading
import ctypes
from time import sleep


class TimeoutError(Exception):
    pass


def run_with_timeout(func, *args, timeout=10, **kwargs):
    interupt_tid = int(threading.get_ident())

    def interupt_thread():
        # Call the low level C python api using ctypes. tid must be converted 
        # to c_long to be valid.
        res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(interupt_tid), ctypes.py_object(TimeoutError))
        if res == 0:
            print(threading.enumerate())
            print(interupt_tid)
            raise ValueError("invalid thread id")
        elif res != 1:
            # "if it returns a number greater than one, you're in trouble,
            # and you should call it again with exc=NULL to revert the effect"
            ctypes.pythonapi.PyThreadState_SetAsyncExc(
                ctypes.c_long(interupt_tid), 0)
            raise SystemError("PyThreadState_SetAsyncExc failed")

    timer = threading.Timer(timeout, interupt_thread)
    try:
        timer.start()
        res = func(*args, **kwargs)
    except TimeoutError as exc:
        print("Timeout")
        res = exc
    else:
        timer.cancel()
    return res


def test():
    sleep(4)
    print("ok")


if __name__ == "__main__":
    import multiprocessing as mp

    p = mp.Pool()
    print(p.apply_async(run_with_timeout, args=(test,),
                        kwds={"timeout": 1}).get())
    print(p.apply_async(run_with_timeout, args=(test,),
                        kwds={"timeout": 5}).get())

I would generally recommend against subclassing multiprocessing.Process, as it makes the code harder to read.

I would rather encapsulate your logic in a function and run it in a separate process. This keeps the code cleaner and more intuitive.
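For instance, a minimal sketch of that approach (the work function, the queue plumbing, and the 10-second threshold are illustrative, not part of the original answer):

import multiprocessing as mp


def work(a, b):
    # the task logic lives in a plain, testable function
    return a * b


def worker(queue, a, b):
    queue.put(work(a, b))


if __name__ == "__main__":
    queue = mp.Queue()
    process = mp.Process(target=worker, args=(queue, 2, 3))
    process.start()
    process.join(timeout=10)  # wait at most 10 seconds for the task

    if process.is_alive():
        # the task hung: terminate the worker and move on
        process.terminate()
        process.join()
    else:
        print(queue.get())  # -> 6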

Nevertheless, rather than reinventing the wheel, I would recommend using one of the libraries that already solve the problem for you, such as Pebble or billiard.

The Pebble library, for example, makes it easy to set a timeout on processes running either standalone or in a Pool.

To run a function in a separate process with a timeout:

from pebble import concurrent
from concurrent.futures import TimeoutError

@concurrent.process(timeout=10)
def function(foo, bar=0):
    return foo + bar

future = function(1, bar=2)

try:
    result = future.result()  # blocks until results are ready
except TimeoutError as error:
    print("Function took longer than %d seconds" % error.args[1])

The same example, but with a process pool.

from pebble import ProcessPool

with ProcessPool(max_workers=5, max_tasks=10) as pool:
    future = pool.schedule(function, args=[1], timeout=10)

    try:
        result = future.result()  # blocks until results are ready
    except TimeoutError as error:
        print("Function took longer than %d seconds" % error.args[1])

In both cases, the timing-out process is terminated for you automatically.
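When you have many tasks, as in the question, pebble's ProcessPool.map applies the timeout to each element and lets you keep iterating past the items that hung. A sketch following the pattern documented for ProcessPool.map (the task function and the numbers are illustrative):

from concurrent.futures import TimeoutError
from pebble import ProcessPool


def task(n):
    return n * (n + 1)


if __name__ == "__main__":
    with ProcessPool(max_workers=4) as pool:
        future = pool.map(task, range(30), timeout=10)
        iterator = future.result()
        while True:
            try:
                result = next(iterator)  # next completed result, in order
            except StopIteration:
                break  # all results consumed
            except TimeoutError as error:
                print("task took longer than %d seconds" % error.args[1])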

For long-running processes and/or long iterators, spawned workers might hang after a while. To prevent this, there are two built-in techniques:

  • Restart workers after they have delivered maxtasksperchild tasks from the queue.
  • Pass a timeout to pool.imap.next(), catch the TimeoutError, and finish the rest of the work in another pool (a sketch of that recovery loop follows the final example below).

The following wrapper implements both, as a generator. It also works when replacing the stdlib multiprocessing with multiprocess.

import multiprocessing as mp


def imap(
    func,
    iterable,
    *,
    processes=None,
    maxtasksperchild=42,
    timeout=42,
    initializer=None,
    initargs=(),
    context=mp.get_context("spawn")
):
    """Multiprocessing imap, restarting workers after maxtasksperchild tasks to avoid zombies.

    Example:
        >>> list(imap(str, range(5)))
        ['0', '1', '2', '3', '4']

    Raises:
        mp.TimeoutError: if the next result cannot be returned within timeout seconds.

    Yields:
        Ordered results as they come in.
    """
    with context.Pool(
        processes=processes,
        maxtasksperchild=maxtasksperchild,
        initializer=initializer,
        initargs=initargs,
    ) as pool:
        it = pool.imap(func, iterable)
        while True:
            try:
                yield it.next(timeout)
            except StopIteration:
                return

To catch the TimeoutError:

>>> import time
>>> iterable = list(range(10))
>>> results = []
>>> try:
...     for i, result in enumerate(imap(time.sleep, iterable, processes=2, timeout=2)):
...         results.append(result)
... except mp.TimeoutError:
...     print("Failed to process the following subset of iterable:", iterable[i:])
Failed to process the following subset of iterable: [2, 3, 4, 5, 6, 7, 8, 9]
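To actually finish the rest of the work in another pool, as the second bullet above suggests, you can loop, dropping the item whose result timed out and resuming the tail with a fresh imap call. This is a sketch building on the wrapper above; skipping (rather than retrying) the hung item, and treating the first undelivered item as the hung one, are assumptions:

import time

iterable = list(range(10))
results = []
remaining = list(iterable)
while remaining:
    done = 0  # results delivered by the current pool
    try:
        for result in imap(time.sleep, remaining, processes=2, timeout=2):
            results.append(result)
            done += 1
        break  # the whole tail was processed
    except mp.TimeoutError:
        # drop the item that appears hung and restart on the tail in a new pool
        remaining = remaining[done + 1:]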
