
Python custom signal handling in a process pool

I am dealing with the following problem:

I've implemented a dummy 'Thing' class that sleeps for 10 seconds and then logs a message ('foo'). The class is instantiated in a worker function for a process Pool, and its 'foo' method, which implements the logic above, is called.

What I want to achieve is custom signal handling: as long as the processes haven't terminated, if CTRL+C (SIGINT) is sent, each process should log the signal and terminate immediately.

Half of the logic works: while each process is sleeping, a SIGINT interrupts it and the Pool is closed.

Problem: if ALL the processes end successfully and SIGINT is then sent, the message is logged but the Pool is not closed.

Code:

import logging
import signal
import os
import time

from multiprocessing import Pool, current_process


logger = logging.getLogger('test')

SIGNAL_NAMES = dict((k, v) for v, k in reversed(sorted(signal.__dict__.items()))
                    if v.startswith('SIG') and not v.startswith('SIG_'))


class Thing(object):
    def __init__(self, my_id):
        self.my_id = my_id
        self.logger = logging.getLogger(str(my_id))

    def foo(self):
        time.sleep(10)
        self.logger.info('[%s] Foo after 10 secs!', self.my_id)


class Daemon(object):
    def __init__(self, no_processes, max_count):
        signal.signal(signal.SIGINT, self.stop)

        self.done = False
        self.count = 0
        self.max_count = max_count
        self.pool = Pool(no_processes, initializer=self.pool_initializer)

    def stop(self, signum, _):
        """ Stop function for Daemon """
        sig = SIGNAL_NAMES.get(signum) or signum
        logger.info('[Daemon] Stopping (received signal %s)', sig)
        self.done = True

    def _generate_ids(self):
        """ Generator function of the IDs for the Processes Pool """
        while not self.done:
            if self.count < self.max_count:
                my_id = "ID-{}".format(self.count)
                logger.info('[Daemon] Generated ID %s', my_id)
                time.sleep(3)
                yield my_id
                self.count += 1
        time.sleep(1)

    def run(self):
        """ Main daemon run function """
        pid = os.getpid()
        logger.info('[Daemon] Started running on PID %s', str(pid))
        my_ids = self._generate_ids()

        for res in self.pool.imap_unordered(run_thing, my_ids):
            logger.info("[Daemon] Finished %s", res or '')

        logger.info('[Daemon] Closing & waiting processes to terminate')
        self.pool.close()
        self.pool.join()

    def pool_initializer(self):
        """ Pool initializer function """
        signal.signal(signal.SIGINT, self.worker_signal_handler)

    @staticmethod
    def worker_signal_handler(signum, _):
        """ Signal handler for the Process worker """
        sig = SIGNAL_NAMES.get(signum) or signum
        cp = current_process()
        logger.info("[%s] Received in worker %s signal %s", WORKER_THING_ID or '', str(cp), sig)

        global WORKER_EXITING
        WORKER_EXITING = True


WORKER_EXITING = False
WORKER_THING_ID = None


def run_thing(arg):
    """ Worker function for processes """
    if WORKER_EXITING:
        return

    global WORKER_THING_ID
    WORKER_THING_ID = arg
    run_exception = None

    logger.info('[%s] START Thing foo-ing', arg)
    logging.getLogger('Thing-{}'.format(arg)).setLevel(logging.INFO)
    try:
        thing = Thing(arg)
        thing.foo()
    except Exception as e:
        run_exception = e
    finally:
        WORKER_THING_ID = None
    logger.info('[%s] STOP Thing foo-ing', arg)

    if run_exception:
        logger.error('[%s] EXCEPTION on Thing foo-ing: %s', arg, run_exception)

    return arg


if __name__ == '__main__':
    logging.basicConfig()
    logger.setLevel(logging.INFO)
    daemon = Daemon(4, 3)
    daemon.run()

Your problem is the logic in the function _generate_ids(). The function never ends, so pool.imap_unordered() never finishes on its own and can only be interrupted by CTRL-C: once self.count reaches self.max_count, the while not self.done loop keeps spinning without yielding anything, so the iterator never raises StopIteration and the for loop in run() never exits.

Change it for something like this:

def _generate_ids(self):
    """ Generator function of the IDs for the Processes Pool """

    for i in range(self.max_count):
        time.sleep(3)
        my_id = "ID-{}".format(self.count)
        logger.info('[Daemon] Generated ID %s', my_id)
        if self.done:
            break
        self.count += 1
        yield my_id

With a bounded generator, the iteration in run() completes and the processes end by themselves normally.
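The core point can be shown in isolation: imap_unordered() only finishes when the input iterable is exhausted, so a generator that stops yielding lets the loop terminate on its own. A minimal sketch (the names square and bounded_ids are illustrative, not from the code above):

```python
from multiprocessing import Pool


def square(x):
    """Trivial worker function standing in for run_thing()."""
    return x * x


def bounded_ids(n):
    """A generator that, unlike the original _generate_ids(),
    exhausts itself after n items."""
    for i in range(n):
        yield i


if __name__ == '__main__':
    with Pool(2) as pool:
        # The loop over imap_unordered ends as soon as the
        # generator is exhausted and all results have arrived.
        results = sorted(pool.imap_unordered(square, bounded_ids(4)))
    print(results)  # [0, 1, 4, 9]
```

If the generator instead looped forever without yielding, this iteration would block indefinitely, which is exactly the hang described in the question.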
