
How do I handle interleaved exceptions from different Gunicorn forks?

I have a Flask app running in a forked Gunicorn environment, but the stack traces are getting interleaved in the logfile. Can each fork have its own logfile? Or can each logger have exclusive access while writing to the log?

Can each fork have its own logfile?

Yes, although you probably don't need, or want, that. The easiest way to do this is to just stick os.getpid() somewhere in the filename.
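For example, a minimal sketch (the file name pattern is just an assumption; note that this has to run after the fork, which it will if gunicorn imports your app in each worker, i.e. without --preload):

import logging
import os

# Each worker gets its own file, e.g. app.12345.log, keyed by its PID.
handler = logging.FileHandler('app.{}.log'.format(os.getpid()))
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logging.getLogger().addHandler(handler)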

Or can each logger have exclusive access while writing to the log?

There are a few ways to do this, but the obvious one is to just replace the default threading.RLock in logging with a multiprocessing.RLock.

According to the docs, you do this by overriding createLock, acquire, and release. So:

import logging
import multiprocessing

class CrossProcessFileHandler(logging.FileHandler):
    """FileHandler whose lock is shared across forked processes."""
    def createLock(self):
        # Use a lock that works across processes, not just threads.
        self.lock = multiprocessing.RLock()
    def acquire(self):
        self.lock.acquire()
    def release(self):
        self.lock.release()

And now just use that instead of FileHandler.

Just make sure to initialize the logger in the parent process; if each child creates its own separate cross-process lock, that won't help anything.
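For example, a minimal sketch, assuming the handler is created at import time and gunicorn is run with --preload so the application module (and therefore the lock) is loaded once in the master before the workers fork; the module name, file name, and format are placeholders:

# app.py (hypothetical module); run with:
#   gunicorn --preload -w 4 app:app
import logging
from flask import Flask

app = Flask(__name__)

# Created in the master process, so every forked worker inherits
# the same multiprocessing.RLock inside the handler.
handler = CrossProcessFileHandler('app.log')
handler.setFormatter(logging.Formatter('%(asctime)s pid=%(process)d %(levelname)s %(message)s'))
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)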


Note that if you care about cross-platform portability, the obvious trivial code might work as expected on POSIX but not on Windows. (I don't know enough about how gunicorn works on Windows to guess…) But you can deal with that by just not locking on Windows, because, by default, FileHandler opens the file for exclusive access, writes, and closes, meaning the filesystem is already doing your locking for you. (This trick doesn't work on POSIX because there is no such thing as Windows-style exclusive access—or, rather, there are equivalents on most platforms and filesystems, but they're not portable, and you have to go out of your way to do it instead of getting it by default whether you want it or not.)
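If you want one class that covers both cases, here is a minimal sketch of that idea (skipping the lock only on Windows, as described above):

import logging
import multiprocessing
import sys

class CrossProcessFileHandler(logging.FileHandler):
    def createLock(self):
        # On Windows, rely on the exclusive file access described above
        # instead of a cross-process lock.
        self.lock = None if sys.platform == 'win32' else multiprocessing.RLock()
    def acquire(self):
        if self.lock:
            self.lock.acquire()
    def release(self):
        if self.lock:
            self.lock.release()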


The implementation of acquire and release in every built-in handler, from CPython 2.3 through 3.3 and in every alternate implementation, has always looked like this:

if self.lock:
    self.lock.acquire()

So, you'll see code that cheats by only overriding createLock. I've done that multiple times myself, and I've seen it in various third-party projects. But the documentation doesn't guarantee that behavior, so you should override the other two as well.

@abarnert's solution works very well; however, it requires subclassing every Handler used in the project. This can be simplified with a class decorator:

import logging
import logging.handlers
import multiprocessing

def multiprocess_handler(cls):
    """Wrap a handler class so its lock is shared across forked processes."""
    class MultiProcessHandler(cls):
        def createLock(self):
            self.lock = multiprocessing.RLock()
    return MultiProcessHandler

MFileHandler = multiprocess_handler(logging.FileHandler)
MRotatingFileHandler = multiprocess_handler(logging.handlers.RotatingFileHandler)
# etc.
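The wrapped classes are then used exactly like the stock handlers, for example (the file name and rotation settings here are placeholders):

handler = MRotatingFileHandler('app.log', maxBytes=10 * 1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s pid=%(process)d %(levelname)s %(message)s'))
logging.getLogger().addHandler(handler)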
