
How do I handle interleaved exceptions from different Gunicorn forks?

I have a Flask app running in a forked Gunicorn environment, but the stack traces are getting interleaved in the logfile. Can each fork have its own logfile? Or can each logger have exclusive access while writing to the log?

Can each fork have its own logfile?

Yes, although you probably don't need, or want, that. The easiest way to do this is to just stick os.getpid() somewhere in the filename.
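A minimal sketch of that approach (the `app-<pid>.log` naming and the `myapp` logger name are just examples):

```python
import logging
import os

# One logfile per process: the PID in the filename guarantees that
# each Gunicorn fork writes to its own file.
logfile = "app-%d.log" % os.getpid()

handler = logging.FileHandler(logfile)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("worker %d started", os.getpid())
```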

Or can each logger have exclusive access while writing to the log?

There are a few ways to do this, but the obvious one is to just replace the default threading.RLock in logging with a multiprocessing.RLock.

According to the docs, you do this by overriding createLock, acquire, and release. So:

import logging
import multiprocessing

class CrossProcessFileHandler(logging.FileHandler):
    def createLock(self):
        # Use a lock that works across processes, not just threads.
        self.lock = multiprocessing.RLock()

    def acquire(self):
        self.lock.acquire()

    def release(self):
        self.lock.release()

And now just use that instead of FileHandler.

Just make sure to initialize the logger in the parent process; if each child creates its own separate cross-process lock, that won't help anything.
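Here is a small POSIX-only demonstration of that point (it relies on the "fork" start method, so children inherit the logger that was configured, lock and all, in the parent before they started):

```python
import logging
import multiprocessing
import os
import tempfile

class CrossProcessFileHandler(logging.FileHandler):
    # Same handler as above: one multiprocessing.RLock shared by all forks.
    def createLock(self):
        self.lock = multiprocessing.RLock()

    def acquire(self):
        self.lock.acquire()

    def release(self):
        self.lock.release()

logfile = os.path.join(tempfile.mkdtemp(), "shared.log")

# The handler (and therefore the lock) is created here, in the parent,
# BEFORE forking, so every child inherits the very same lock object.
logger = logging.getLogger("shared")
logger.addHandler(CrossProcessFileHandler(logfile))
logger.setLevel(logging.INFO)

def worker(n):
    for i in range(100):
        logger.info("worker %d line %d", n, i)

# The "fork" start method is what Gunicorn uses; this demo is POSIX-only.
ctx = multiprocessing.get_context("fork")
procs = [ctx.Process(target=worker, args=(n,)) for n in range(4)]
for p in procs:
    p.start()
for p in procs:
    p.join()

with open(logfile) as f:
    lines = f.readlines()
print(len(lines))  # 400: every record arrives whole, none torn mid-line
```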


Note that if you care about cross-platform portability, the obvious trivial code might work as expected on POSIX but not on Windows. (I don't know enough about how Gunicorn works on Windows to guess…) But you can deal with that by just not locking on Windows, because, by default, FileHandler opens the file for exclusive access, writes, and closes, meaning the filesystem is already doing your locking for you. (This trick doesn't work on POSIX because there is no such thing as Windows-style exclusive access there. Or, rather, there are equivalents on most platforms and filesystems, but they're not portable, and you have to go out of your way to use them instead of getting them by default whether you want them or not.)


The implementation of acquire and release for all built-in handlers, from CPython 2.3 to 3.3 and in every alternate implementation, has always just been this:

if self.lock:
    self.lock.acquire()

So, you'll see code that cheats by only overriding createLock. I've done that multiple times myself, and I've seen it in various third-party projects. But really, the documentation doesn't guarantee that behavior, so you should override the other two as well.

@abarnert's solution works very well, but it requires subclassing every Handler used in the project. It can be simplified with a class decorator:

import logging
import logging.handlers
import multiprocessing

def multiprocess_handler(cls):
    # Wrap any Handler subclass so it uses a cross-process lock.
    class MultiProcessHandler(cls):
        def createLock(self):
            self.lock = multiprocessing.RLock()
    return MultiProcessHandler

MFileHandler = multiprocess_handler(logging.FileHandler)
MRotatingFileHandler = multiprocess_handler(logging.handlers.RotatingFileHandler)
# etc.
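Usage is then the same as with the stock handler classes; a sketch (the `app.log` filename and `myapp` logger name are illustrative), remembering to do this in the parent process so all forks share the same lock:

```python
import logging
import logging.handlers
import multiprocessing

def multiprocess_handler(cls):
    # The factory from above, relying on the createLock-only shortcut.
    class MultiProcessHandler(cls):
        def createLock(self):
            self.lock = multiprocessing.RLock()
    return MultiProcessHandler

MRotatingFileHandler = multiprocess_handler(logging.handlers.RotatingFileHandler)

# Configure in the parent process, before Gunicorn forks the workers.
handler = MRotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("rotating handler with a cross-process lock is ready")
```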
