
python - Logging failure with multiprocessing

I am trying to implement logging with multiprocessing for our application (Flask). We use Python 2.7. I am using a queue to collect log requests from all of the forks, and the log records present in the queue are then written out. I followed this approach. The only change from that link is that I am using TimedRotatingFileHandler instead of RotatingFileHandler. My dictConfig is loaded from a YAML file, as shown below.
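Purely as an illustration of the shape of that config (the file name, format string, and rotation settings here are hypothetical, and the real config registers the queue-based handler from the linked approach in front of the file handler), a dictConfig for a 'debuglog' logger backed by a TimedRotatingFileHandler could look like this in Python-dict form:

    import logging.config

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'default': {
                'format': '%(asctime)s %(process)d %(name)s %(levelname)s %(message)s',
            },
        },
        'handlers': {
            'timed_file': {
                'class': 'logging.handlers.TimedRotatingFileHandler',
                'filename': 'share/log_test/debug.log',  # hypothetical path
                'when': 'midnight',
                'backupCount': 7,
                'formatter': 'default',
            },
        },
        'loggers': {
            'debuglog': {
                'handlers': ['timed_file'],
                'level': 'DEBUG',
                'propagate': False,
            },
        },
    }

    logging.config.dictConfig(LOGGING)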

I am initializing the logger before initializing the forks, in the following way:

    import os
    import logging
    import logging.config

    import yaml
    from flask import Flask
    from tornado.wsgi import WSGIContainer
    from tornado.httpserver import HTTPServer
    from tornado.ioloop import IOLoop


    path = 'share/log_test/logging.yaml'
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = yaml.load(f.read())
        logging.config.dictConfig(config)

    logger = logging.getLogger('debuglog')  # problem starts if I keep this statement

    app = Flask(__name__)
    init_routes(app)  # initialize our routes
    server_conf = config_manager.load_config(key='server')
    logger.info("Logging is set up.")  # only this line gets logged; log statements made by the forks with the same logger never reach the file

    http_server = HTTPServer(WSGIContainer(app))

    http_server.bind(server_conf.get("PORT"))  # port to listen on
    http_server.start(server_conf.get("FORKS"))  # number of forks
    IOLoop.current().start()

The problem I am facing is that if I call getLogger before initializing the forks, the forks do not write any logs to the logfile; only log statements issued before the forks are created get logged. If I remove the logging.getLogger('debuglog') call, the forks log correctly.

I paused the execution flow and verified that the handler is assigned to the logger, and that seems to be fine.

Why is this strange behavior observed?

Update: when I use another logger that writes to the same file, everything works fine. But when I use the same logger, it does not work. Could this be related to an RLock?
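To make that concrete, here is a minimal sketch of the two cases (the 'debuglog2' name and the handler wiring in case 2 are hypothetical; only the behaviour is as observed):

    import logging
    import logging.handlers

    # Case 1 (fails): the 'debuglog' logger fetched before forking,
    # configured by the dictConfig above.
    logger = logging.getLogger('debuglog')
    logger.info("before the fork")   # written to the file
    # ... the same call made inside a forked worker produces nothing:
    logger.info("inside a fork")     # never reaches the file

    # Case 2 (works): a differently named logger writing to the same file.
    other = logging.getLogger('debuglog2')
    other.setLevel(logging.INFO)
    other.addHandler(logging.handlers.TimedRotatingFileHandler(
        'share/log_test/debug.log', when='midnight'))
    other.info("inside a fork")      # this does get written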

I finally found a workaround. I removed the queue from the implementation and now emit the record immediately, right where it is received, instead of sending it to the queue:

    def emit(self, record):
        try:
            s = self._format_record(record)
            self._handler.emit(record)  # emit here directly
            # self.send(s)              # stopped sending it to the queue
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)

This seems to work fine with the following test:

8 workers - 200 requests - 50 concurrency
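For context, the emit above lives on a handler that wraps the real file handler. Here is a sketch of that surrounding class, assuming the structure of the queue-based recipe linked in the question with the queue removed (class and attribute names follow that recipe, not my exact code):

    import logging
    import logging.handlers

    class MultiProcessingLog(logging.Handler):
        """Wraps a TimedRotatingFileHandler; emit() above writes through self._handler."""

        def __init__(self, filename, when='midnight', backup_count=7):
            logging.Handler.__init__(self)
            # the handler that actually writes and rotates the log file
            self._handler = logging.handlers.TimedRotatingFileHandler(
                filename, when=when, backupCount=backup_count)

        def setFormatter(self, fmt):
            logging.Handler.setFormatter(self, fmt)
            self._handler.setFormatter(fmt)

        def _format_record(self, record):
            # resolve args and exc_info eagerly so the record could be pickled
            # and sent over a queue (no longer needed once emit() writes directly)
            if record.args:
                record.msg = record.msg % record.args
                record.args = None
            if record.exc_info:
                self.format(record)
                record.exc_info = None
            return record

        def close(self):
            self._handler.close()
            logging.Handler.close(self)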
