Logging UUID per API request in Python FastAPI

I have a pure Python package (let's call it main) that has a few functions for managing infrastructure. Alongside it, I have created a FastAPI service that can call into the main module to invoke functionality as needed.

For logging, I'm using loguru. On startup, the API creates a loguru instance, applies the settings, and sets a generic UUID (namely, [main]). On every incoming request to the API, a pre_request function generates a new UUID and calls loguru's configure with that UUID. At the end of the request, the UUID is set back to the default [main].

The problem I'm facing is that on concurrent requests the new UUID takes over, and all logs are then written with whichever UUID was configured last. Is there a way to instantiate the loguru module per request and make sure no cross-logging happens between API requests processed in parallel?

Implementation:

In __init__.py of the main package:

from loguru import logger

logger.remove()  # delete all existing default handlers
logger.add(filename, format=format, level=level, retention=retention, rotation=rotation)
logger.configure(extra={"uuid": "main"})

In all modules, the logger is imported as

from loguru import logger 

In the api/ package, on every new request, I have the code block below:

uuid = get_uuid()  # util function returning a new uuid
logger.configure(extra={"uuid": uuid})
# From here onwards, all log messages contain this uuid.
# At the end of the request, I configure it back to the default uuid (i.e. "main").

The configure method updates the root logger. I tried using the bind method instead, which according to the loguru docs can be used to contextualize extra record attributes, but it does not seem to have any effect (I still see the default UUID, i.e. "main"; only when I use .configure does the UUID get set).

Any ideas on how I should go about setting the UUID so that each concurrent request to the API gets its own? Since multiple sub-modules are called to serve one API request and all of them log something, I need the UUID to persist across all the modules for that request. It seems like I need a logger instance per API request, but I am not sure how to instantiate it correctly to make this work.

The current implementation works if the API is serving one request, but logging breaks when serving more than one call (since the UUID that gets logged is the last one that was configured).

I created this middleware, which, before routing the call, configures the logger instance with the UUID through a context manager:

from contextvars import ContextVar

from loguru import logger
from starlette.middleware.base import BaseHTTPMiddleware

_request_id: ContextVar[str] = ContextVar("request_id", default=None)


def get_request_id():
    return _request_id.get()


class ContextualizeRequest(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        uuid = get_uuid()  # util function returning a new uuid
        token = _request_id.set(uuid)  # set uuid on the context variable
        with logger.contextualize(uuid=get_request_id()):
            try:
                return await call_next(request)
            except Exception:
                logger.exception("Request failed")
                raise
            finally:
                _request_id.reset(token)  # reset() needs the token returned by set()
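This works under concurrency because ContextVar values are scoped per asyncio task: two overlapping requests each see their own value. A stdlib-only sketch of that isolation (no FastAPI involved; the task names are made up):

```python
import asyncio
from contextvars import ContextVar

request_id: ContextVar[str] = ContextVar("request_id", default="main")


async def handle(uuid: str, results: dict) -> None:
    token = request_id.set(uuid)      # visible only within this task's context
    await asyncio.sleep(0.01)         # yield so the two tasks interleave
    results[uuid] = request_id.get()  # still this task's own value
    request_id.reset(token)


async def main() -> dict:
    results: dict = {}
    await asyncio.gather(handle("req-1", results), handle("req-2", results))
    return results


print(asyncio.run(main()))  # each task reads back only its own uuid
```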

The only caveat is that if you use multithreading, multiprocessing, or create new event loops (in internal calls), the logger instance in that block will not carry this UUID.
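For the multithreading case specifically, one workaround is to carry the context over explicitly with contextvars.copy_context() when handing work to a thread, since new threads start with an empty context. A stdlib-only sketch (the variable and worker names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from contextvars import ContextVar, copy_context

request_id: ContextVar[str] = ContextVar("request_id", default="main")


def worker() -> str:
    return request_id.get()  # reads whatever context it runs under


request_id.set("req-42")
with ThreadPoolExecutor() as pool:
    lost = pool.submit(worker).result()  # pool thread has a fresh context
    kept = pool.submit(copy_context().run, worker).result()  # context copied over

print(lost, kept)  # main req-42
```

The same pattern applies to loguru: re-enter logger.contextualize(...) inside the worker, or run it via the copied context, so the per-request UUID survives the handoff.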
