
Can't show info level logging in AWS Lambda

I'm trying to run a script in AWS Lambda and I want to output info-level logs to the console after the script runs. I've tried looking for help from this post on using logs in Lambda but haven't had any success. I think AWS CloudWatch is overriding my configuration shown below.


import logging
# log configuration
logging.basicConfig(
    format='%(levelname)s: %(message)s',
    level=logging.INFO,
    encoding="utf-8"
    )

I want to set the logging level to logging.INFO. How can I do this? The runtime is Python 3.9.

From my understanding, I think one fix would be to add this:

logging.getLogger().setLevel('INFO')

I believe that logging.basicConfig(level=...) affects the minimum log level at which logs show up in the console, but across all loggers. The call above explicitly sets the minimum enabled level for the root logger, i.e. logging.getLogger(). The enabled level of each logger determines at what level messages will actually be logged - otherwise, each call to a logger method like logging.info is basically a no-op.

So essentially, the basicConfig and setLevel calls are separate, but they work together to determine whether a library's logs are printed to the console. For example, you can set basicConfig(level='DEBUG') so that the debug-level logs for all libraries get printed out. But if you want to make an exception for one library, such as botocore, you can use logging.getLogger('botocore').setLevel('WARNING'), which sets the minimum enabled level for that library to WARNING, so only messages that library logs at or above that level are printed to the console.
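As a rough sketch of how the two calls interact (the botocore name is just an example of a chatty third-party library; the format string is arbitrary):

import logging

# Root logger: print DEBUG and above from every library to the console.
logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.DEBUG)

# Exception for one noisy library: only WARNING and above from botocore.
logging.getLogger('botocore').setLevel(logging.WARNING)

logging.getLogger(__name__).debug('printed - passes the root DEBUG level')
logging.getLogger('botocore').info("dropped - below botocore's WARNING level")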

You actually need to know that Python logging in Lambdas is a bit odd. If you are using the default logging module, then when a Lambda is invoked and its backend spins up, the runtime creates a logging handler there and attaches it under the name of the log group - not under the name attribute that basicConfig uses by default when it creates a logging handler.

As such, basicConfig, which tries to modify the handler based on that name attribute, does NOT find the handler created at Lambda start-up, and so will not update your settings. In your top-level Lambda handler file, however, getLogger and setLevel can see the one logging handler at that level (in the Lambda), so you are adjusting that handler yourself.

Therefore, if you just use getLogger() it works at the topmost level (the lambda handler and its file), because lambda_handler is imported into the backend code to run, so it can find the handler.

Any further imports, however, will look for the name path of lambda_handler and attach their logging handler to that name path - meaning their logging statements will NOT show up in CloudWatch.
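Put together, a minimal sketch of a handler module that works under this behaviour (the function body and messages are placeholders):

import logging

# Adjust the root logger the Lambda runtime has already configured,
# instead of trying to replace it with basicConfig.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info('info-level message that now reaches CloudWatch')
    return {'statusCode': 200}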

There are three solutions I have found:

  1. Use logger = getLogger() and setLevel in the lambda handler and its file. In any further files you import, do NOT use getLogger - instead just use import logging and logging.info(message) to force the logger to look for a default handler and use that. (Note: this is not ideal; you end up losing a lot of control over your log files.)

  2. If you cannot use any additional libraries, then you have to write some code in your lambda handler to check whether a logging handler already exists - if it does, grab it and adjust it as needed so the configuration propagates down to the rest of your imports (see the first sketch after this list). You can find code for that scattered around SO.

  3. If you can (and I generally dislike answers that say "use this library", so that's why this is answer three even though I love this library), use aws_lambda_powertools and its Logger module (see the second sketch after this list). This is a very powerful logging module designed to work with your existing Python logging statements and AWS handlers, and it's an open-source Python project from AWS themselves. It has a lot of great tools in it besides just the logger, but the logger is very, very good.
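A minimal sketch of option 2, assuming you only adjust whatever handler the Lambda runtime has already attached to the root logger:

import logging

root = logging.getLogger()
if root.handlers:
    # Reuse the handler the Lambda runtime created and just change the level.
    root.setLevel(logging.INFO)
    for handler in root.handlers:
        handler.setLevel(logging.INFO)
else:
    # No pre-configured handler (e.g. running locally): fall back to basicConfig.
    logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')

And a sketch of option 3 with aws_lambda_powertools (assuming the package is bundled with the function; the service name is arbitrary):

from aws_lambda_powertools import Logger

logger = Logger(service="example-service", level="INFO")

@logger.inject_lambda_context
def lambda_handler(event, context):
    logger.info("structured info-level log line")
    return {"statusCode": 200}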
