
Monitoring on AWS Lambda

My Python 3 Lambda functions process records from DynamoDB. I am printing every step of my Lambda execution to CloudWatch. Now I am at the stage of deploying and monitoring my Lambdas in production. Is there a way to see, in one consolidated view, which records have been processed by a Lambda?

I am also using X-Ray to understand how long my Lambdas take and which errors they raise. Besides measuring duration, invocations, and errors, I want a way to know how many records were processed. Thanks.
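
For context, a minimal sketch of the kind of handler described above (assuming a DynamoDB Streams trigger; the names and print statements are illustrative, not the actual code):

    import json

    def lambda_handler(event, context):
        # DynamoDB Streams delivers a batch of records per invocation.
        processed = 0
        for record in event.get("Records", []):
            keys = record.get("dynamodb", {}).get("Keys", {})
            print(f"processing record with keys: {json.dumps(keys)}")  # one log line per step
            # ... per-record business logic ...
            processed += 1
        # A single consolidated line per invocation makes counting easier later.
        print(f"processed {processed} records in this invocation")
        return {"processed": processed}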

You could use CloudWatch Logs and write to a custom log group and log stream.

You can change the group/stream names in the configuration when deploying to different stages.

Check out how to do it with boto3: Client.put_log_events.

You could check my NodeJS sample; the equivalent Python code would be even simpler and cleaner.

PS: Drop me a comment if you have any issues converting it.
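
For reference, a rough Python sketch of the boto3 Client.put_log_events approach described above. The log group and stream names are placeholders, and it assumes the Lambda's role is allowed to create and write to them:

    import time
    import boto3

    logs = boto3.client("logs")

    LOG_GROUP = "/my-app/processed-records"   # placeholder names, change per stage
    LOG_STREAM = "production"

    def ensure_destination():
        # Create the custom group/stream once; ignore "already exists" errors.
        try:
            logs.create_log_group(logGroupName=LOG_GROUP)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass
        try:
            logs.create_log_stream(logGroupName=LOG_GROUP, logStreamName=LOG_STREAM)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass

    def log_processed(record_ids):
        # One consolidated event per invocation listing what was processed.
        # Note: older versions of the CloudWatch Logs API also required passing
        # the sequenceToken returned by the previous put_log_events call.
        logs.put_log_events(
            logGroupName=LOG_GROUP,
            logStreamName=LOG_STREAM,
            logEvents=[{
                "timestamp": int(time.time() * 1000),
                "message": f"processed {len(record_ids)} records: {record_ids}",
            }],
        )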

While printing every line to the log might help you debug and troubleshoot your code, it is a very manual approach that does not scale. In addition, you would lose your mind going through endless logs.

In the serverless world (and specifically on AWS, where you have Lambda, DynamoDB, SQS, SNS, API Gateway, and lots of other resources), you should use tools that give you visibility into your architecture, allow you to troubleshoot issues quickly, and identify serverless-specific problems (timeouts, out-of-memory errors, etc.).

One thing you can try is streaming all your logs from CloudWatch to an external service such as an ELK stack, which will let you explore them easily.
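
If you go that route, the usual mechanism is a CloudWatch Logs subscription filter that forwards log events to a shipper (for example a Kinesis stream, or a Lambda that pushes to Elasticsearch). A sketch with boto3, where the destination and role ARNs are placeholders you would supply:

    import boto3

    logs = boto3.client("logs")

    # Placeholder ARNs - point these at your own log shipper and IAM role.
    DESTINATION_ARN = "arn:aws:kinesis:us-east-1:123456789012:stream/log-shipper"
    ROLE_ARN = "arn:aws:iam::123456789012:role/cwlogs-to-kinesis"

    logs.put_subscription_filter(
        logGroupName="/aws/lambda/my-function",   # the Lambda's log group
        filterName="ship-to-elk",
        filterPattern="",                         # empty pattern forwards everything
        destinationArn=DESTINATION_ARN,
        roleArn=ROLE_ARN,                         # required for Kinesis destinations
    )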

Otherwise, I recommend using a dedicated monitoring solution for serverless - there are several out there (our own Epsagon, IOpipe, Dashbird).
