
Stream AWS CloudWatch Log Group to Multiple AWS Elasticsearch Services

Is there a way to stream an AWS CloudWatch Log Group to multiple Elasticsearch Services or Lambda functions?

AWS only seems to allow one ES or Lambda subscription, and I've tried everything at this point. I've even removed the ES subscription for the Log Group, created individual Lambda functions, and created the CloudWatch Logs trigger, but I can only apply the same CloudWatch Logs trigger to one Lambda function.

Here is what I'm trying to accomplish:

CloudWatch Log Group ABC -> No Filter -> Elasticsearch Service #1

CloudWatch Log Group ABC -> Filter: "XYZ" -> Elasticsearch Service #2

Basically, I need one ES cluster to store all logs, and another to hold only a subset of filtered logs.

Is this possible?

I've run into this limitation as well. I have two Lambdas (doing different things) that need to subscribe to the same CloudWatch Log Group.

What I ended up doing was creating one Lambda that subscribes to the Log Group and proxies the events into an SNS topic.

Those two Lambdas now subscribe to the SNS topic instead of the Log Group.

As for filtering events, you can implement that inside each subscriber Lambda.
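As a sketch of that per-subscriber filtering (using the "XYZ" pattern from the question; the event shape follows the logEvents format that CloudWatch Logs delivers, but the helper name is hypothetical):

```javascript
// Hypothetical per-subscriber filter: keep only the log events whose
// message contains the pattern this subscriber cares about.
function filterLogEvents(logEvents, pattern) {
  return logEvents.filter((e) => e.message.includes(pattern));
}

// Example with the shape CloudWatch Logs uses for logEvents:
const events = [
  { timestamp: 1, message: 'XYZ: payment failed' },
  { timestamp: 2, message: 'healthcheck ok' },
];
console.log(filterLogEvents(events, 'XYZ').length); // 1
```

Each subscriber applies its own pattern, so one SNS topic can feed both the "all logs" consumer (empty pattern) and the filtered one.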

It's not a perfect solution, but it's a functioning workaround until AWS allows multiple Lambdas to subscribe to the same CloudWatch Log Group.

This seems to be an AWS console limitation.

You can do it via the command line:

aws logs put-subscription-filter \
    --log-group-name /aws/lambda/testfunc \
    --filter-name filter1 \
    --filter-pattern "Error" \
    --destination-arn arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:SendToKinesis

You also need to add permissions so that CloudWatch Logs can invoke the destination (for a Lambda destination, via aws lambda add-permission).

Full detailed instructions:

http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html

Hope it helps.

I was able to resolve the issue with a bit of a workaround in the Lambda function, combined with the answer provided by Kannaiyan.

I created the subscription to ES via the console, then unsubscribed and modified the Lambda function's default code.

I declared two Elasticsearch endpoints:

var endpoint1 = '<ELASTICSEARCH ENDPOINT 1>';
var endpoint2 = '<ELASTICSEARCH ENDPOINT 2>';

Then I declared an array named "endpoint" containing endpoint1 and endpoint2:

var endpoint = [endpoint1, endpoint2];

I modified the "post" function, which calls the "buildRequest" function that references "endpoint":

function post(body, callback) {
  for (var index = 0; index < endpoint.length; ++index) {
    var requestParams = buildRequest(endpoint[index], body);
...

So every time the "post" function is called, it cycles through the array of endpoints.

Then I modified the buildRequest function, which is in charge of building the request. By default this function reads the endpoint variable, but since the "post" function now cycles through the array, I renamed "endpoint" to "endpoint_xy" to make sure it isn't reading the global variable and instead uses the value passed into the function:

function buildRequest(endpoint_xy, body) {
  var endpointParts = endpoint_xy.match(/^([^\.]+)\.?([^\.]*)\.?([^\.]*)\.amazonaws\.com$/);
...

Finally, I used the answer provided by Kannaiyan on using the AWS CLI to create the log subscription, but corrected a few variables:

aws logs put-subscription-filter \
    --log-group-name <LOG GROUP NAME> \
    --filter-name <FILTER NAME> \
    --filter-pattern <FILTER PATTERN> \
    --destination-arn <LAMBDA FUNCTION ARN>

I kept the filters completely open for now, but will code the filtering directly into the Lambda function as dashmug suggested. At least I can split one log group across two ES clusters.

Thank you, everyone!

As of September 2020, CloudWatch allows two subscription filters on a single CloudWatch Log Group, as well as multiple metric filters for a single Log Group.

Update: AWS posted on October 2, 2020, on their "What's New" blog that "Amazon CloudWatch Logs now supports two subscription filters per log group".

