
AWS IAM Policies to connect AWS Cloudwatch Logs, Kinesis Firehose, S3 and ElasticSearch

I am trying to stream the AWS Cloudwatch logs to ES via Kinesis Firehose. The Terraform code below is giving an error. Any suggestions? The error is:

  • aws_cloudwatch_log_subscription_filter.test_kinesis_logfilter: 1 error(s) occurred:
  • aws_cloudwatch_log_subscription_filter.test_kinesis_logfilter: InvalidParameterException: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.

 resource "aws_s3_bucket" "bucket" { bucket = "cw-kinesis-es-bucket" acl = "private" } resource "aws_iam_role" "firehose_role" { name = "firehose_test_role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "firehose.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } resource "aws_elasticsearch_domain" "es" { domain_name = "firehose-es-test" elasticsearch_version = "1.5" cluster_config { instance_type = "t2.micro.elasticsearch" } ebs_options { ebs_enabled = true volume_size = 10 } advanced_options { "rest.action.multi.allow_explicit_index" = "true" } access_policies = <<CONFIG { "Version": "2012-10-17", "Statement": [ { "Action": "es:*", "Principal": "*", "Effect": "Allow", "Condition": { "IpAddress": {"aws:SourceIp": ["xxxxx"]} } } ] } CONFIG snapshot_options { automated_snapshot_start_hour = 23 } tags { Domain = "TestDomain" } } resource "aws_kinesis_firehose_delivery_stream" "test_stream" { name = "terraform-kinesis-firehose-test-stream" destination = "elasticsearch" s3_configuration { role_arn = "${aws_iam_role.firehose_role.arn}" bucket_arn = "${aws_s3_bucket.bucket.arn}" buffer_size = 10 buffer_interval = 400 compression_format = "GZIP" } elasticsearch_configuration { domain_arn = "${aws_elasticsearch_domain.es.arn}" role_arn = "${aws_iam_role.firehose_role.arn}" index_name = "test" type_name = "test" } } resource "aws_iam_role" "iam_for_lambda" { name = "iam_for_lambda" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "lambda.amazonaws.com" }, "Effect": "Allow", "Sid": "" } ] } EOF } resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" { name = "test_kinesis_logfilter" role_arn = "${aws_iam_role.iam_for_lambda.arn}" log_group_name = "loggorup.log" filter_pattern = "" destination_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}" }

In this configuration you are directing Cloudwatch Logs to send log records to Kinesis Firehose, which is in turn configured to write the data it receives to both S3 and ElasticSearch. Thus the AWS services you are using are talking to each other as follows:

Cloudwatch Logs talks to Kinesis Firehose, which in turn talks to both S3 and ElasticSearch.

In order for one AWS service to talk to another the first service must assume a role that grants it access to do so. In IAM terminology, "assuming a role" means to temporarily act with the privileges granted to that role. An AWS IAM role has two key parts:

  • The assume role policy, which controls which services and/or users may assume the role.
  • The policies controlling what the role grants access to. These decide what a service or user can do once it has assumed the role (see the sketch after this list).
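As a minimal sketch of what those two parts look like in Terraform (everything here is a placeholder: the example names, the someservice.amazonaws.com principal, and the someservice:SomeAction action are illustrative, not taken from the configuration in the question):

# Part 1: the assume role policy (the trust relationship) controls *who*
# may assume the role. The service principal here is a placeholder.
resource "aws_iam_role" "example" {
  name = "example-role"

  assume_role_policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Principal": {
          "Service": "someservice.amazonaws.com"
        }
      }
    ]
  })
}

# Part 2: the access policy controls *what* the role can do once assumed.
# The action and resource here are placeholders.
resource "aws_iam_role_policy" "example" {
  role = aws_iam_role.example.name

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["someservice:SomeAction"],
        "Resource": ["*"]
      }
    ]
  })
}

The concrete roles below follow exactly this shape, with real principals and actions filled in.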

Two separate roles are needed here. One role will grant Cloudwatch Logs access to talk to Kinesis Firehose, while the second will grant Kinesis Firehose access to talk to both S3 and ElasticSearch.

For the rest of this answer, I will assume that Terraform is running as a user with full administrative access to an AWS account. If this is not true, it will first be necessary to ensure that Terraform is running as an IAM principal that has access to create and pass roles.


Access for Cloudwatch Logs to Kinesis Firehose

In the example given in the question, the aws_cloudwatch_log_subscription_filter has a role_arn referring to a role whose assume_role_policy is for AWS Lambda, so Cloudwatch Logs does not have access to assume this role.

To fix this, the assume role policy can be changed to use the service name for Cloudwatch Logs:

resource "aws_iam_role" "cloudwatch_logs" {
  name = "cloudwatch_logs_to_firehose"
  assume_role_policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {
          "Service": "logs.us-east-1.amazonaws.com"
        },
        "Effect": "Allow",
        "Sid": "",
      },
    ],
  })
}

The above permits the Cloudwatch Logs service to assume the role. Now the role needs an access policy that permits writing to the Firehose Delivery Stream:

resource "aws_iam_role_policy" "cloudwatch_logs" {
  role = aws_iam_role.cloudwatch_logs.name

  policy = jsonencode({
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["firehose:*"],
        "Resource": [aws_kinesis_firehose_delivery_stream.test_stream.arn],
      },
    ],
  })
}

The above grants the Cloudwatch Logs service access to call into any Kinesis Firehose action as long as it targets the specific delivery stream created by this Terraform configuration. This is more access than is strictly necessary; for more information, see Actions and Condition Context Keys for Amazon Kinesis Firehose.
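If you would rather not use the wildcard, a narrower variant of the same policy is sketched below. The assumption here is that log delivery only needs the record-writing actions, firehose:PutRecord and firehose:PutRecordBatch, on the one delivery stream; it would be used in place of the policy above:

# Narrower variant: only the record-writing actions on the one
# delivery stream, rather than firehose:*.
resource "aws_iam_role_policy" "cloudwatch_logs" {
  role = aws_iam_role.cloudwatch_logs.name

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "firehose:PutRecord",
          "firehose:PutRecordBatch"
        ],
        "Resource": [aws_kinesis_firehose_delivery_stream.test_stream.arn]
      }
    ]
  })
}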

To complete this, the aws_cloudwatch_log_subscription_filter resource must be updated to refer to this new role:

resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
  name            = "test_kinesis_logfilter"
  role_arn        = aws_iam_role.cloudwatch_logs.arn
  log_group_name  = "loggorup.log"
  filter_pattern  = ""
  destination_arn = aws_kinesis_firehose_delivery_stream.test_stream.arn

  # Wait until the role has required access before creating
  depends_on = [aws_iam_role_policy.cloudwatch_logs]
}

Unfortunately, due to the internal design of AWS IAM, it can often take several minutes for a policy change to come into effect after Terraform submits it, so sometimes a policy-related error will occur when trying to create a new resource that uses a policy very soon after the policy itself was created. In this case, it's often sufficient to simply wait 10 minutes and then run Terraform again, at which point it should resume where it left off and retry creating the resource.
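If waiting and re-running becomes tedious, one workaround is to insert an explicit delay between creating the policy and creating the subscription filter. The sketch below is an assumption on my part rather than anything from the original configuration: it uses the time_sleep resource from the hashicorp/time provider, and would replace the subscription filter shown above:

terraform {
  required_providers {
    time = {
      source = "hashicorp/time"
    }
  }
}

# Give IAM some time to propagate the new policy before the
# subscription filter tries to use the role.
resource "time_sleep" "wait_for_iam" {
  depends_on      = [aws_iam_role_policy.cloudwatch_logs]
  create_duration = "30s"
}

resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
  name            = "test_kinesis_logfilter"
  role_arn        = aws_iam_role.cloudwatch_logs.arn
  log_group_name  = "loggorup.log"
  filter_pattern  = ""
  destination_arn = aws_kinesis_firehose_delivery_stream.test_stream.arn

  # Wait for both the policy and the extra propagation delay.
  depends_on = [time_sleep.wait_for_iam]
}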


Access for Kinesis Firehose to S3 and Amazon ElasticSearch

The example given in the question already has an IAM role with a suitable assume role policy for Kinesis Firehose:

resource "aws_iam_role" "firehose_role" {
  name = "firehose_test_role"

  assume_role_policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {
          "Service": "firehose.amazonaws.com"
        },
        "Effect": "Allow",
        "Sid": ""
      }
    ]
  })
}

The above grants Kinesis Firehose access to assume this role. As before, this role also needs an access policy that grants users of the role access to the target S3 bucket, the ElasticSearch domain, and Cloudwatch Logs:

resource "aws_iam_role_policy" "firehose_role" {
  role = aws_iam_role.firehose_role.name

  policy = jsonencode({
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": [aws_s3_bucket.bucket.arn]
      },
      {
        "Effect": "Allow",
        "Action": ["es:ESHttpGet"],
        "Resource": ["${aws_elasticsearch_domain.es.arn}/*"]
      },
      {
        "Effect": "Allow",
        "Action": [
            "logs:PutLogEvents"
        ],
        "Resource": [
            "arn:aws:logs:*:*:log-group:*:log-stream:*"
        ]
      },
    ],
  })
}

The above policy allows Kinesis Firehose to perform any action on the created S3 bucket, any action on the created ElasticSearch domain, and to write log events into any log stream in Cloudwatch Logs. The final part is not strictly necessary, but is important if logging is enabled for the Firehose Delivery Stream, since otherwise Kinesis Firehose is unable to write error logs back to Cloudwatch Logs.

Again, this is more access than strictly necessary. For more information on the specific actions each of these services supports, see AWS Service Actions and Condition Context Keys for Use in IAM Policies.
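If you do want to narrow this down, a rough sketch of a tighter policy for the same role follows. The action lists are based on the example delivery policies in the AWS documentation and are an assumption on my part, so verify them against the current documentation for your use case; this would be used in place of the aws_iam_role_policy.firehose_role shown above:

resource "aws_iam_role_policy" "firehose_role" {
  role = aws_iam_role.firehose_role.name

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:AbortMultipartUpload",
          "s3:GetBucketLocation",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:ListBucketMultipartUploads",
          "s3:PutObject"
        ],
        "Resource": [
          aws_s3_bucket.bucket.arn,
          "${aws_s3_bucket.bucket.arn}/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "es:DescribeElasticsearchDomain",
          "es:DescribeElasticsearchDomains",
          "es:DescribeElasticsearchDomainConfig",
          "es:ESHttpGet",
          "es:ESHttpPost",
          "es:ESHttpPut"
        ],
        "Resource": [
          aws_elasticsearch_domain.es.arn,
          "${aws_elasticsearch_domain.es.arn}/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": ["logs:PutLogEvents"],
        "Resource": ["arn:aws:logs:*:*:log-group:*:log-stream:*"]
      }
    ]
  })
}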

Since this single role has access to write to both S3 and to ElasticSearch, it can be specified for both of these delivery configurations in the Kinesis Firehose delivery stream:

resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name        = "terraform-kinesis-firehose-test-stream"
  destination = "elasticsearch"

  s3_configuration {
    role_arn           = aws_iam_role.firehose_role.arn
    bucket_arn         = aws_s3_bucket.bucket.arn
    buffer_size        = 10
    buffer_interval    = 400
    compression_format = "GZIP"
  }

  elasticsearch_configuration {
    domain_arn = aws_elasticsearch_domain.es.arn
    role_arn   = aws_iam_role.firehose_role.arn
    index_name = "test"
    type_name  = "test"
  }

  # Wait until access has been granted before creating the firehose
  # delivery stream.
  depends_on = [aws_iam_role_policy.firehose_role]
}

With all of the above wiring complete, the services should have the access they need to connect the parts of this delivery pipeline.

This same general pattern applies to any connection between two AWS services. The important information needed for each case is:

  • The service name for the service that will initiate the requests, such as logs.us-east-1.amazonaws.com or firehose.amazonaws.com. These are unfortunately generally poorly documented and hard to find, but can usually be found in policy examples within each service's user guide.
  • The names of the actions that need to be granted. The full set of actions for each service can be found in AWS Service Actions and Condition Context Keys for Use in IAM Policies. Unfortunately the documentation for exactly which actions are required for a given service-to-service integration is generally rather lacking, but in simple environments (notwithstanding any hard regulatory requirements or organizational policies around access) it usually suffices to grant access to all actions for a given service, using the wildcard syntax used in the above examples. A generic sketch of this two-part pattern follows the list.
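As a generic sketch of that pattern (again, every name, principal, and action below is a placeholder), the same two parts can also be written with the aws_iam_policy_document data source, which is an idiomatic alternative to the inline jsonencode and heredoc policies used above:

# Which service may assume the role (placeholder principal).
data "aws_iam_policy_document" "assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["someservice.amazonaws.com"]
    }
  }
}

# What the role may do once assumed (placeholder action and resource).
data "aws_iam_policy_document" "access" {
  statement {
    effect    = "Allow"
    actions   = ["someservice:SomeAction"]
    resources = ["*"]
  }
}

resource "aws_iam_role" "generic" {
  name               = "generic-service-to-service-role"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

resource "aws_iam_role_policy" "generic" {
  role   = aws_iam_role.generic.name
  policy = data.aws_iam_policy_document.access.json
}

Filling in the initiating service's principal and the target service's actions, as done for Cloudwatch Logs and Kinesis Firehose above, is then all that changes from one integration to the next.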
