
AWS Firehose cross region/account policy

I am trying to create Firehose streams that can receive data from different regions in Account A, through AWS Lambda, and output into a Redshift table in Account B. To do this I created an IAM role on Account A:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "firehose.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
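
If you are building this from the command line instead of the console, a minimal sketch of creating the role with that trust policy could look like the following (the role name matches the one referenced later in the stream definition, and the file name is just an assumption for the example):

# Run with Account A credentials; the JSON file holds the trust policy above.
aws iam create-role \
    --role-name AccountAXAccountBRole \
    --assume-role-policy-document file://firehose-trust-policy.json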

I gave it the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::b-bucket/*",
                "arn:aws:s3:::b-bucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "firehose:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:*"
            ],
            "Resource": "*"
        }
    ]
}
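
These permissions can be attached to the role as an inline policy; a hedged sketch, where the policy name and file name are placeholders:

# Run with Account A credentials; the JSON file holds the permissions above.
aws iam put-role-policy \
    --role-name AccountAXAccountBRole \
    --policy-name firehose-delivery-permissions \
    --policy-document file://firehose-permissions.json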

On Account B I created a role with this trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "firehose.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "11111111111"
                }
            }
        }
    ]
}

I gave that role the following access:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::b-bucket",
                "arn:aws:s3:::b-bucket/*",
                "arn:aws:s3:::b-account-logs",
                "arn:aws:s3:::b-account-logs/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "firehose:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "redshift:*",
            "Resource": "arn:aws:redshift:us-east-1:cluster:account-b-cluster*"
        }
    ]
}
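
The same pattern applies on the Account B side; a minimal sketch, assuming the trust policy and permissions above are saved locally and the role is given the hypothetical name AccountBFirehoseRole:

# Run with Account B credentials; the role name and file names are placeholders.
aws iam create-role \
    --role-name AccountBFirehoseRole \
    --assume-role-policy-document file://account-b-trust-policy.json
aws iam put-role-policy \
    --role-name AccountBFirehoseRole \
    --policy-name firehose-redshift-permissions \
    --policy-document file://account-b-permissions.json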

I also edited the access policy on the S3 buckets to give access to my Account A role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::11111111111:role/AccountAXAccountBPolicy"
            },
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::b-bucket","arn:aws:s3:::b-bucket/*"]
        }
    ]
}
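
If you manage the bucket policy from the command line as well, a sketch of applying it could look like this (run with Account B credentials; the file name is an assumption):

# Apply the bucket policy above to the destination bucket in Account B.
aws s3api put-bucket-policy \
    --bucket b-bucket \
    --policy file://b-bucket-policy.json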

However, none of this works. When I try to create the stream in Account A it does not list the buckets in Account B nor the Redshift cluster. Is there any way to make this work?

John's answer is semi-correct. I would recommend that the account owner of the Redshift cluster create the Firehose stream. Creating through the CLI requires you to supply the user name and password; having the cluster owner create the stream and share IAM role permissions on the stream is safer, both for security and in case of credential changes. Additionally, you cannot create a stream that accesses a database outside of its region, so have the delivery application access the correct stream and region.

Read on below to see how to create the cross-account stream.

In my case both accounts are accessible to me, and to lower the amount of changes and make monitoring easier, I created the stream on the Account A side.

The above permissions are right; however, you cannot create a Firehose stream from Account A to Account B through the AWS Console. You need to do it through the AWS CLI:

aws firehose create-delivery-stream \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --redshift-destination-configuration 'RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",ClusterJDBCURL="jdbc:redshift://<cluster-url>:<cluster-port>/<>",CopyCommand={DataTableName="<schema_name>.x_test",DataTableColumns="ID1,STRING_DATA1",CopyOptions="csv"},Username="<Cluster_User_name>",Password="<Cluster_Password>",S3Configuration={RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",BucketARN="arn:aws:s3:::b-bucket",Prefix="test/",CompressionFormat="UNCOMPRESSED"}'
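
After the create call returns, you can check that the stream reaches the ACTIVE state before sending data; describe-delivery-stream is the standard way to verify this:

# The status should move from CREATING to ACTIVE once the stream is ready.
aws firehose describe-delivery-stream \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --query 'DeliveryStreamDescription.DeliveryStreamStatus'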

You can test this by creating a test table on the other AWS Account:

create table test_schema.x_test
(
    ID1 INT8 NOT NULL,
    STRING_DATA1 VARCHAR(10) NOT NULL
)
distkey(ID1)
sortkey(ID1,STRING_DATA1);

You can send test data like this:

aws firehose put-record --delivery-stream-name testFirehoseStreamToRedshift --record '{"Data":"1,\"ABCDEFGHIJ\""}'

This, together with the permissions configuration above, should create the cross-account access for you.

Documentation:
Create Stream - http://docs.aws.amazon.com/cli/latest/reference/firehose/create-delivery-stream.html

Put Record - http://docs.aws.amazon.com/cli/latest/reference/firehose/put-record.html

No.

Amazon Kinesis Firehose will only output to Amazon S3 buckets and Amazon Redshift clusters in the same region.

However, anything can send information to Kinesis Firehose by simply calling the appropriate endpoint. So, you could have applications in any AWS Account and in any Region (or anywhere on the Internet) send data to the Firehose and then have it stored in a bucket or cluster in the same region as the Firehose.
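
For example, a producer running anywhere with credentials that are authorized to call firehose:PutRecord on the stream (e.g. via an assumed role in the stream owner's account) can write to it simply by targeting the stream's home region; a minimal sketch, where the profile name and record contents are placeholders:

# Target the region the delivery stream lives in, regardless of where the producer runs.
# Note: with AWS CLI v2 the Data blob is expected to be base64-encoded unless
# --cli-binary-format raw-in-base64-out is set.
aws firehose put-record \
    --region us-east-1 \
    --profile producer-account \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --record '{"Data":"2,\"KLMNOPQRST\""}'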
