
S3 bucket policy restricting to IP CIDR range

I'm attempting to restrict S3 bucket access to EC2 instances that are within a few different subnets:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Principal": {"AWS": "*"},
            "Resource": [
                "arn:aws:s3:::test.bucket",
                "arn:aws:s3:::test.bucket/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "192.168.129.64/27",
                        "192.168.129.96/27",
                        "192.168.128.64/26"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Principal": {"AWS": "*"},
            "Resource": [
                "arn:aws:s3:::test.bucket",
                "arn:aws:s3:::test.bucket/*"
            ],
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "192.168.129.64/27",
                        "192.168.129.96/27",
                        "192.168.128.64/26"
                    ]
                }
            }
        }
    ]
}

I know there are other problems regarding the specificity of this policy, but I've tried to make it as bare-bones as possible apart from the conditions. Unfortunately, a simple aws s3 ls s3://test.bucket from an EC2 instance with the IP address 192.168.129.100 fails with an Access Denied error. This policy has effectively locked me out of the bucket.

I don't know what I'm missing. I've even tried prepending ForAnyValue: and ForAllValues: to the IpAddress and NotIpAddress condition operators.
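
Side note on the lockout itself: per the AWS documentation, the account root user can still remove a bucket policy even when that policy denies everyone, so a sketch of the recovery step (run with the account root user's credentials) would be:

# Removes the bucket policy entirely; AWS permits the root user to do this
# even when the policy's Deny statement matches all principals.
aws s3api delete-bucket-policy --bucket test.bucket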

The aws:SourceIp value in an S3 bucket policy is evaluated against the IP address from which the S3 endpoint receives the request. There are a few things that affect what that address is.

In your policy, you specified 192.168.*.* CIDR blocks, which are private (internal) IP address ranges.

  1. If you are executing the aws s3 ls command from outside of the AWS network (for example, on your local computer), then AWS will not see your local 192.168.129.100 IP address. Instead, it will see your publicly facing IP address. Check a tool such as What Is My IP Address? (or AWS's own https://checkip.amazonaws.com) to see what yours is.

If this is the case, you'll need to update your policy with your publicly facing IP address instead of your private one; the command below is a quick way to check it.
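
A minimal check, run from the machine that will be making the request (checkip.amazonaws.com is AWS's own public-IP echo service):

# Prints the publicly facing IP address of this machine, which is what
# the S3 endpoint (and therefore the bucket policy) evaluates against.
curl https://checkip.amazonaws.com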

  2. If you are executing the aws s3 ls command from inside the AWS network (for example, on an EC2 instance), and your VPC does not have an S3 VPC endpoint enabled, then (again) the S3 endpoint sees your public IP address, not your private one. The reason is that the request exits your VPC, becomes a public connection, and then reaches the S3 endpoint.

If this is the case, you have two possible resolutions:

Resolution 1:

Update your policy to include the public IP address instead of your private one. Most likely this will be the public IP address of your NAT gateway if your EC2 instance is in a private subnet, or the public IP address of the EC2 instance itself if it is in a public subnet; the commands below show one way to look these up.
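
A sketch of looking up those public addresses with the AWS CLI (the instance ID here is a placeholder for your own):

# Public IP of the EC2 instance itself (public subnet case).
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].PublicIpAddress' \
    --output text

# Public IPs of any NAT gateways in the account (private subnet case).
aws ec2 describe-nat-gateways \
    --query 'NatGateways[].NatGatewayAddresses[].PublicIp' \
    --output text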

Resolution 2:

Enable "S3 VPC Endpoints" on your VPC. This will create a direct connection between your internal VPC and S3 endpoints, thus bypassing the public internet.
