AccessDenied for ListObjects for S3 bucket when permissions are s3:*

I am getting:

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

when I try to get a folder from my S3 bucket.

Using this command:

aws s3 cp s3://bucket-name/data/all-data/ . --recursive

The IAM permissions for the bucket look like this:

{
  "Version": "version_id",
  "Statement": [
    {
        "Sid": "some_id",
        "Effect": "Allow",
        "Action": [
            "s3:*"
        ],
        "Resource": [
            "arn:aws:s3:::bucketname/*"
        ]
    }
  ]
}

What do I need to change to be able to copy and ls successfully?

You have given permission to perform commands on objects inside the S3 bucket, but you have not given permission to perform any actions on the bucket itself.

A slightly modified version of your policy would look like this:

{
  "Version": "version_id",
  "Statement": [
    {
        "Sid": "some_id",
        "Effect": "Allow",
        "Action": [
            "s3:*"
        ],
        "Resource": [
            "arn:aws:s3:::bucketname",
            "arn:aws:s3:::bucketname/*"
        ]
    }
  ] 
}

However, that probably gives more permission than is needed. Following the AWS IAM best practice of Granting Least Privilege, the policy would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::bucketname"
          ]
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::bucketname/*"
          ]
      }
  ]
}

If you wanted to copy all s3 bucket objects using the command "aws s3 cp s3://bucket-name/data/all-data/ . --recursive" as you mentioned, here is a safe and minimal policy to do that:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::bucket-name"
          ],
          "Condition": {
              "StringLike": {
                  "s3:prefix": "data/all-data/*"
              }
          }
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::bucket-name/data/all-data/*"
          ]
      }
  ]
}

The first statement in this policy allows listing objects inside a specific sub-directory of the bucket. The resource needs to be the ARN of the S3 bucket, and to limit listing to only a sub-directory in that bucket you can edit the "s3:prefix" value.

The second statement in this policy allows getting objects inside the bucket at a specific sub-directory. This means that you will be able to copy anything inside the "s3://bucket-name/data/all-data/" path. Be aware that this doesn't allow you to copy from parent paths such as "s3://bucket-name/data/".
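
As a rough sanity check, assuming the prefix-scoped policy above is attached to the identity running the CLI (bucket and prefix names are the same placeholders as above), you would expect behaviour roughly like this:

# expected to succeed: listing and copying within the allowed prefix
aws s3 ls s3://bucket-name/data/all-data/
aws s3 cp s3://bucket-name/data/all-data/ . --recursive

# expected to fail with AccessDenied: the s3:prefix condition does not match
aws s3 ls s3://bucket-name/data/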

This solution is specific to limiting use of AWS CLI commands; if you need to limit S3 access through the AWS console or API, then more policies will be needed. I suggest taking a look here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/

A similar issue can be found here, which led me to the solution I am giving: https://github.com/aws/aws-cli/issues/2408

Hope this helps!

I was unable to access S3 because:

  • first I configured key access on the instance (it was then impossible to attach a role after launch)
  • forgot about it for a few months
  • attached a role to the instance
  • tried to access. The configured key had higher priority than the role, and access was denied because the user wasn't granted the necessary S3 permissions.

Solution: rm -rf .aws/credentials, then aws uses the role.
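
If you are unsure which credentials the CLI is actually picking up (instance role, environment variables, or a stale .aws/credentials file), these two standard commands show the effective identity and where each credential value comes from:

# shows the account and ARN the CLI is currently authenticated as
aws sts get-caller-identity

# shows the access key, secret and region in effect and their source (env, shared-credentials-file, iam-role, ...)
aws configure list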

I got the same error when using the policy below, although I have "s3:ListBucket" for the s3:ListObjects operation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Action": [
            "s3:ListBucket",
            "s3:GetObject",
            "s3:GetObjectAcl"
        ],
        "Resource": [
            "arn:aws:s3:::<bucketname>/*",
            "arn:aws:s3:::*-bucket/*"
        ],
        "Effect": "Allow"
    }
  ]
}

Then I fixed it by adding one line: "arn:aws:s3:::<bucketname>"

{
  "Version": "2012-10-17",
  "Statement": [
    {
        "Action": [
            "s3:ListBucket",
            "s3:GetObject",
            "s3:GetObjectAcl"
        ],
        "Resource": [
            "arn:aws:s3:::<bucketname>",
            "arn:aws:s3:::<bucketname>/*",
            "arn:aws:s3:::*-bucket/*"
        ],
        "Effect": "Allow"
    }
  ]
}

I tried the following:

aws s3 ls s3.console.aws.amazon.com/s3/buckets/{bucket name}

This gave me the error:

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

Using this form worked:

aws s3 ls {bucket name}
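
Equivalently, the s3:// URI form also works (bucket-name is a placeholder):

aws s3 ls s3://bucket-name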

I faced the same issue. I just added the credentials config:

aws_access_key_id = your_aws_access_key_id
aws_secret_access_key = your_aws_secret_access_key

into "~/.aws/credentials" + restart terminal for default profile.进入“~/.aws/credentials” + 重启终端以获取默认配置文件。

In the case of multiple profiles, the --profile arg needs to be added:

aws s3 sync ./localDir s3://bucketName --profile=${PROFILE_NAME}

where PROFILE_NAME:

.bash_profile (or .bashrc) -> export PROFILE_NAME="yourProfileName"

More info about how to configure credentials and multiple profiles can be found here.
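
As a sketch of one way to set this up from the CLI (the profile name yourProfileName is just an example):

# create or update a named profile interactively
aws configure --profile yourProfileName

# use it per command ...
aws s3 sync ./localDir s3://bucketName --profile yourProfileName

# ... or export it once for the current shell session
export AWS_PROFILE=yourProfileName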

You have to specify the Resource for the bucket via "arn:aws:s3:::bucketname" or "arn:aws:s3:::bucketname*". The latter is preferred since it allows manipulations on the bucket's objects too. Notice there is no slash!

Listing objects is an operation on the Bucket. Therefore, the action "s3:ListBucket" is required. Adding an object to the Bucket is an operation on an Object. Therefore, the action "s3:PutObject" is needed. Certainly, you may want to add other actions as you require.

{
  "Version": "version_id",
  "Statement": [
    {
        "Sid": "some_id",
        "Effect": "Allow",
        "Action": [
            "s3:ListBucket",
            "s3:PutObject"
        ],
        "Resource": [
            "arn:aws:s3:::bucketname*"
        ]
    }
  ]
}

I think the error was due to the "s3:ListObjects" operation, but I had to add the "s3:ListBucket" action to solve the "AccessDenied for ListObjects for S3 bucket" issue.

I'm adding an answer in the same direction as the accepted answer, but with small (important) differences and more details.

Consider the configuration below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<Bucket-Name>"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<Bucket-Name>/*"]
    }
  ]
}

The policy grants programmatic write-delete access and is separated into two parts:
The ListBucket action provides permissions at the bucket level, while the PutObject/DeleteObject actions require permissions on the objects inside the bucket.

The first Resource element specifies arn:aws:s3:::<Bucket-Name> for the ListBucket action so that applications can list all objects in the bucket.

The second Resource element specifies arn:aws:s3:::<Bucket-Name>/* for the PutObject and DeleteObject actions so that applications can write or delete any objects in the bucket.

The separation into two different ARNs is important for security reasons, in order to specify bucket-level and object-level fine-grained permissions.

Notice that if I had specified just GetObject in the 2nd block, then in cases of programmatic access I would receive an error like:

Upload failed: <file-name> to <bucket-name>:<path-in-bucket> An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
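
For completeness, a rough sketch of attaching a document like the one above as an inline user policy from the CLI; the user name, policy name and policy.json file are hypothetical:

# attach the JSON document above as an inline policy on an IAM user
aws iam put-user-policy \
    --user-name my-app-user \
    --policy-name s3-write-delete \
    --policy-document file://policy.json

# confirm what was attached
aws iam get-user-policy --user-name my-app-user --policy-name s3-write-delete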

Here's the policy that worked for me.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ]
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}

Okay, for those who have done all the above and are still getting this issue, try this:

Bucket Policy should look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketSync",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME",
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}

Then save and ensure your Instance or Lightsail is connected to the right profile on AWS Configure.

First: try adding --recursive at the end. Any luck? No? Okay, try the one below.

Second: Okay now try this instead: --no-sign-request

so it should look like this:

sudo aws s3 sync s3://BUCKET_NAME /yourpath/path/folder --no-sign-request

You're welcome 😂

For Amazon users who have enabled MFA, please use this: aws s3 ls s3://bucket-name --profile mfa

And prepare the profile mfa first by running aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600 (replace 123456789012, user-name and 928371).
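
The get-session-token call returns temporary credentials (AccessKeyId, SecretAccessKey and SessionToken). One way to drop them into the mfa profile is with aws configure set; the values below are placeholders:

aws configure set aws_access_key_id ASIAEXAMPLEKEYID --profile mfa
aws configure set aws_secret_access_key EXAMPLESECRETACCESSKEY --profile mfa
aws configure set aws_session_token EXAMPLESESSIONTOKEN --profile mfa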

I ran into a similar issue; for me the problem was that I had different AWS keys set in my bash_profile.

I answered a similar question here: https://stackoverflow.com/a/57317494/11871462

If you have conflicting AWS keys in your bash_profile, AWS CLI defaults to these instead.

I had this issue; my requirement was to allow a user to write to a specific path:

{
            "Sid": "raspiiotallowspecificBucket",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>/scripts",
                "arn:aws:s3:::<bucketname>/scripts/*"
            ]
        },

and the problem was solved with this change:

{
            "Sid": "raspiiotallowspecificBucket",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>",
                "arn:aws:s3:::<bucketname>/*"
            ]
        },

I like this better than any of the previous answers. It shows how to use the YAML format and lets you use a variable to specify the bucket.

    - PolicyName: "AllowIncomingBucket"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action: "s3:*"
            Resource:
              - !Ref S3BucketArn
              - !Join ["/", [!Ref S3BucketArn, '*']]
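
If S3BucketArn is a template parameter (an assumption here), passing the bucket ARN in at deploy time might look roughly like this; the template file name and stack name are placeholders:

aws cloudformation deploy \
    --template-file template.yaml \
    --stack-name my-stack \
    --capabilities CAPABILITY_IAM \
    --parameter-overrides S3BucketArn=arn:aws:s3:::bucketname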

To allow permissions on the s3 bucket, go to the Permissions tab of the s3 bucket and, in the bucket policy, change the action to this, which will allow all actions to be performed:

"Action":"*"

My issue was having set

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

again, under the aws-sync GitHub Action, as environment variables. They were coming from my GitHub settings. Though in my case I had assumed a role in the previous step, which set some new keys into those same environment variables. So I was overwriting the good assumed-role keys with the bad GitHub basic keys.

Please take care of this if you're assuming roles.

I had the same issue. I had to provide the right resource and action; the resource is your bucket's ARN and the action is your desired permission. Also please ensure you have the right user ARN. Below is my solution.

{
    "Version": "2012-10-17",
    "Id": "Policy1546414123454",
    "Statement": [
        {
            "Sid": "Stmt1546414471931",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789101:root"
            },
            "Action": ["s3:ListBucket", "s3:ListBucketVersions"],
            "Resource": "arn:aws:s3:::bucket-name"
        }
    ]
}

I had a similar problem while trying to sync an entire S3 bucket locally. For me, MFA (multi-factor authentication) was enforced on my account, which is required when making calls via the AWS CLI.

So the solution for me was to provide MFA credentials using a profile (mfa documentation) while using any AWS CLI commands.

If you are suddenly getting this error on a new version of minio on buckets that used to work, the reason is that the bucket access policy defaults were changed from version 2021 to 2022. In version 2022, by default, all buckets (both newly created and existing ones) have Access Policy set to Private; it is not sufficient to provide server credentials to access them, and you will still get errors such as these (here: returned to the python minio client):

S3Error: S3 operation failed; code: AccessDenied, message: Access Denied., resource: /dicts, request_id: 16FCBE6EC0E70439, host_id: 61486e5a-20be-42fc-bd5b-7f2093494367, bucket_name: dicts

To roll back to the previous security settings in version 2022, the quickest method is to change the bucket's Access Policy back to Public in the MinIO console (or via the mc client).
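
With the mc client, a rough sketch would be the following; myminio is a placeholder alias, and depending on the mc release the subcommand is either the older mc policy set or the newer mc anonymous set:

# older mc releases
mc policy set public myminio/dicts

# newer mc releases
mc anonymous set public myminio/dicts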

This is not the best practice, but it will unblock you. Make sure the user executing the command has the following policies attached under its permissions: A. PowerUserAccess B. AmazonS3FullAccess

I had faced the same error: "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied"

Note: a bucket policy is not a good solution here. It is safer to create a new custom policy in the IAM service and attach it to the respective user.

Solved by the procedure below:

IAM Service > Policies > Create Policy > select JSON >

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:ListBucketVersions"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:DeleteObjectVersion",
                "s3:GetObjectVersion",
                "s3:PutObjectACL",
                "s3:ListBucketVersions"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>/*"
            ]
        }
    ]
}

Select Next: Tags > Review Policy, enter a name, and create the policy.

Select the newly created policy. Select the 'Policy usage' tab in the edit window of the newly created policy. Select "Attach", select the user from the list, and Save.

Now try it in the console with the bucket name to list the objects; without the bucket name it throws the same error.

$ aws s3 ls <bucketname>
