
Can't access S3 bucket from within Fargate container (Bad Request and unable to locate credentials)

I created a private S3 bucket and a Fargate cluster with a simple task that attempts to read from that bucket using Python 3 and boto3. I've tried this on 2 different Docker images: on one I get a ClientError from boto3 saying HeadObject Bad Request (400), and on the other I get NoCredentialsError: Unable to locate credentials.

The only real difference between the images is that the one saying Bad Request is being run normally, while the other is being run manually by me via ssh into the task container. So I'm not sure why one image says "bad request" and the other "unable to locate credentials".

I have tried a couple of different IAM policies (Terraform), including the following:

data "aws_iam_policy_document" "access_s3" {
  statement {
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::bucket_name"]
  }

  statement {
    effect = "Allow"

    actions = [
      "s3:GetObject",
      "s3:GetObjectVersion",
      "s3:GetObjectTagging",
      "s3:GetObjectVersionTagging",
    ]

    resources = ["arn:aws:s3:::bucket_name/*"]
  }
}

Second try:

data "aws_iam_policy_document" "access_s3" {
  statement {
    effect    = "Allow"
    actions   = ["s3:*"]
    resources = ["arn:aws:s3:::*"]
  }
}

And the final one I tried was a built-in policy:

resource "aws_iam_role_policy_attachment" "access_s3" {
  role       = "${aws_iam_role.ecstasks.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

The bucket definition is very simple:

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket_name}"
  acl    = "private"
  region = "${var.region}"
}

Code used to access the S3 bucket:

import traceback

import boto3
from botocore.exceptions import ClientError

try:
    s3 = boto3.client('s3')
    # head_object returns the object's metadata (including the ETag)
    # without downloading the body
    tags = s3.head_object(Bucket='bucket_name', Key='filename')
    print(tags['ResponseMetadata']['HTTPHeaders']['etag'])
except ClientError:
    traceback.print_exc()

No matter what I do, I'm unable to use boto3 to access AWS resources from within a Fargate container task. I'm able to access the same S3 bucket with boto3 on an EC2 instance without providing any kind of credentials, using only the IAM roles/policies. What am I doing wrong? Is it not possible to access AWS resources in the same way from a Fargate container?

Forgot to mention that I am assigning the IAM roles to the task definition execution policy and task policy.

UPDATE: It turns out that the unable-to-locate-credentials error I was having is a red herring. The reason I could not get the credentials was that my direct ssh session did not have the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable set.

AWS Fargate injects an environment variable named AWS_CONTAINER_CREDENTIALS_RELATIVE_URI on your behalf, which contains a URL that boto3 should use for fetching API access credentials. So the Bad Request error is the one I'm actually getting and need help resolving. I checked the environment variables inside the container, and the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI value is being set by Fargate.
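For anyone debugging this, a quick sanity check is to build the full credential URL from that environment variable. This is a hedged sketch: the host 169.254.170.2 is the documented ECS credential-endpoint address, and the helper name is my own.

```python
import os

# ECS/Fargate injects a path such as "/v2/credentials/<uuid>"; the
# credential server itself listens on the fixed address 169.254.170.2.
ECS_CREDENTIALS_HOST = "http://169.254.170.2"

def credential_url(relative_uri):
    """Build the full URL boto3 queries for task-role credentials."""
    if not relative_uri:
        raise RuntimeError(
            "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is not set - "
            "this process was probably not launched by Fargate "
            "(e.g. a manual ssh session)"
        )
    return ECS_CREDENTIALS_HOST + relative_uri

# Inside the running task (not an ssh side-session) this should print a URL:
uri = os.environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
if uri:
    print(credential_url(uri))
```

Requesting that URL from inside the task (e.g. with curl) should return temporary credentials for the task role; if it errors, the task role itself is the problem.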

I struggled quite a bit with this issue, with AWS_CONTAINER_CREDENTIALS_RELATIVE_URI constantly and wrongly set to None, until I added a custom task role in addition to my existing task execution role.

1) The task execution role is responsible for pulling the container image from ECR and for running the task itself, while 2) the task role is what your Docker container uses to make API requests to other authorized AWS services.

1) For my task execution role I'm using AmazonECSTaskExecutionRolePolicy, which has the following JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}

2) I finally got rid of NoCredentialsError: Unable to locate credentials when I added a task role in addition to the task execution role, in this case one responsible for reading from a certain bucket:

{
    "Version": "2012-10-17",
    "Statement": [
           {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket_name/*"
        }
    ]
}

In summary: make sure your task definition sets both 1) executionRoleArn, which grants permission to run the task, and 2) taskRoleArn, which grants permission to make API requests to authorized AWS services.
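That summary can be turned into a small sanity check. This is a hypothetical helper of my own; it assumes you have already fetched the task definition as a dict, for instance from boto3's ecs describe_task_definition response.

```python
def missing_roles(task_definition):
    """Return the names of any role ARNs missing from an ECS task definition dict."""
    missing = []
    # executionRoleArn: lets ECS itself pull the image from ECR and write logs.
    if not task_definition.get("executionRoleArn"):
        missing.append("executionRoleArn")
    # taskRoleArn: the identity YOUR code (boto3 inside the container) runs as.
    if not task_definition.get("taskRoleArn"):
        missing.append("taskRoleArn")
    return missing

td = {"executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecution"}
print(missing_roles(td))  # ['taskRoleArn']
```

A task definition missing taskRoleArn is exactly the situation where the container starts fine (the execution role covers that) but boto3 inside it cannot find credentials.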

To allow Amazon S3 read-only access for your container instance role

Open the IAM console at https://console.aws.amazon.com/iam/.

In the navigation pane, choose Roles.

Choose the IAM role to use for your container instances (this role is likely titled ecsInstanceRole). For more information, see Amazon ECS Container Instance IAM Role.

Under Managed Policies, choose Attach Policy.

On the Attach Policy page, for Filter, type S3 to narrow the policy results.

Select the box to the left of the AmazonS3ReadOnlyAccess policy and choose Attach Policy.
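The same attachment can be scripted with boto3. A sketch follows; the role name ecsInstanceRole is an assumption carried over from the steps above, and note that the container instance role only applies to the EC2 launch type, not to Fargate tasks.

```python
def attach_s3_readonly_params(role_name):
    """Build the parameters for iam.attach_role_policy to grant read-only S3 access."""
    return {
        "RoleName": role_name,  # assumed name from the console steps above
        "PolicyArn": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    }

# With real AWS credentials configured, you would then run:
#   import boto3
#   boto3.client("iam").attach_role_policy(**attach_s3_readonly_params("ecsInstanceRole"))
print(attach_s3_readonly_params("ecsInstanceRole"))
```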

You need an IAM role to give your ECS task access to your S3 bucket.

resource "aws_iam_role" "AmazonS3ServiceForECSTask" {
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

data "aws_iam_policy_document" "bucket_policy" {
  statement {
    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.AmazonS3ServiceForECSTask.arn]
    }

    actions = [
      "s3:ListBucket",
    ]

    resources = [
      "arn:aws:s3:::${var.your_bucket_name}",
    ]
  }
  statement {
    principals {
      type        = "AWS"
      identifiers = [aws_iam_role.AmazonS3ServiceForECSTask.arn]
    }

    actions = [
      "s3:GetObject",
    ]

    resources = [
      "arn:aws:s3:::${var.your_bucket_name}/*",
    ]
  }
}

You then need to set your IAM role as the task_role_arn of your task definition.

resource "aws_ecs_task_definition" "_ecs_task_definition" {
  task_role_arn               = aws_iam_role.AmazonS3ServiceForECSTask.arn
  execution_role_arn          = aws_iam_role.ECS-TaskExecution.arn
  family                      = "${var.family}"
  network_mode                = var.network_mode[var.launch_type]
  requires_compatibilities    = var.requires_compatibilities
  cpu                         = var.task_cpu[terraform.workspace]
  memory                      = var.task_memory[terraform.workspace]
  container_definitions       = module.ecs-container-definition.json
}

ECS Fargate task not applying role

After countless hours of digging this parameter finally solved the issue for me:

auto_assign_public_ip = true inside a network_configuration block on the ECS service.

Turns out the tasks run by this service didn't have an IP assigned, and thus no connection to the outside world.

Boto3 has a credential lookup chain: https://boto3.readthedocs.io/en/latest/guide/configuration.html. When you use an AWS-provided image to create your EC2 instance, the instance comes with the aws command pre-installed and can pick up credentials from the instance metadata. Fargate, however, is only a container, so credentials have to reach it some other way. One quick solution is to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to the Fargate container.
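A minimal sketch of that lookup order, restricted to the sources mentioned in this thread. This is a simplified imitation of my own, not boto3's actual code: the real chain also consults shared credential files, config profiles, assume-role settings, and more.

```python
def locate_credentials(env):
    """Simplified imitation of boto3's credential lookup order."""
    # 1. Explicit environment variables win first.
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "environment"
    # 2. ECS/Fargate container credential endpoint (the task role).
    if env.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"):
        return "ecs-container-metadata"
    # 3. EC2 instance-profile metadata (what makes plain EC2 "just work").
    #    _SIMULATED_EC2_METADATA is a stand-in for the real 169.254.169.254 probe.
    if env.get("_SIMULATED_EC2_METADATA"):
        return "ec2-instance-profile"
    return None  # -> NoCredentialsError in real boto3

print(locate_credentials({"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI": "/v2/credentials/x"}))
# ecs-container-metadata
```

This also shows why a correctly configured task role makes static keys unnecessary: step 2 fires before boto3 ever gives up.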
