
Bash with AWS CLI - unable to locate credentials

I have a shell script which is supposed to download some files from S3 and mount an EBS drive. However, I always end up with "Unable to locate credentials".

I have specified my credentials with the aws configure command, and the commands work outside the shell script. Could somebody please tell me (preferably in detail) how to make it work?

This is my script (short version):

#!/bin/bash

AWS_CONFIG_FILE="~/.aws/config"

echo $1

sudo mkfs -t ext4 $1
sudo mkdir /s3-backup-test
sudo chmod -R ugo+rw /s3-backup-test
sudo mount $1 /s3-backup-test

sudo aws s3 sync s3://backup-test-s3 /s3-backup/test

du -h /s3-backup-test

Thanks for any help!

sudo will change the $HOME directory (and therefore ~) to /root, and remove most bash variables like AWS_CONFIG_FILE from the environment. Make sure you do everything with aws as root or as your user; don't mix the two.
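A quick way to see this for yourself (a minimal sketch; the exact behaviour depends on your sudoers settings, e.g. env_reset and always_set_home):

echo "$HOME"                                        # e.g. /home/user
sudo sh -c 'echo "$HOME"'                           # typically /root

export AWS_CONFIG_FILE=~/.aws/config
sudo sh -c 'echo "${AWS_CONFIG_FILE:-<stripped>}"'  # usually prints <stripped>, because sudo resets the environment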

Make sure you ran sudo aws configure, for example. And try:

sudo bash -c 'AWS_CONFIG_FILE=/root/.aws/config aws s3 sync s3://backup-test-s3 /s3-backup/test'

You might prefer to remove all the sudo calls from inside the script and just sudo the script itself.
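For example, a minimal sketch of the question's script with the inner sudo calls removed (run the whole thing as root, e.g. sudo ./backup.sh /dev/xvdf; the script name and device are placeholders, and this assumes you ran sudo aws configure beforehand so root has credentials):

#!/bin/bash
set -e

DEVICE="$1"
MOUNT_POINT=/s3-backup-test

mkfs -t ext4 "$DEVICE"
mkdir -p "$MOUNT_POINT"
chmod -R ugo+rw "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"

# Running as root, so aws reads /root/.aws/config and /root/.aws/credentials
aws s3 sync s3://backup-test-s3 "$MOUNT_POINT"

du -h "$MOUNT_POINT"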

While you might have your credentials and config file properly located in ~/.aws, they might not be getting picked up by your user account.

Run this command to see if your credentials have been set: aws configure list

To set the credentials, run aws configure and enter the credentials that should be written to your ~/.aws/credentials file.
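For reference, an interactive aws configure session looks roughly like this (the values are the placeholder example keys from the AWS docs):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json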

Answering in case someone stumbles across this based on the question's title.

I had the same problem, whereby the AWS CLI was reporting "unable to locate credentials".

I had removed the [default] set of credentials from my credentials file as I wasn't using them and didn't think they were needed. It seems that they are.

I then rebuilt my file as follows and it worked:

[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2

[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
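If you want to double-check that the restored [default] profile is actually being picked up, a quick hedged test is:

aws sts get-caller-identity   # should return your account ID and ARN instead of a credentials error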

This isn't necessarily related to the original question, but I came across it when googling a related issue, so I'm writing it up in case it helps anyone else. I set up aws for a specific user and tested using sudo -H -u thatuser aws ..., but it didn't work with awscli 1.2.9 installed on Ubuntu 14.04:

  % sudo -H -u thatuser aws configure list
        Name                    Value             Type    Location
        ----                    -----             ----    --------
     profile                <not set>             None    None
  access_key                <not set>             None    None
  secret_key                <not set>             None    None
      region                us-east-1      config_file    ~/.aws/config

I had to upgrade it using pip install awscli, which brought in newer versions of awscli (1.11.93), boto, and a myriad of other packages (awscli, docutils, botocore, rsa, s3transfer, jmespath, python-dateutil, pyasn1, futures), and after that things started working properly:

  % sudo -H -u thatuser aws configure list
        Name                    Value             Type    Location
        ----                    -----             ----    --------
     profile                <not set>             None    None
  access_key     ****************WXYZ shared-credentials-file
  secret_key     ****************wxyz shared-credentials-file
      region                us-east-1      config-file    ~/.aws/config

The "unable to locate credentials" error usually occurs when working with multiple AWS profiles and the current terminal can't identify the credentials for the current profile.

Notice that you don't need to fill in all the credentials via aws configure each time - you just need to reference the relevant profile that was configured once.

From the Named profiles section in the AWS docs:

The AWS CLI supports using any of multiple named profiles that are stored in the config and credentials files. You can configure additional profiles by using aws configure with the --profile option, or by adding entries to the config and credentials files.

The following example shows a credentials file with two profiles. The first, [default], is used when you run a CLI command with no profile. The second is used when you run a CLI command with the --profile user1 parameter.

~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows):

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[user1]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

So, after setting up the specific named profile (user1 in the example above) via aws configure or directly in the ~/.aws/credentials file, you can select that profile:

aws ec2 describe-instances --profile user1

Or export it in the terminal:

$ export AWS_PROFILE=user1
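To confirm which profile the terminal is actually using, something like this should do (output abbreviated):

export AWS_PROFILE=user1
aws configure list            # the "profile" row should now show user1 instead of <not set>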

I was hitting this error today when running the AWS CLI on EC2. My situation was that I could see credentials info when running aws configure list. However, I am in a corporate environment where doing things like aws kms decrypt requires a proxy. As soon as I set the proxy, the AWS credentials info was gone.

export HTTP_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export HTTPS_PROXY=aws-proxy-qa.cloud.myCompany.com:8099

It turns out I also had to set NO_PROXY and include the EC2 metadata address, 169.254.169.254, in the list. Also, since you should be going via an S3 endpoint, you should normally have .amazonaws.com in NO_PROXY too.

export NO_PROXY=169.254.169.254,.amazonaws.com
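All three variables need to be in effect in the same shell before running the CLI; putting them together with a quick check (the proxy host is of course site-specific):

export HTTP_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export HTTPS_PROXY=aws-proxy-qa.cloud.myCompany.com:8099
export NO_PROXY=169.254.169.254,.amazonaws.com

aws configure list            # the credentials should show up again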

A foolish and cautionary tale of a rusty script slinger:

I had defined the variable HOME in my script as the place where the script should go to build the platform.

This variable overwrote the env var that defines the shell user's $HOME. So the AWS command could not find ~/.aws/credentials, because ~ was referencing the wrong place.

I hate to admit it, but I hope it helps save someone some time.
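A minimal sketch of what that bug looked like (the path and variable names here are hypothetical):

#!/bin/bash
HOME=/opt/build-platform   # hypothetical build location; this shadows the login user's $HOME
aws s3 ls                  # now looks for /opt/build-platform/.aws/credentials -> "Unable to locate credentials"

# Safer: give the build location its own variable name
BUILD_HOME=/opt/build-platform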

If you are using a .aws/config file with roles, make sure your config file is correctly formatted. In my case I had forgotten to put role_arn = in front of the ARN. The default profile sits in the .aws/credentials file and contains the access key ID and secret access key of the IAM identity.

The config file contains the role details:

[profile myrole]
role_arn = arn:aws:iam::123456789012:role/My-Role
source_profile = default
mfa_serial = arn:aws:iam::987654321098:mfa/my-iam-identity
region=ap-southeast-2

You can quickly test access by calling

aws sts get-caller-identity --profile myrole

If you have MFA enabled, as I have, you will need to enter the code when prompted:

Enter MFA code for arn:aws:iam::987654321098:mfa/my-iam-identity:
{
    "UserId": "ARABCDEFGHIJKLMNOPQRST:botocore-session-15441234567",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/My-Role/botocore-session-15441234567"
}

I ran into this trying to run an aws-cli command from root's crontab.

Since credentials are stored in $HOME/.aws/credentials and I had initialized aws-cli through sudo, $HOME was still /home/user/. When running from cron, $HOME is /root/, and thus cron cannot find the file.

The fix was to change $HOME for the specific cron job. Example:

00 12 * * * HOME=/home/user aws s3 sync s3://...

(Alternatives include moving, copying, or symlinking the .aws dir from /home/user/ to /root/.)
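For example, a hedged sketch of the symlink alternative (only sensible if you are happy for root to reuse that user's keys):

# Make root's HOME resolve to the same AWS credentials as the user's
sudo ln -s /home/user/.aws /root/.aws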

Try adding sudo to the aws command, like sudo aws ec2 command; and yes, as meuh mentioned, awscli needs to be configured using sudo.

pip install --upgrade awscli

or

pip3 install --upgrade awscli
