
"kubectl" not connecting to aws EKS cluster from my local windows workstation

I am trying to set up an AWS EKS cluster and connect to it from my local Windows workstation, but I am not able to connect. Here are the steps I followed:

  1. Created an AWS service role (AWS console -> IAM -> Roles -> click "Create role" -> select the AWS service "EKS" -> name the role "eks-role-1").
  2. Created another IAM user named "eks" for programmatic access; this is the user I use to connect to the EKS cluster from my local Windows workstation. The policies I attached to it are "AmazonEKSClusterPolicy", "AmazonEKSWorkerNodePolicy", "AmazonEKSServicePolicy" and "AmazonEKS_CNI_Policy".
  3. Next, the EKS cluster was created in the AWS console with the role ARN from step 1.
  4. On my local Windows workstation I downloaded "kubectl.exe" and "aws-iam-authenticator.exe", then ran 'aws configure' with the access key and secret from step 2 for the user "eks" (roughly as sketched below). After configuring "~/.kube/config", I ran the command below and got this error:
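
For reference, the 'aws configure' step looked roughly like this (the key values and the region are placeholders, not my real values):

$ aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxx
AWS Secret Access Key [None]: ssssssssss
Default region name [None]: us-east-1
Default output format [None]: json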

Command: kubectl.exe get svc

output:
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Unable to connect to the server: getting credentials: exec: exit status 1

Not sure what is wrong with the setup here. Can someone please help? I know some places say you have to use the same AWS user to connect to the cluster (EKS). But how can I get an access key and token for the AWS service role (step #1: eks-role-1)?

For people who ran into this: you may have provisioned EKS using a named AWS profile.

EKS does not add the profile to the kubeconfig.

Solution:

  1. Export your AWS credentials:
$ export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
$ export AWS_SECRET_ACCESS_KEY=ssssssssss
  2. If you've already configured AWS credentials, try exporting AWS_PROFILE:
$ export AWS_PROFILE=ppppp
  3. Similar to option 2, but you only need to do it once: edit your kubeconfig.
users:
- name: eks # This depends on your config.
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "general"
      env:
        - name: AWS_PROFILE
          value: "<YOUR_PROFILE_HERE>" #

Adding another option.

Instead of working with aws-iam-authenticator, you can change the command to aws and replace the args as below:

- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args: #<--- Change the args
      - --region
      - <YOUR_REGION>
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws #<--- Change to command to aws
      env:
      - name: AWS_PROFILE
        value: <YOUR_PROFILE_HERE>
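
Note that you don't have to write this stanza by hand: a recent AWS CLI can generate it for you (same cluster name, region and profile placeholders as above):

$ aws eks update-kubeconfig --name my-cluster --region <YOUR_REGION> --profile <YOUR_PROFILE_HERE>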

I think I got the answer to this issue; I want to write it down here so other people can benefit from it.

When you create an EKS cluster for the first time, check which user (or role) you are creating it as (check your AWS web console user settings). Even if you create it from a CFN script, a different role may be assigned to create the cluster. You have to get CLI access for that same identity to start accessing your cluster from the kubectl tool.

Once you have first-time access (that identity has cluster admin access by default), you may need to add other IAM users as cluster admins (or to another role) using the aws-auth ConfigMap; only then can you switch to an alternative IAM user to access the cluster from the kubectl command line.
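
As a rough sketch of that ConfigMap change (the account ID 111122223333 and the user name "eks" are placeholders): the cluster creator runs "kubectl edit -n kube-system configmap/aws-auth" and adds a mapUsers entry, e.g.

data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/eks
      username: eks
      groups:
        - system:masters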

Make sure the file ~/.aws/credentials has an AWS access key and secret key for an IAM account that can manage the cluster.
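
For example, a minimal ~/.aws/credentials (using the same placeholder values as below):

[default]
aws_access_key_id = xxxxxxxxxxxxx
aws_secret_access_key = ssssssssss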

Alternatively you can set the AWS env parameters:

export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=ssssssssss
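
Either way, you can verify which identity kubectl will authenticate as:

$ aws sts get-caller-identity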
