

Access remote EKS cluster from an EKS pod, using an assumed IAM role

I've gone over this guide to allow one of the pods running on my EKS cluster to access a remote EKS cluster using kubectl.

I'm currently running a pod using amazon/aws-cli inside my cluster, mounting a service account token which allows me to assume an IAM role configured with Kubernetes RBAC according to the guide above. I've verified that the role is correctly assumed by running aws sts get-caller-identity, and this is indeed the case.
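For reference, this is the kind of check involved; the ARN shown in the comment is the standard STS assumed-role format (account ID, role name, and session name are placeholders), and the command obviously needs live AWS credentials in the pod:

```shell
# Run inside the pod to confirm which identity the AWS CLI is using.
# When the IAM role is correctly assumed via the service account token,
# the "Arn" field is an assumed-role ARN rather than the node's role, e.g.:
#   arn:aws:sts::<account-id>:assumed-role/<role-name>/<session-name>
aws sts get-caller-identity
```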

I've now installed kubectl and configured my kubeconfig like so -

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: <redacted>
  name: <cluster-arn>
contexts:
- context:
    cluster: <cluster-arn>
    user: <cluster-arn>
  name: <cluster-arn>
current-context: <cluster-arn>
kind: Config
preferences: {}
users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - <role-arn>
      command: aws
      env: null

However, every operation I try to carry out using kubectl results in this error -
error: You must be logged in to the server (Unauthorized)

I've no idea what I misconfigured, and would appreciate any ideas on how to get a more verbose error message.
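For more verbose output, kubectl has a global -v flag that prints the underlying HTTP requests, and the token produced by the exec plugin can be inspected directly. A debugging sketch (these assume the kubeconfig above and a reachable cluster, so they are not runnable offline; the cluster name is a placeholder from the question):

```shell
# Print the API requests and responses kubectl makes; verbosity 9
# includes headers and curl-style request dumps, which usually shows
# why the server returned 401 Unauthorized.
kubectl get pods -v=9

# Generate a token the same way the kubeconfig exec plugin does,
# independently of kubectl, to confirm the plugin itself succeeds.
aws eks get-token --region us-east-2 --cluster-name <cluster-name>
```

If get-token fails outright, the problem is on the AWS side; if it succeeds but kubectl still gets Unauthorized, the cluster's aws-auth mapping for the role is the likelier suspect.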

If the AWS CLI is already using the identity of the role you want, then there's no need to specify --role and <role-arn> in the kubeconfig args.

By leaving them in, your role from aws sts get-caller-identity will need sts:AssumeRole permission for the role <role-arn>. If they are the same, then the role needs to be able to assume itself - which is redundant.

So I'd try removing those args from the kubeconfig.yml and see if it helps.
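Concretely, that would mean keeping the same exec plugin but dropping the last two args - a sketch of the suggested users entry, with everything else copied unchanged from the question:

```yaml
users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
```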
