User cannot log into EKS Cluster using kubectl
I am trying to host an application in AWS Elastic Kubernetes Service (EKS). I configured the EKS cluster through the AWS Console as an IAM user (user1), configured the Node Group, added a node to the EKS cluster, and everything is working fine.
In order to connect to the cluster, I spun up an EC2 instance (CentOS 7) and configured the following:
1. Installed docker, kubeadm, kubelet and kubectl.
2. Installed and configured AWS CLI v2.
I used the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of user1 to configure the AWS CLI from within the EC2 instance in order to connect to the cluster using kubectl. I ran the commands below to connect to the cluster as user1:
1. aws sts get-caller-identity
2. aws eks update-kubeconfig --name trojanwall --region ap-south-1
I am able to perform every operation in the EKS cluster as user1.
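This works because `aws eks update-kubeconfig` writes a kubeconfig entry that tells kubectl to fetch a token via the AWS CLI, so kubectl authenticates as whichever IAM identity the CLI is currently configured with. A typical generated user entry looks roughly like this (the account ID is a placeholder; the exact `args` may differ slightly between CLI versions):

```yaml
# Sketch of the kubeconfig user entry produced by
# `aws eks update-kubeconfig --name trojanwall --region ap-south-1`.
# XXXXXXXXXXXX is a placeholder account ID.
users:
- name: arn:aws:eks:ap-south-1:XXXXXXXXXXXX:cluster/trojanwall
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - ap-south-1
      - eks
      - get-token
      - --cluster-name
      - trojanwall
```

Swapping the configured access keys therefore changes which IAM user kubectl presents to the cluster, even though the kubeconfig itself is unchanged.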
However, I have now created a new user named 'user2', and I replaced the current AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with those of user2. I followed the same steps, but when I try to run 'kubectl get pods', I get the following error:
error: You must be logged in to the server (Unauthorized)
Result after running kubectl describe configmap -n kube-system aws-auth as user1:
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXXXX:role/AWS-EC2-Role
  username: system:node:{{EC2PrivateDNSName}}

BinaryData
====

Events:  <none>
Does anyone know how to resolve this?
When you create an EKS cluster, only the IAM identity that created the cluster has access to it. In order to allow someone else to access the cluster, you need to add that user to the aws-auth ConfigMap. To do this, add the following to its data section:
mapUsers: |
  - userarn: arn:aws:iam::<your-account-id>:user/<your-username>
    username: <your-username>
    groups:
    - system:masters
You can use different groups, depending on the rights you want to grant that user.
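Put together with the mapRoles entry already present in the cluster above, the edited ConfigMap might look like this sketch (the account ID and the user name user2 are placeholders; substitute your own):

```yaml
# Hypothetical aws-auth ConfigMap after granting user2 access.
# XXXXXXXXXXXX and user2 are placeholders for your account ID and IAM user.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::XXXXXXXXXXXX:role/AWS-EC2-Role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::XXXXXXXXXXXX:user/user2
      username: user2
      groups:
      - system:masters
```

Note that you must make this change while authenticated as user1 (for example with kubectl edit -n kube-system configmap/aws-auth), since user2 is not yet authorized to modify anything in the cluster.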
If you don't already have the ConfigMap on your machine:
curl -o aws-auth-cm.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-10-29/aws-auth-cm.yaml
kubectl apply -f aws-auth-cm.yaml
You can also follow the steps in the documentation, which are more detailed.