Unable to access EKS cluster using the role that created it

I created an EKS cluster from an EC2 instance, with my-cluster-role added to the instance profile, using the aws cli:

aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role --resources-vpc-config subnetIds=subnet-abcd123,subnet-wxyz345,securityGroupIds=sg-123456,endpointPublicAccess=false,endpointPrivateAccess=true

Kubeconfig file:

aws eks --region us-east-1 update-kubeconfig --name my-cluster

But while trying to access Kubernetes resources, I get the below error:

[root@k8s-mgr ~]# kubectl get deployments --all-namespaces
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:i-xxxxxxxx" cannot list resource "deployments" in API group "apps" at the cluster scope

Except for pods and services, no other resource is accessible.

Note that the cluster was created using the role my-cluster-role; per the documentation, this role should have permission to access the cluster's resources.

[root@k8s-mgr ~]# aws sts get-caller-identity
{
    "Account": "012345678910", 
    "UserId": "ABCDEFGHIJKKLMNO12PQR:i-xxxxxxxx", 
    "Arn": "arn:aws:sts::012345678910:assumed-role/my-cluster-role/i-xxxxxxxx"
}
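
The username system:node:i-xxxxxxxx in the error suggests that the instance's role is being matched by a node-style entry in the aws-auth ConfigMap, so the API server treats the caller as a worker node (which is allowed to read pods and services but not much else) rather than as the cluster creator. As a rough illustration only (the ARN and template values below are hypothetical, not read from this cluster), such a mapping typically looks like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Hypothetical node mapping: an identity assuming this role is given
    # a system:node:<session name> username and the node groups below.
    - rolearn: arn:aws:iam::012345678910:role/my-cluster-role
      username: system:node:{{SessionName}}
      groups:
        - system:bootstrappers
        - system:nodes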

Edit: Tried creating a ClusterRole and ClusterRoleBinding as suggested here: https://stackoverflow.com/a/70125670/7654693

Error:

[root@k8s-mgr]# kubectl apply -f access.yaml 
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "eks-console-dashboard-full-access-clusterrole", Namespace: ""
from server for: "access.yaml": clusterroles.rbac.authorization.k8s.io "eks-console-dashboard-full-access-clusterrole" is forbidden: User "system:node:i-xxxxxxxx" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "eks-console-dashboard-full-access-binding", Namespace: ""

Below is my Kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws

Create a cluster role and cluster role binding, or a role and role binding

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console-dashboard-full-access-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
  name: eks-console-dashboard-full-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io

You can read more at: https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/
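
For the Group subject above to take effect for your IAM role, that role also has to be mapped to the eks-console-dashboard-full-access-group group in the aws-auth ConfigMap. A minimal sketch of one way to do that, assuming eksctl is available and is run with an identity that already has access to the cluster (substitute your own role ARN):

eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::012345678910:role/my-cluster-role \
  --group eks-console-dashboard-full-access-group \
  --no-duplicate-arns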

Update the role in your kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws

Add the role details to the config:

      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws
      env:
      - name: AWS_PROFILE
        value: my-prod

Or else, using aws-vault:

  - --role-arn
  - arn:aws:iam::1213:role/eks-cluster-admin-role-dfasf
  command: aws-vault
  env: null

There is apparently a mismatch between the IAM identity that created the cluster and the one taken from your kubeconfig file while authenticating to your EKS cluster. You can tell by the RBAC error output.

A quote from the aws eks CLI reference:

--role-arn (string) To assume a role for cluster authentication, specify an IAM role ARN with this option. For example, if you created a cluster while assuming an IAM role, then you must also assume that role to connect to the cluster the first time.

Probable solution: please update your kubeconfig file accordingly with the command:

aws eks update-kubeconfig --region us-east-1 --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role
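
To verify that the role is now being used for authentication and grants what you expect, a quick check (assuming the kubeconfig update above succeeded) could be:

kubectl auth can-i list deployments.apps --all-namespaces
kubectl get deployments --all-namespaces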
