
GKE RBAC role / rolebinding to access node status in the cluster

I can't get a rolebinding right in order to read node status from an app running in a pod on GKE.

I am able to create a pod from there, but not get node status. Here is the role I am creating:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
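
For reference, a ClusterRoleBinding that grants this role to the pod's service account would look roughly like the sketch below (the subject sa-poc in namespace default is taken from the error message that follows; the binding name is illustrative):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader-binding # illustrative name
subjects:
- kind: ServiceAccount
  name: sa-poc # service account from the error below
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io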

This is the error I get when I do a getNodeStatus:

{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "nodes \"gke-cluster-1-default-pool-36c26e1e-2lkn\" is forbidden: User \"system:serviceaccount:default:sa-poc\" cannot get nodes/status at the cluster scope: Unknown user \"system:serviceaccount:default:sa-poc\"",
    "reason": "Forbidden",
    "details": {
        "name": "gke-cluster-1-default-pool-36c26e1e-2lkn",
        "kind": "nodes"
    },
    "code": 403
}

I tried with some minor variations but did not succeed.

Kubernetes version on GKE is 1.8.4-gke.1

Any help appreciated, thanks!

Permissions on a subresource are expressed as <resource>/<subresource>, so in that role you would specify resources: ["nodes", "nodes/status"].
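
Concretely, that makes the role from the question look like this (unchanged except for the added subresource):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["nodes", "nodes/status"] # nodes/status is the subresource being read
  verbs: ["get", "watch", "list"]

After applying it, one way to check the permission is kubectl auth can-i get nodes/status --as=system:serviceaccount:default:sa-poc.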
