Enabling RBAC on Existing GKE Cluster

We have been running a cluster on GKE for around three years. As such, legacy authorization is enabled.

The control plane has been getting updated automatically, and our node pools are running a mixture of 1.12 and 1.14.

We have an increasing number of services, and are planning on incrementally adopting istio.

We want to enable a minimal RBAC setup without causing errors and downtime of our services.

I haven't been able to find any guides for how to accomplish this. Some people say just to enable RBAC authorization on the GKE cluster, but I assume that would take down all of our services.

It has also been implied that k8s can run in a hybrid ABAC/RBAC mode, but we can't tell if it is or not!
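One way to check whether legacy (ABAC) authorization is still enabled is to inspect the cluster description. This is a sketch; the cluster name and zone are placeholders for your own values:

```shell
# Print whether legacy ABAC authorization is enabled on the cluster
# ("my-cluster" and "us-central1-a" are placeholder values)
gcloud container clusters describe my-cluster \
  --zone us-central1-a \
  --format="value(legacyAbac.enabled)"
# An empty result or "False" means only RBAC is in effect;
# "True" means the cluster is running in the hybrid ABAC+RBAC mode.
```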

Is there a good guide for migrating to RBAC for GKE?

If your cluster is regional, your applications won't experience downtime during the upgrade. But if your cluster is single-zonal or multi-zonal, the best approach is:

  1. Add a new node pool
  2. Cordon the old node pool to migrate the applications to the new node pool
  3. Delete the old node pool once all pods have been migrated.

This is the safest way to update a zonal node pool without downtime. Please read the references below to understand every step in detail.
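The three steps above can be sketched with gcloud and kubectl. The pool names, cluster name, zone, and node count below are placeholders, and the exact drain flags may vary by kubectl version:

```shell
# 1. Create a new node pool ("new-pool", "my-cluster", and the zone are placeholders)
gcloud container node-pools create new-pool \
  --cluster my-cluster --zone us-central1-a --num-nodes 3

# 2. Cordon every node in the old pool so no new pods are scheduled there,
#    then drain each node to evict its pods onto the new pool.
#    GKE labels nodes with their pool name, so we can select by label.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o name); do
  kubectl cordon "$node"
done
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=old-pool -o name); do
  # --delete-local-data may be called --delete-emptydir-data on newer kubectl
  kubectl drain "$node" --ignore-daemonsets --delete-local-data
done

# 3. Delete the old node pool once all pods have been rescheduled
gcloud container node-pools delete old-pool \
  --cluster my-cluster --zone us-central1-a
```

Draining one node at a time, as above, lets the scheduler move pods gradually instead of evicting everything at once.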

References:

  - https://kubernetes.io/docs/concepts/architecture/nodes/#reliability
  - https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-nodes-and-cluster
