New Kubernetes service account appears to have cluster admin permissions
I'm seeing strange behavior from newly created Kubernetes service accounts: their tokens appear to grant unlimited access to our cluster.
If I create a new namespace, create a new service account inside that namespace, and then use that service account's token in a new kubeconfig, I am able to perform all actions in the cluster.
# SERVER is the only variable you'll need to change to replicate on your own cluster
SERVER=https://k8s-api.example.com
NAMESPACE=test-namespace
SERVICE_ACCOUNT=test-sa
# Create a new namespace and service account
kubectl create namespace "${NAMESPACE}"
kubectl create serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}"
SECRET_NAME=$(kubectl get serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}" -o jsonpath='{.secrets[*].name}')
CA=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 --decode)
# Create the config file using the certificate authority and token from the newly created
# service account
echo "
apiVersion: v1
kind: Config
clusters:
- name: test-cluster
  cluster:
    certificate-authority-data: ${CA}
    server: ${SERVER}
contexts:
- name: test-context
  context:
    cluster: test-cluster
    namespace: ${NAMESPACE}
    user: ${SERVICE_ACCOUNT}
current-context: test-context
users:
- name: ${SERVICE_ACCOUNT}
  user:
    token: ${TOKEN}
" > config
Running that as a shell script yields a config file in the current directory. The problem is that, using that file, I'm able to read and edit all resources in the cluster. I'd like the newly created service account to have no permissions unless I explicitly grant them via RBAC.
# All pods are shown, including kube-system pods
KUBECONFIG=./config kubectl get pods --all-namespaces
# And I can edit any of them
KUBECONFIG=./config kubectl edit pods -n kube-system some-pod
I haven't added any role bindings to the newly created service account, so I would expect it to receive access-denied responses for all kubectl queries made with the newly generated config.
Below is the decoded payload of the test-sa service account's JWT that's embedded in config:
{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "test-namespace",
  "kubernetes.io/serviceaccount/secret.name": "test-sa-token-fpfb4",
  "kubernetes.io/serviceaccount/service-account.name": "test-sa",
  "kubernetes.io/serviceaccount/service-account.uid": "7d2ecd36-b709-4299-9ec9-b3a0d754c770",
  "sub": "system:serviceaccount:test-namespace:test-sa"
}
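As a side note, the payload above can be recovered from the raw token itself: a JWT is three base64url-encoded segments joined by dots, and the claims live in the second segment. A small sketch (the helper name is mine, not from the original post):

```shell
# Decode the claims (second dot-separated segment) of a JWT.
# JWTs use the base64url alphabet and drop padding, so translate
# the URL-safe characters back and re-pad before decoding.
decode_jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  case $(( ${#seg} % 4 )) in
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 --decode
}

# Usage: decode_jwt_payload "${TOKEN}"
```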
Things to consider:
- I see rbac.authorization.k8s.io/v1 and rbac.authorization.k8s.io/v1beta1 in the output of kubectl api-versions | grep rbac, as suggested in this post.
- It is notable that kubectl cluster-info dump | grep authorization-mode, as suggested in another answer to the same question, doesn't show any output. Could this suggest RBAC isn't actually enabled?
- The credentials I used to create the service account have cluster-admin role privileges, but I would not expect those to carry over to service accounts created with them.
Am I correct in my assumption that newly created service accounts should have extremely limited cluster access, and that the above scenario shouldn't be possible without permissive role bindings being attached to the new service account? Any thoughts on what's going on here, or ways I can restrict the access of test-sa?
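For completeness, the explicit-grant model described above would look something like this: a namespaced Role plus a RoleBinding for the service account (the role and binding names below are illustrative, not from the original post):

```shell
# Grant test-sa read-only access to pods, and nothing else,
# scoped to its own namespace.
kubectl create role pod-reader \
  -n test-namespace \
  --verb=get,list,watch \
  --resource=pods

kubectl create rolebinding test-sa-pod-reader \
  -n test-namespace \
  --role=pod-reader \
  --serviceaccount=test-namespace:test-sa
```

With only this binding in place, the generated kubeconfig should be able to list pods in test-namespace and get Forbidden errors everywhere else.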
You can check the permissions of the service account by running:
kubectl auth can-i --list --as=system:serviceaccount:test-namespace:test-sa
If you see the output below, that's the very limited set of permissions a service account gets by default.
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
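The same impersonation flag also works for spot-checking a single permission rather than listing everything; for a fresh service account with no role bindings, each of these should print "no":

```shell
# Each check runs as the service account via impersonation,
# so no separate kubeconfig is needed.
kubectl auth can-i get pods -n kube-system \
  --as=system:serviceaccount:test-namespace:test-sa
kubectl auth can-i create deployments -n test-namespace \
  --as=system:serviceaccount:test-namespace:test-sa
```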
I could not reproduce your issue on three different K8s versions in my test lab (including v1.15.3, v1.14.10-gke.17, and v1.11.7-gke.12 with basic auth enabled).
Unfortunately, token-based log-in activities are not recorded in the Audit Logs of the Cloud Logging console for GKE clusters. To my knowledge, only data-access operations that go through Google Cloud are recorded (IAM-based, i.e. kubectl using the google auth provider).
If your "test-sa" service account is somehow permitted to perform specific operations by RBAC, I would still try studying the Audit Logs of your GKE cluster. Maybe your service account is somehow being mapped to a Google service account and thus authorized.
You can always contact the official GCP support channel to troubleshoot your unusual case further.
It turns out an overly permissive cluster-admin ClusterRoleBinding was bound to the system:serviceaccounts group. This resulted in all service accounts in our cluster having cluster-admin privileges.
It seems that somewhere early in the cluster's life the following ClusterRoleBinding was created:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
WARNING: Never apply this rule to your cluster ☝️
We have since removed this overly permissive rule and right-sized all service account permissions.
Thank you to the folks who provided useful answers and comments on this question; they were helpful in tracking down this issue. This was a very dangerous RBAC configuration, and we are pleased to have it resolved.
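For anyone checking their own cluster for this class of mistake, one way to list every ClusterRoleBinding that grants cluster-admin, along with its subjects, is a sketch like the following (it assumes jq is installed; the filter is mine, not from the answer):

```shell
# Print each ClusterRoleBinding that references the cluster-admin
# role, followed by the kind/name of every subject it binds.
kubectl get clusterrolebindings -o json | jq -r '
  .items[]
  | select(.roleRef.name == "cluster-admin")
  | "\(.metadata.name): \((.subjects // []) | map("\(.kind)/\(.name)") | join(", "))"'

# Once the offending binding is identified, it can be removed with:
# kubectl delete clusterrolebinding serviceaccounts-cluster-admin
```

Any binding whose subjects include Group/system:serviceaccounts (or Group/system:authenticated) deserves immediate scrutiny.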