
Is there a way to prevent kubectl from de-registering kubernetes nodes?

I was testing some commands and I ran

$ kubectl delete nodes --all

and it deleted (de-registered) all the nodes, including the masters. Now I can't connect to the cluster (well, obviously, as the master is deleted).

Is there a way to prevent this, as anyone could accidentally do this?

Extra Info: I am using KOps for deployment.

PS: It does not delete the EC2 instances, and the nodes come back up after doing an EC2 instance reboot on all the instances.

By default, you are using something like a superuser who can do anything they want with the cluster.

To limit other users' access to the cluster, you can use RBAC authorization. With RBAC rules you can manage access and limits per resource and per action.

In a few words, to do that you need to:

  1. Create a new cluster with Kops using --authorization RBAC, or modify an existing one by adding the 'rbac' option to the 'authorization' section of the cluster's configuration:

    authorization:
      rbac: {}

  2. Now, we can follow the instructions from Bitnami to create a user. For example, let's create a user which has access only to the office namespace and only for a few actions. So, we need to create the namespace first:

    kubectl create namespace office

  3. Create a key and a certificate signing request for the new user:

    openssl genrsa -out employee.key 2048
    openssl req -new -key employee.key -out employee.csr -subj "/CN=employee/O=bitnami"

  4. Now, using your CA key (available in the S3 bucket under PKI), we need to sign the new certificate:

    openssl x509 -req -in employee.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out employee.crt -days 500

  5. Create the credentials:

    kubectl config set-credentials employee --client-certificate=/home/employee/.certs/employee.crt --client-key=/home/employee/.certs/employee.key

  6. Set the right context:

    kubectl config set-context employee-context --cluster=YOUR_CLUSTER_NAME --namespace=office --user=employee

  7. Now we have a user without access to anything. Let's create a new role with limited access; here is an example of a Role which grants access only to deployments, replicasets and pods, allowing to create, delete and modify them and nothing more. Create a file role-deployment-manager.yaml with the Role configuration:

    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      namespace: office
      name: deployment-manager
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["deployments", "replicasets", "pods"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

  8. Create a new file rolebinding-deployment-manager.yaml with a RoleBinding, which will attach your Role to the user:

    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: deployment-manager-binding
      namespace: office
    subjects:
    - kind: User
      name: employee
      apiGroup: ""
    roleRef:
      kind: Role
      name: deployment-manager
      apiGroup: ""

  9. Now apply those configurations:

    kubectl create -f role-deployment-manager.yaml
    kubectl create -f rolebinding-deployment-manager.yaml

So, now you have a user with limited access who cannot destroy your cluster.
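As a quick sanity check (my own addition, assuming the names used above: employee, employee-context and office), you can verify the new user's permissions with kubectl auth can-i, which supports impersonation via --as, or simply issue requests under the new context:

    # Should print "no": the employee user has no permission to delete nodes.
    kubectl auth can-i delete nodes --as=employee

    # Should print "yes": the Role allows managing pods in the office namespace.
    kubectl auth can-i create pods --namespace=office --as=employee

    # Or switch to the restricted context and try a real request.
    kubectl --context=employee-context get pods --namespace=office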

Anton Kostenko describes a good way of preventing what you've described. Below I give details of how you can ensure the apiserver remains accessible even if someone does accidentally delete all the node objects:

Losing connectivity to the apiserver by deleting node objects will only happen if the components necessary for connecting to the apiserver (e.g. the apiserver itself and etcd) are managed by a component (i.e. the kubelet) that depends on the apiserver being up. GKE, for example, can scale down to 0 worker nodes, leaving no node objects, but the apiserver will still be accessible.

As a specific example, my personal cluster has a single master node with all the control plane components described as static Pod manifests and placed in the directory referred to by the --pod-manifest-path flag of the kubelet on that master node. Deleting all the node objects as you did in the question caused all my workloads to go into a pending state, but the apiserver was still accessible in this case because the control plane components run regardless of whether the kubelet can reach the apiserver.
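To illustrate (this sketch is my own and not taken from any particular distribution), a static Pod manifest is just an ordinary Pod spec dropped into the kubelet's manifest directory; the kubelet starts it without asking the apiserver. The path /etc/kubernetes/manifests and the image tag below are assumptions, and a real kube-apiserver manifest needs many more flags (certificates, etcd endpoints, service account keys, and so on):

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- the directory must match
    # the kubelet's --pod-manifest-path (or staticPodPath in its config file).
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.28.0  # illustrative tag
        command:
        - kube-apiserver
        - --etcd-servers=http://127.0.0.1:2379         # plus many more flags in practice
        - --secure-port=6443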

Common ways to prevent what you've just described are to run the apiserver and etcd as static manifests managed by the kubelet, as I just described, or to run them independently of any kubelet, perhaps as systemd units.
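For the systemd variant, a unit file along these lines is the usual shape (a sketch only; the binary path, flags and the dependency on a local etcd unit are assumptions, not something KOps generates for you):

    # /etc/systemd/system/kube-apiserver.service -- illustrative sketch only.
    [Unit]
    Description=Kubernetes API Server
    After=etcd.service
    Wants=etcd.service

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
      --etcd-servers=http://127.0.0.1:2379 \
      --secure-port=6443
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target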
