I set up Kubernetes on CoreOS on bare metal using the generic install scripts. It's running the current stable release, 1298.6.0, with Kubernetes version 1.5.4.
We'd like to have a highly available master setup, but we don't currently have enough hardware to dedicate three servers solely to serving as Kubernetes masters, so I would like to allow user pods to be scheduled on the Kubernetes master. I set --register-schedulable=true in /etc/systemd/system/kubelet.service, but the node still showed up as SchedulingDisabled.
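For reference, a flag like that is typically set through a systemd drop-in along these lines; a minimal sketch, where the drop-in path and environment-variable name are assumptions, not the exact unit from this cluster:

```ini
# Hypothetical drop-in: /etc/systemd/system/kubelet.service.d/10-schedulable.conf
# Appends the flag to the kubelet invocation without editing the main unit.
[Service]
Environment="KUBELET_EXTRA_ARGS=--register-schedulable=true"
```

After adding a drop-in you would need `systemctl daemon-reload` and a kubelet restart for it to take effect.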
I tried adding the settings needed to register the node as a worker: copying worker TLS certs into /etc/kubernetes/ssl, referencing them in kubelet.service, adding an /etc/kubernetes/worker-kubeconfig.yaml that pointed to those certs, and adding that information to /etc/kubernetes/manifests/kube-proxy.yaml, using my existing worker nodes as a template. This registered a second node under the master's hostname, and then both it and the original master node showed up as NotReady,SchedulingDisabled.
This question indicates that scheduling pods on the master node should be possible, but there is barely anything else that I can find on the subject.
If you are using Kubernetes 1.7 or later:
kubectl taint node mymasternode node-role.kubernetes.io/master:NoSchedule-
To untaint all masters at once, use:
kubectl taint nodes --all node-role.kubernetes.io/master-
First, get the name of the master node:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
yasin Ready master 11d v1.13.4
As we can see, there is one node named yasin and its role is master. If we want to use it as a worker too, we should run:
kubectl taint nodes yasin node-role.kubernetes.io/master-
For anyone using kops on AWS: I wanted to enable scheduling of Pods on the master.
$ kubectl get nodes -owide
was giving me this output:
NAME STATUS
...
...
ip-1**-**-**-***.********.compute.internal Ready node
ip-1**-**-**-***.********.master.internal Ready,SchedulingDisabled master
^^^^^^^^^^^^^^^^^^
ip-1**-**-**-***.********.compute.internal Ready node
...
...
And $ kubectl describe nodes ip-1**-**-**-***.********.master.internal showed:
...
...
Taints: <none>
Unschedulable: true
... ^^^^
...
Patching the master with this command:
$ kubectl patch node MASTER_NAME -p "{\"spec\":{\"unschedulable\":false}}"
worked for me, and scheduling of Pods is now enabled. (kubectl uncordon MASTER_NAME achieves the same result.)
Ref: https://github.com/kubernetes/kops/issues/639#issuecomment-287015882
I don't know why the master node shows up as NotReady; it shouldn't. Try executing kubectl describe node mymasternode to find out.
The SchedulingDisabled status is because the master node is tainted with dedicated=master:NoSchedule. Execute this command against all your masters to remove the taint:
kubectl taint nodes mymasternode dedicated-
To understand why that works, read up on taints and tolerations.
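As an alternative to removing the taint cluster-wide, a single workload can opt in to running on the master by declaring a matching toleration in its pod spec. A minimal sketch for the dedicated=master:NoSchedule taint described above (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: master-tolerant-pod   # placeholder name
spec:
  tolerations:
  # Matches the dedicated=master:NoSchedule taint, so the scheduler
  # may place this pod on the tainted master.
  - key: "dedicated"
    operator: "Equal"
    value: "master"
    effect: "NoSchedule"
  containers:
  - name: app
    image: busybox            # placeholder image
    command: ["sleep", "3600"]
```

This keeps the taint in place, so ordinary pods still avoid the master while this one is allowed on it.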
Remove the taint from all masters:
kubectl taint node --all node-role.kubernetes.io/master:NoSchedule-
Verify that no taints remain:
kubectl describe node | egrep -i taint
Taints: <none>
Test that a pod can now be scheduled:
kubectl run -it busybox-$RANDOM --image=busybox --restart=Never -- date
This answer is a combination of other SO answers, from Victor G, Aryak Sengupta, and others.
Official Kubernetes documentation: node-role-kubernetes-io-master
For versions v1.20 and later, the solution is:
kubectl taint node <master-node> node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint node <master-node> node-role.kubernetes.io/master:NoSchedule-
Another way: list all taints on the nodes, then untaint the tainted one.
root@lab-a:~# kubectl get nodes -o json | jq ".items[]|{name:.metadata.name, taints:.spec.taints}"
{
"name": "lab-a",
"taints": null
}
{
"name": "lab-b",
"taints": [
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]
}
lab-a does not have any taints, so we untaint lab-b:
root@lab-a:~# kubectl taint node lab-b node-role.kubernetes.io/master:NoSchedule-
node/lab-b untainted
Install jq on Ubuntu with: apt-get install jq
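The same jq filter can be tried without a live cluster. A minimal sketch, where the node names and JSON are a fabricated sample standing in for real `kubectl get nodes -o json` output:

```shell
# Fabricated sample of `kubectl get nodes -o json` output (two nodes,
# only the second one tainted), written to a temp file for the demo.
cat > /tmp/nodes-sample.json <<'EOF'
{"items":[
  {"metadata":{"name":"lab-a"},"spec":{}},
  {"metadata":{"name":"lab-b"},
   "spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}
]}
EOF

# Same filter as above: one {name, taints} object per node.
# Nodes without taints show "taints": null.
jq '.items[]|{name:.metadata.name, taints:.spec.taints}' /tmp/nodes-sample.json
```

Against a real cluster you would pipe `kubectl get nodes -o json` into the same filter.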
Since OpenShift 4.x, CoreOS is directly integrated into the Kubernetes configuration; you can make all masters schedulable this way:
# set the field spec.mastersSchedulable to true
$ oc patch schedulers.config.openshift.io cluster --type json \
-p '[{"op": "add", "path": "/spec/mastersSchedulable", "value": true}]'
or by running oc edit schedulers.config.openshift.io cluster and setting the field:
spec:
mastersSchedulable: true
The answer is
kubectl taint nodes --all node-role.kubernetes.io/master-
according to: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#control-plane-node-isolation