
Cluster autoscaler - auto label node in AWS

https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler

Use case: installing a deployment with a nodeSelector that no existing node label matches, so the autoscaler won't scale up. Is anyone aware whether the autoscaler can label a fresh EC2 instance when there is demand for one?

We are deploying big umbrella charts (60+ pods) that are replicas of real production environments. Some of the pods are crucial for the whole chart to work. If a chart gets spread among several nodes and one of those nodes has health problems, more than one environment is affected. Having each chart fully deployed on one node reduces the number of affected charts to one.
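For illustration, a minimal sketch of the situation described above (the deployment name, image, and the environment-id label are made up): a deployment pins its pods to a label that no current node carries, so the pods stay Pending and cluster-autoscaler has no matching group to scale.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: core-service                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      nodeSelector:
        environment-id: env-42       # no existing node has this label,
                                     # so the pod stays Pending and no scale-up happens
      containers:
        - name: app
          image: registry.example.com/app:latest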

thanks

There is no capability to create nodes with a new specification, and I don't think it is necessary either. Imagine a typo in a nodeSelector label bringing up new nodes; those new nodes would also be unknown to your IaC, which is another scary thing. Cluster autoscaler adds new nodes by scaling your autoscaling group, so each new node is identical to the other nodes in that group. If you want to hack around it, you can look at admission controllers, add the new label to existing nodes, or rewrite the label on the fly to one you already support. But do you really need to do that?

Have you tried using a separate node group for the pods that are important to you, with taints and tolerations so only those pods get scheduled on those nodes? The autoscaler will make sure you have enough nodes in that node group to run your pods.
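A minimal sketch of the pod-spec side of that approach, assuming the dedicated node group's nodes are tainted dedicated=umbrella:NoSchedule and labeled node-group=umbrella (both names are made up):

# Pod template fragment: steer pods onto the dedicated node group only
spec:
  nodeSelector:
    node-group: umbrella           # label carried by nodes in the dedicated group
  tolerations:
    - key: dedicated
      operator: Equal
      value: umbrella
      effect: NoSchedule           # tolerates the taint on those nodes

With this in place, other workloads are repelled by the taint, and cluster-autoscaler scales the dedicated group whenever these pods go Pending.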

Perfect use case for Karpenter:

  • Lets you add node labels that match your workload to freshly scaled-up nodes
  • Groupless, with a much faster reaction time
  • Declarative in Kubernetes as a Provisioner (see the sketch after this list)
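A rough sketch of what that could look like, assuming the older v1alpha5 Provisioner API; the provisioner name, label key/value, and limits are made up, and the providerRef is assumed to point at an existing AWSNodeTemplate named "default":

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: umbrella                   # hypothetical name
spec:
  labels:
    environment-id: env-42         # applied to every node this provisioner launches
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
  limits:
    resources:
      cpu: "100"                   # cap total capacity this provisioner may create
  providerRef:
    name: default                  # assumed AWSNodeTemplate with subnets/AMI settings
  ttlSecondsAfterEmpty: 30         # remove the node once it is empty

Because the labels are part of the provisioner spec, a Pending pod whose nodeSelector matches them triggers Karpenter to launch a node that already carries the required label.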
