How to make k8s cpu and memory HPA work together?
I'm using a k8s HPA template for CPU and memory like below:
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}-cpu
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  targetCPUUtilizationPercentage: {{.Values.hpa.cpu}}
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}-mem
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageValue: {{.Values.hpa.mem}}
Having two separate HPAs causes any new pod spun up by the memory HPA to be terminated immediately by the CPU HPA, because the pods' CPU usage is below the CPU scale-down threshold. The CPU HPA always terminates the newest pod, which keeps the older pods around and re-triggers the memory HPA, causing an infinite loop. Is there a way to instruct the CPU HPA to terminate pods with higher usage rather than the newest pods every time?
As per the suggestion in the comments, using a single HPA solved my issue. I just had to move the CPU metric to the same apiVersion as the memory HPA.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 100Mi
When created, the Horizontal Pod Autoscaler monitors the nginx Deployment for average CPU utilization and average memory utilization. It autoscales the Deployment based on the metric whose value would create the larger autoscale event.
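The "larger autoscale event" behavior follows from the standard HPA formula, desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue), evaluated per metric with the maximum result winning. A minimal sketch, using hypothetical metric readings for a 4-replica Deployment with the targets above:

```python
import math

def desired_replicas(current_replicas: int, current: float, target: float) -> int:
    """Standard HPA scaling formula: ceil(currentReplicas * currentValue / targetValue)."""
    return math.ceil(current_replicas * current / target)

# Hypothetical readings: pods average 80% CPU (target 50%) and 120Mi memory (target 100Mi).
cpu_proposal = desired_replicas(4, current=80, target=50)    # CPU metric proposes 7 replicas
mem_proposal = desired_replicas(4, current=120, target=100)  # memory metric proposes 5 replicas

# With both metrics in one HPA, the controller picks the larger proposal,
# so one metric can no longer scale down pods that another metric just added.
print(max(cpu_proposal, mem_proposal))  # 7
```

This is why the single-HPA version fixes the loop: there is only one desired replica count, computed as the maximum across all metrics, instead of two controllers fighting over the same Deployment.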
https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#kubectl-apply