How to auto scale helm chart statefulsets
I have installed a RabbitMQ cluster using a Helm chart. RabbitMQ uses StatefulSets, so is there any way to autoscale it?
Also, one more question: how do I autoscale (HPA) a Deployment that has a PVC?
StatefulSets can be autoscaled with an HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: some-service
spec:
  maxReplicas: 4
  minReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 80
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: some-service
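Note that utilization-based metrics like the ones above only work if the target pods declare resource requests, since the HPA computes utilization as a percentage of the requested amount. A minimal sketch of the relevant part of the StatefulSet's pod template (the container name and values are illustrative):

```yaml
# Fragment of a StatefulSet pod template. Without these requests,
# a targetAverageUtilization-based HPA cannot compute utilization.
spec:
  template:
    spec:
      containers:
      - name: rabbitmq          # illustrative container name
        resources:
          requests:
            cpu: 500m           # 80% target means the HPA aims for ~400m average usage
            memory: 1Gi         # 80% target means the HPA aims for ~820Mi average usage
```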
Regarding PVC, StatefulSets, and HPA - I'm not sure, but I think that depends on the reclaimPolicy of the StorageClass of your PVC. Just make sure you have
reclaimPolicy: Retain
in your StorageClass definition. With that, your data should be preserved across scaling events.
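As a sketch, a StorageClass with that policy might look like this (the class name and provisioner are assumptions; substitute whatever your cluster actually provides):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd               # illustrative name
provisioner: kubernetes.io/gce-pd  # assumption: replace with your cluster's provisioner
reclaimPolicy: Retain              # underlying volume survives deletion of the PVC
volumeBindingMode: WaitForFirstConsumer
```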
If you mean Deployments with HPA and PVC - it should work, but always remember that if you have multiple replicas sharing one PVC, all replicas will try to mount it. If the PVC is ReadWriteMany, there should be no issues. If it is ReadWriteOnce, then all replicas will be scheduled on the same node. If there are not enough resources on that node to fit all replicas, some pods will stay in Pending state forever.
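For the shared-PVC case, the access mode is declared on the claim. A minimal sketch (the names and size are illustrative, and the backing StorageClass must actually support ReadWriteMany, e.g. one backed by NFS or CephFS):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data             # illustrative name
spec:
  accessModes:
  - ReadWriteMany               # replicas can mount it from any node
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-client  # assumption: a class backed by RWX-capable storage
```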