
How to auto-scale Kubernetes Pods based on number of tasks in celery task queue?

I have a Celery worker deployed on Kubernetes pods that executes a task (not very CPU-intensive, but it takes some time to complete due to some HTTP calls). Is there any way to autoscale the pods in K8s based on the number of tasks in the task queue?

Yes, by using the Kubernetes metrics registry and the Horizontal Pod Autoscaler.

First, you need to collect the "queue length" metric from Celery and expose it through one of the Kubernetes metric APIs. You can do this with a Prometheus-based pipeline:

  1. Since Celery doesn't expose Prometheus metrics, you need to install an exporter that exposes some information about Celery (including the queue length) as Prometheus metrics. For example, this exporter.
  2. Install Prometheus in your cluster and configure it to collect the metric corresponding to the task queue length from the Celery exporter.
  3. Install the Prometheus Adapter in your cluster and configure it to expose the "queue length" metric through the Custom Metrics API by pulling its value from Prometheus (a sample adapter rule is sketched after this list).
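
A minimal sketch of such an adapter rule, assuming the exporter publishes a Prometheus series named celery_queue_length and is scraped through a Kubernetes Service (the series name, labels and aggregation are assumptions and must match what your exporter actually emits):

rules:
  - seriesQuery: 'celery_queue_length{namespace!="",service!=""}'   # series emitted by the Celery exporter (assumed name)
    resources:
      overrides:
        namespace: {resource: "namespace"}   # map Prometheus labels to the Kubernetes
        service: {resource: "service"}       # objects the metric will be attached to
    name:
      matches: "celery_queue_length"
      as: "celery_queue_length"              # name the HPA will reference
    metricsQuery: 'max(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'

With a rule like this, the metric shows up in the Custom Metrics API attached to the objects carrying those labels; the HPA examples on this page reference it through a Deployment or a Service, so adjust the resource mapping and the exposed name accordingly.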

Now you can configure the Horizontal Pod Autoscaler to query this metric from the Custom Metrics API and autoscale your app based on it.

For example, to scale the app between 1 and 10 replicas based on a target queue length of 5:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: mycelery_queue_length
        target:
          type: Value
          value: 5
        describedObject:
          apiVersion: apps/v1
          kind: Deployment
          name: mycelery
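
Before relying on the HPA, you can check that the metric is actually served by the Custom Metrics API. The commands below are a sketch: the namespace, object and metric names are the placeholders from the manifest above, and jq is only used for readability:

# List all metrics currently exposed through the Custom Metrics API
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .

# Query the metric for the described Deployment (assumed to live in the "default" namespace)
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/deployments.apps/mycelery/mycelery_queue_length" | jq .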

There are two parts to solving this problem: you need to collect the metrics from Celery and make them available to the Kubernetes API (through the Custom Metrics API). Then the HorizontalPodAutoscaler can query those metrics in order to scale based on them.

You can use Prometheus (for example) to collect metrics from Celery. Then you can expose those metrics to Kubernetes with the Prometheus Adapter, so that the metrics available in Prometheus become available to Kubernetes.
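
For the collection step, a minimal Prometheus scrape job pointed at the Celery exporter could look roughly like this (the job name, Service name and port are illustrative assumptions, not values from the answer):

scrape_configs:
  - job_name: celery-exporter              # hypothetical job name
    static_configs:
      - targets: ['celery-exporter:9808']  # assumed Service name and port of the Celery exporter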

You can now define a HorizontalPodAutoscaler for your application:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2alpha1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: sample-metrics-app
      metricName: celery_queue_length
      targetValue: 100
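
After applying the manifest, you can watch the autoscaler pick the metric up and adjust replicas; the file name below is just a placeholder:

kubectl apply -f sample-metrics-app-hpa.yaml
kubectl get hpa sample-metrics-app-hpa --watch    # TARGETS should show the current queue length against the 100 target
kubectl describe hpa sample-metrics-app-hpa       # events explain scaling decisions and any metric-retrieval errors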
