Celery Does Not Process Task in Kubernetes with Redis

I'm running a Kubernetes cluster with three Celery pods, using a single Redis pod as the message queue. Celery version 4.1.0, Python 3.6.3, standard Redis pod from Helm.

At a seemingly quick influx of tasks, the Celery pods stop processing tasks altogether. They are fine for the first few tasks, but eventually stop working and my tasks hang.

My tasks follow this format:

@app.task(bind=True)  # bind=True passes the task instance as the first argument
def my_task(self, some_param):
    result = get_data(some_param)

    if result != expectation:
        self.retry(throw=False, countdown=5)

And are generally queued as follows:

from my_code import my_task
my_task.apply_async(queue='worker', kwargs=celery_params)

The relevant portion of my deployment.yaml:

command: ["celery", "worker", "-A", "myapp.implementation.celery_app", "-Q", "http"]

The only difference between this cluster and my local cluster, which I manage with docker-compose, is that the cluster runs a prefork pool while locally I run an eventlet pool so I can put together a code coverage report. I've tried running eventlet on the cluster, but I see no difference in the results; the tasks still hang.

Is there something I'm missing about running a Celery worker in Kubernetes? Is there a bug that could be affecting my results? Are there any good ways to break into the cluster to see what's actually happening with this issue?

Running the Celery tasks without apply_async allowed me to debug this issue, showing that there was a concurrency logic error in the Celery tasks. I highly recommend this method of debugging Celery tasks.

Instead of:

from my_code import my_task

celery_params = {'key': 'value'}
my_task.apply_async(queue='worker', kwargs=celery_params)

I used:

from my_code import my_task

celery_params = {'key': 'value'}
my_task(**celery_params)

This allowed me to locate the concurrency issue. After I had found the bug, I converted the code back to an asynchronous method call using apply_async.
