
Django Celery Periodic Tasks Run But RabbitMQ Queues Aren't Consumed

After running tasks through celery's periodic task scheduler, why do I have so many unconsumed queues in RabbitMQ?

Setup

  • Django web application running on Heroku
  • Tasks scheduled via celery beat
  • Tasks run via celery workers
  • Message broker is RabbitMQ from CloudAMQP

Procfile

web: gunicorn --workers=2 --worker-class=gevent --bind=0.0.0.0:$PORT project_name.wsgi:application
scheduler: python manage.py celery worker --loglevel=ERROR -B -E --maxtasksperchild=1000
worker: python manage.py celery worker -E --maxtasksperchild=1000 --loglevel=ERROR

settings.py

CELERYBEAT_SCHEDULE = {
    'do_some_task': {
        'task': 'project_name.apps.appname.tasks.some_task',
        'schedule': datetime.timedelta(seconds=60 * 15),
        'args': ''
    },
}
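For what it's worth, the `'schedule'` value above is an ordinary `datetime.timedelta`, so the interval can be sanity-checked in a Python shell:

```python
import datetime

# The same interval used in CELERYBEAT_SCHEDULE above
interval = datetime.timedelta(seconds=60 * 15)

print(interval)                  # 0:15:00 -- a 15-minute interval
print(interval.total_seconds())  # 900.0
```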

tasks.py

@celery.task
def some_task():
    # Get some data from external resources
    # Save that data to the database
    # No return value specified
    pass

Result

Every time the task ran, I would get (via the RabbitMQ web interface):

  • Another message in the "Ready" state under my "Queued Messages"
  • An additional queue with a single message in the "Ready" state
    • This queue had no listed consumers

It ended up being my setting for CELERY_RESULT_BACKEND.

Previously, it was:

CELERY_RESULT_BACKEND = 'amqp'

After I changed it to the following, I no longer had unconsumed messages/queues in RabbitMQ:

CELERY_RESULT_BACKEND = 'database'

What appears to have been happening is that, after a task was executed, celery sent information about that task back through RabbitMQ, but nothing was set up to consume those response messages, so a bunch of unread messages ended up sitting in the queues.

Note: this means that celery will now add database records for task results. To keep my database from filling up with useless records, I added:

# Delete result records ("tombstones") from database after 4 hours
# http://docs.celeryproject.org/en/latest/configuration.html#celery-task-result-expires
CELERY_TASK_RESULT_EXPIRES = 14400
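14400 is just 4 hours expressed in seconds. If the bare number is hard to read at a glance, the same value can be derived with a `timedelta` (this is only a readability tweak, assuming the setting takes an integer number of seconds as shown above):

```python
from datetime import timedelta

# 4 hours, written explicitly instead of as a bare 14400
CELERY_TASK_RESULT_EXPIRES = int(timedelta(hours=4).total_seconds())
```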

Relevant portion of settings.py

########## CELERY CONFIGURATION
import djcelery
# https://github.com/celery/django-celery/
djcelery.setup_loader()

INSTALLED_APPS = INSTALLED_APPS + (
    'djcelery',
)

# Compress all the messages using gzip
# http://celery.readthedocs.org/en/latest/userguide/calling.html#compression
CELERY_MESSAGE_COMPRESSION = 'gzip'

# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-transport
BROKER_TRANSPORT = 'amqplib'

# Set this number to the amount of allowed concurrent connections on your AMQP
# provider, divided by the amount of active workers you have.
#
# For example, if you have the 'Little Lemur' CloudAMQP plan (their free tier),
# they allow 3 concurrent connections. So if you run a single worker, you'd
# want this number to be 3. If you had 3 workers running, you'd lower this
# number to 1, since 3 workers each maintaining one open connection = 3
# connections total.
#
# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-pool-limit
BROKER_POOL_LIMIT = 3
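The rule of thumb described in the comment above can be sketched as a tiny helper (hypothetical, not part of Celery or CloudAMQP), just to make the arithmetic explicit:

```python
def broker_pool_limit(allowed_connections, active_workers):
    """Divide the broker's connection allowance evenly across workers,
    never going below one connection per worker."""
    return max(1, allowed_connections // active_workers)

# CloudAMQP "Little Lemur" free tier allows 3 concurrent connections:
print(broker_pool_limit(3, 1))  # one worker gets all 3 connections
print(broker_pool_limit(3, 3))  # three workers get 1 connection each
```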

# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-connection-max-retries
BROKER_CONNECTION_MAX_RETRIES = 0

# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-url
BROKER_URL = os.environ.get('CLOUDAMQP_URL')

# Previously, had this set to 'amqp'; this resulted in many unread /
# unconsumed queues and messages in RabbitMQ
# See: http://docs.celeryproject.org/en/latest/configuration.html#celery-result-backend
CELERY_RESULT_BACKEND = 'database'

# Delete result records ("tombstones") from database after 4 hours
# http://docs.celeryproject.org/en/latest/configuration.html#celery-task-result-expires
CELERY_TASK_RESULT_EXPIRES = 14400
########## END CELERY CONFIGURATION

Looks like you are getting back responses from your consumed tasks.

You can avoid that by doing:

@celery.task(ignore_result=True)
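To illustrate what `ignore_result` buys you, here is a minimal stand-in (hypothetical toy code, not the real Celery API): when a task is flagged with `ignore_result`, the worker never publishes its return value to the result backend, so no reply queue is created for it:

```python
results_store = {}  # stands in for the result backend (e.g. an AMQP reply queue)

def run_task(func, ignore_result=False):
    """Toy worker loop: run the task, optionally storing its result."""
    value = func()
    if not ignore_result:
        # This stored result is what piles up unconsumed in RabbitMQ
        results_store[func.__name__] = value
    return value

def some_task():
    return "fetched and saved"

run_task(some_task, ignore_result=True)  # nothing lands in the backend
print(results_store)                     # {}
```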

