
Tasks Stuck in Celery Queue

I had been trying to implement a task queue with Redis Queue (RQ), which sits on top of Redis. I abandoned that and moved to Celery on top of RabbitMQ because of the problems described here: Redis Queue blocking

I reference the above (unanswered) SO question because I believe the two issues are similar enough to be potentially linked, whether through my code or my setup.

I am able to send tasks to my Celery queue, and I can see them sitting there, either by running rabbitmqctl list_queues inside my RabbitMQ docker container, or by calling:

>>> add_nums.delay(2, 3)
<AsyncResult: 197315b1-e18b-4945-bf0a-cc6b6b829bfb>
>>> result = add_nums.AsyncResult('197315b1-e18b-4945-bf0a-cc6b6b829bfb')

where

>>> result.status
'PENDING'

regardless of how many times I check.
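
For completeness, a minimal polling sketch of how I watch the state (using the add_nums task defined below):

import time
from workerA import add_nums

result = add_nums.delay(2, 3)
for _ in range(10):
    print(result.status)  # stays 'PENDING' in my case
    if result.ready():    # True once a result has been stored
        break
    time.sleep(1)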

I have tried adding ignore_result=True to the task decorator, but this has no effect.
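
Worth noting: ignore_result=True tells Celery not to store the task's return value at all, so a task declared that way reports PENDING by design even after it has run. A minimal sketch of the distinction (task names here are purely illustrative, using the celery app object from workerA.py below):

# with ignore_result=True the return value is never written to the
# result backend, so its AsyncResult stays 'PENDING' even after the task runs
@celery.task(ignore_result=True)
def add_ignored(a, b):
    return a + b

# without it, Celery stores the return value and result.get() can succeed
@celery.task
def add_tracked(a, b):
    return a + b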

My worker module:

./workerA.py

from celery import Celery
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

# Celery configuration
CELERY_BROKER_URL = 'amqp://***:***@rabbit:5672/'  # asterisks hide user, pwd
CELERY_RESULT_BACKEND = 'rpc://'

# Initialize Celery
celery = Celery('workerA',
                broker=CELERY_BROKER_URL,
                backend=CELERY_RESULT_BACKEND)

@celery.task(ignore_result=True)  # tried both with and without this flag
def add_nums(a, b):
    logger.info(f'{a+b=}')
    return a + b

My main:

./app.py

import logging
from flask.logging import default_handler
from workerA import add_nums
from workerB import sub_nums
from flask import (
    Flask,
    request,
    jsonify,
)

logger = logging.getLogger()
logger.addHandler(default_handler)
logger.setLevel(logging.INFO)

app = Flask(__name__)

@app.route('/')
def index():
    return 'hello world!'

@app.route('/add')
def add():
    logger.info('in add method')
    first_num, second_num = (1, 2)

    logger.info(f'{first_num=}')

    result = add_nums.delay(first_num, second_num)
    logger.info(f'{result=}')
    logger.info(f'{result.state=}')

    # result.result is None here unless the task has already completed
    return jsonify({'result': result.result}), 200

@app.route('/subtract')
def subtract():
    logger.info('in sub method')
    first_num, second_num = (1, 2)

    result = sub_nums.delay(first_num, second_num)
    logger.info(f'{result=}')

    return jsonify({'result': result.result}), 200

if __name__ == '__main__':
    app.run(debug=True)

Calling result.get(timeout=n) always raises a timeout, no matter how high n is set: in short, these tasks never complete.
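
For reference, a minimal sketch of that check (any value of the timeout behaves the same for me):

from celery.exceptions import TimeoutError
from workerA import add_nums

result = add_nums.delay(2, 3)
try:
    print(result.get(timeout=30))  # blocks until the result arrives
except TimeoutError:
    print('no result after 30s; state is still', result.state)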

For completeness, my docker-compose.yml:

version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - 5000:5000
    command: python ./app.py -h 0.0.0.0
    depends_on:
      - rabbit
    volumes:
      - .:/app
  rabbit:
    hostname: rabbit
    image: rabbitmq:management
    environment:
      - RABBITMQ_DEFAULT_USER=***
      - RABBITMQ_DEFAULT_PASS=***
    ports:
      - "5673:5672"
      - "15672:15672"
  worker_1:
    build:
      context: .
    hostname: worker_1
    entrypoint: celery
    command: -A workerA worker --loglevel=info -Q workerA
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
  worker_2:
    build:
      context: .
    hostname: worker_2
    entrypoint: celery
    command: -A workerB worker --loglevel=info -Q workerB
    volumes:
      - .:/app
    links:
      - rabbit
    depends_on:
      - rabbit
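
One detail worth spelling out about this setup: the workers are started with -Q workerA and -Q workerB, so they consume only from those named queues, while .delay() publishes to Celery's default celery queue unless told otherwise. A minimal sketch of the task_routes setting that would direct each task to its worker's queue (added to the Celery config in the worker modules):

# route each task to the queue its worker consumes from; without routing,
# .delay() publishes to the default 'celery' queue, which neither worker reads
celery.conf.task_routes = {
    'workerA.add_nums': {'queue': 'workerA'},
    'workerB.sub_nums': {'queue': 'workerB'},
}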

and my Dockerfile:

FROM python:3

ADD requirements.txt /app/requirements.txt

WORKDIR /app/

RUN pip install -r requirements.txt

EXPOSE 5000

I am using Docker Desktop for Mac 2.2.0.0 on macOS 10.15.2 (Catalina).

Any help on this issue would be greatly appreciated; these queue problems have become a serious blocker for me.

It would appear that the cause of this problem is that no result backend is actually configured to store the results. Instantiating the Celery object with Celery(..., backend='rpc://') seemingly does nothing other than silence the "NotImplementedError: No result backend is configured" error you would otherwise get. I believe the documentation is misleading in this respect.

Off to trial a Redis result backend for performance. I also have Elasticsearch and MongoDB in use elsewhere in my application, either of which I could target, but I fancy Redis more. Will feed back results when this is done, after lunch.
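
For anyone following along, a minimal sketch of what that swap would look like (assuming a redis service reachable at hostname redis on the default port; database number 0 is arbitrary):

from celery import Celery

CELERY_BROKER_URL = 'amqp://***:***@rabbit:5672/'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'  # assumed redis service name

celery = Celery('workerA',
                broker=CELERY_BROKER_URL,
                backend=CELERY_RESULT_BACKEND)

The docker-compose.yml would also need a matching redis service (e.g. image: redis:alpine) with the web and worker containers depending on it.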
