RabbitMQ with Django Project and Scrapy to Push and Consume Messages

I have a Django application that works with Celery and uses RabbitMQ as the message broker. I have a separate Scrapy project that scrapes data, and I want to push the scraped data into RabbitMQ so that Django can consume those messages through Celery. I need help consuming the messages that the Scrapy project pushes to RabbitMQ.

Code snippets:

Scrapy pipeline

def process_item(self, item, spider):
    publish_message(item)    
    return item

def publish_message(data):
    import pika
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost', port=5672))
    channel = connection.channel()
    channel.basic_publish(exchange='topic', routing_key='scrapy', body='Hello From scrapy!')
    connection.close()

In the Django app, consumers.py:

import pika


connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', heartbeat=600,
                                                               blocked_connection_timeout=300))
channel = connection.channel()

def callback(ch, method, properties, body):
    print(" data =============== ", body)
    # I will call the celery task here once the code prints the data, to make sure it's running. Unfortunately, it's not running. :(
    return


channel.basic_consume(queue='scrapy', on_message_callback=callback, auto_ack=True)
print("Started Consuming...")
channel.start_consuming()
connection.close()

celery.py

import os

from celery import Celery
from kombu import Exchange, Queue

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings.development')

celery_app = Celery('my_project', broker='amqp://guest:guest@rabbit:5672', backend='rpc://0.0.0.0:5672')
celery_app.config_from_object('django.conf:settings', namespace='CELERY')
celery_app.autodiscover_tasks()

celery_app.conf.update(
    worker_max_tasks_per_child=1,
    broker_pool_limit=None
)

default_exchange = Exchange('default', type='topic')
scrapy_exchange = Exchange('scrapy', type='topic')

celery_app.conf.task_queues = (
    Queue('scrapy', scrapy_exchange, routing_key='scrapy.#'),
)
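
For reference, here is a minimal sketch of a task that a worker could run against this setup, assuming a hypothetical tasks.py in one of the Django apps and a hypothetical task name process_scraped_item (picked up by celery_app.autodiscover_tasks()); the worker would be started with something like celery -A my_project worker -Q scrapy -l info:

from celery import shared_task

@shared_task
def process_scraped_item(data):
    # Placeholder: persist or post-process the scraped payload here.
    print("Received scraped data:", data)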

You didn't declare a queue (or bind it to the exchange) when consuming. Try this:

Publisher

def process_item(self, item, spider):
    publish_message(item)
    return item

def publish_message(data):
    import pika
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost', port=5672))
    channel = connection.channel()
    channel.exchange_declare(exchange='topic')
    channel.basic_publish(exchange='topic', routing_key='scrapy', body='Hello From scrapy!')
    connection.close()
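
If you want the consumer to receive the scraped fields rather than a fixed string, one option (a sketch, assuming the item's fields are JSON-serializable) is to serialize the item into the message body:

import json

def publish_message(data):
    import pika
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost', port=5672))
    channel = connection.channel()
    channel.exchange_declare(exchange='topic')
    # dict() works for standard Scrapy Item objects; adjust if your fields need custom serialization.
    channel.basic_publish(exchange='topic', routing_key='scrapy',
                          body=json.dumps(dict(data)))
    connection.close()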

Consumer

import pika


connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', heartbeat=600,
                                                               blocked_connection_timeout=300))
channel = connection.channel()

def callback(ch, method, properties, body):
    print(" data =============== ", body)
    # I will call the celery task here once the code prints the data, to make sure it's running.
    return 
channel.exchange_declare(exchange='topic')
channel.queue_declare(queue='scrapy')
channel.queue_bind(exchange='topic', queue='scrapy', routing_key='scrapy')
channel.basic_consume(queue='scrapy', on_message_callback=callback, auto_ack=True)
print("Started Consuming...")
channel.start_consuming()
connection.close()
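
Once messages start arriving, the callback can hand them off to Celery as the comment suggests; a minimal sketch, assuming a hypothetical task process_scraped_item defined in one of the Django apps' tasks.py:

def callback(ch, method, properties, body):
    # Hypothetical import; replace my_app with the app that defines the task.
    from my_app.tasks import process_scraped_item
    print(" data =============== ", body)
    # Hand the raw message body to a Celery task for processing.
    process_scraped_item.delay(body.decode())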
