
How to requeue messages when the spider errors, using Scrapy and RabbitMQ (pika)

I am trying to use pika and Scrapy to run a message queue and have the consumer invoke the spider. I have a consumer.py and a Scrapy spider spider.py.

The spider runs inside the consumer with arguments sent by the producer. I use used_channel.basic_ack(delivery_tag=basic_deliver.delivery_tag) to remove the message.

I expect the message to be removed when the spider finishes its job, and to be requeued if there is an error. When the spider runs normally everything looks fine: the message is removed and the job completes. However, if an error occurs while the spider is running, the message is still removed even though the job did not finish, so the message is lost.

I checked the RabbitMQ management UI and saw the message count drop to 0 while the spider was still running (the console had not yet shown that the job was done).

I wonder whether this is because Scrapy is asynchronous? So while the line run_spider(message=decodebody) is still running, the next line used_channel.basic_ack(delivery_tag=basic_deliver.delivery_tag) does not wait for the spider to finish.

How can I fix this? I want to remove the message only after the spider has finished the job correctly.
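For reference, CrawlerRunner.crawl() does return a Twisted Deferred and comes back immediately, which would explain the ack firing before the crawl ends. A minimal sketch of a blocking wrapper, assuming crochet is in use (the setup() call in the code below suggests it is); the 600-second timeout is just a placeholder:

from crochet import setup, wait_for
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

setup()
settings = get_project_settings()

@wait_for(timeout=600.0)  # placeholder timeout; crochet raises TimeoutError if the crawl takes longer
def run_spider_blocking(message):
    crawler = CrawlerRunner(settings)
    # crawl() returns a Deferred; wait_for blocks the calling thread until it fires
    return crawler.crawl(MySpider, message=message)
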

import pika
from crochet import setup  # assumed: setup() comes from crochet, so CrawlerRunner can run outside a Scrapy script
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

# MySpider, logger and the rabbit_* / routing_key values are defined elsewhere in the project

setup() # for CrawlerRunner
settings = get_project_settings()

def get_message(used_channel, basic_deliver, properties, body):
    decodebody = bytes.decode(body)

    try:
        run_spider(message=decodebody)
        used_channel.basic_ack(delivery_tag=basic_deliver.delivery_tag)

    except:
        # reject so the broker can requeue the message
        used_channel.basic_reject(delivery_tag=basic_deliver.delivery_tag)


def run_spider(message):
    crawler = CrawlerRunner(settings)
    # crawl() schedules the spider on the Twisted reactor and returns immediately
    crawler.crawl(MySpider, message=message)


while(True):
    try: 
        # blocking connection
        connection = pika.BlockingConnection(pika.ConnectionParameters(host=rabbit_host))
        channel = connection.channel()
        # declare exchange, the setting must be same as producer
        channel.exchange_declare(
            exchange=rabbit_exchange,
            exchange_type='direct',  
            durable=True,            
            auto_delete=False        
        )
        # declare queue, the setting must be same as producer
        channel.queue_declare(
            queue=rabbit_queue, 
            durable=True, 
            exclusive=False,
            auto_delete=False
        )
        # bind the setting
        channel.queue_bind(
            exchange=rabbit_exchange,
            queue=rabbit_queue,
            routing_key=routing_key
        )

        channel.basic_qos(prefetch_count=1) 
        channel.basic_consume(
            queue=rabbit_queue,
            on_message_callback=get_message,
            auto_ack=False
        )

        logger.info(' [*] Waiting for messages. To exit press CTRL+C')
        # start consuming (blocks until the connection drops or CTRL+C)
        channel.start_consuming()
    
    except pika.exceptions.ConnectionClosed as err:
        print('ConnectionClosed error:', err)
        continue
    # Do not recover on channel errors
    except pika.exceptions.AMQPChannelError as err:
        print("Caught a channel error: {}, stopping...".format(err))
        break
    # Recover on all other connection errors
    except pika.exceptions.AMQPConnectionError as err:    
        print("Connection was closed, retrying...", err)
        continue



I found someone handling multithreading with the pika library who uses .is_alive to check whether the worker thread has finished, so I followed that idea. Scrapy runs the crawl in the background, so I added return crawler and check crawler._active before deleting the message.

Source code of scrapy.crawler

import time

def run_spider(news_info):
    # run spider with CrawlerRunner
    crawler = CrawlerRunner(settings)
    # run the spider script (crawl() is non-blocking)
    crawler.crawl(UrlSpider, news_info=news_info)

    return crawler


# inside the consumer callback:
crawler = run_spider(news_info=decodebody)

# wait until the crawler is done (CrawlerRunner._active is the set of crawls still running)
while len(crawler._active) > 0:
    time.sleep(1)

used_channel.basic_ack(delivery_tag=basic_deliver.delivery_tag)
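
Putting it together, the wait loop goes inside the try block of the consumer callback, so the ack only happens after the crawl has drained and an exception still triggers a reject. A sketch using the names from the question (basic_reject requeues by default):

def get_message(used_channel, basic_deliver, properties, body):
    decodebody = bytes.decode(body)

    try:
        crawler = run_spider(news_info=decodebody)
        # poll the runner's set of active crawls until it is empty
        while len(crawler._active) > 0:
            time.sleep(1)
        # ack only once the spider has actually finished
        used_channel.basic_ack(delivery_tag=basic_deliver.delivery_tag)
    except Exception:
        # reject with requeue=True (the default) so the message goes back to the queue
        used_channel.basic_reject(delivery_tag=basic_deliver.delivery_tag, requeue=True)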
