
Scrapy close spider if no urls to crawl

I have a spider that fetches its URLs from a Redis list.

When there are no more URLs, I want to shut the spider down gracefully. I tried raising the CloseSpider exception, but it seems execution never gets that far:

def start_requests(self):
    while True:
        item = json.loads(self.__pop_queue())
        if not item:
            raise CloseSpider("Closing spider because no more urls to crawl")
        try:
            yield scrapy.http.Request(item['product_url'], meta={'item': item})
        except ValueError:
            continue

Even though I raise the CloseSpider exception, I still get the following error:

root@355e42916706:/scrapper# scrapy crawl general -a country=my -a log=file
2017-07-17 12:05:13 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 127, in _next_request
    request = next(slot.start_requests)
  File "/scrapper/scrapper/spiders/GeneralSpider.py", line 20, in start_requests
    item = json.loads(self.__pop_queue())
  File "/usr/local/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer

I also tried catching the TypeError in the same function, but that didn't work either.

Is there a recommended way to handle this?

Thanks

You need to check that self.__pop_queue() actually returns something before passing it to json.loads() (or catch the TypeError when calling it), for example:

def start_requests(self):
    while True:
        item = self.__pop_queue()
        if not item:
            raise CloseSpider("Closing spider because no more urls to crawl")
        try:
            item = json.loads(item)
            yield scrapy.http.Request(item['product_url'], meta={'item': item})
        except (ValueError, TypeError):  # just in case the 'item' is not a string or buffer
            continue
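
As a side note, the __pop_queue() helper isn't shown in the question; a minimal sketch of what it might look like inside the spider class, assuming the queue is a plain Redis list read with redis-py (the connection details, key name and attribute names below are guesses, not from the original post):

import redis

def __init__(self, *args, **kwargs):
    super(GeneralSpider, self).__init__(*args, **kwargs)
    # Assumed setup: a redis-py client and the list key holding the queued JSON items.
    self.redis = redis.StrictRedis(host='localhost', port=6379, db=0)
    self.redis_key = 'general:start_urls'

def __pop_queue(self):
    # LPOP returns the next queued JSON string, or None once the list is empty,
    # which is what the `if not item` check above relies on.
    return self.redis.lpop(self.redis_key)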

I ran into the same problem and found a little trick. When the spider is idle (i.e. doing nothing), I check whether there is anything left in the Redis queue. If not, I close the spider with close_spider. The following code lives in the spider class:

@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    from_crawler = super(SerpSpider, cls).from_crawler
    spider = from_crawler(crawler, *args, **kwargs)
    # Call self.idle each time the engine runs out of requests to process.
    crawler.signals.connect(spider.idle, signal=scrapy.signals.spider_idle)
    return spider


def idle(self):
    # If nothing is left in the Redis list, shut the spider down cleanly.
    if self.q.llen(self.redis_key) <= 0:
        self.crawler.engine.close_spider(self, reason='finished')
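
For this handler to work, self.q (a redis-py client) and self.redis_key (the name of the Redis list that feeds the spider) have to be set up somewhere, e.g. in __init__; a minimal sketch with illustrative connection details:

import redis

def __init__(self, *args, **kwargs):
    super(SerpSpider, self).__init__(*args, **kwargs)
    # Assumed attributes used by idle(): a redis-py client and the queue's list key.
    self.q = redis.StrictRedis(host='localhost', port=6379, db=0)
    self.redis_key = 'serp:start_urls'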
