
scrapy request callback count less than request count

I want to analyze poems.

I do this in the following steps:

  1. get the poem list url
  2. get the poem detail url
  3. get the poem analysis url

But I find that the number of callbacks actually called is less than the number of requests. In my demo, 10 requests are made but the callback only fires 8 times.

Here is the log:

2016-10-26 16:15:54 [scrapy] INFO: Scrapy 1.2.0 started (bot: poem)
2016-10-26 16:15:54 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'poem.spiders', 'SPIDER_MODULES': ['poem.spiders'], 'ROBOTSTXT_OBEY': True, 'LOG_LEVEL': 'INFO', 'BOT_NAME': 'poem'}
2016-10-26 16:15:54 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-10-26 16:15:54 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-26 16:15:54 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-26 16:15:54 [scrapy] INFO: Enabled item pipelines:
['poem.pipelines.PoemPipeline']
2016-10-26 16:15:54 [scrapy] INFO: Spider opened
2016-10-26 16:15:54 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
poem list count : 10
callback count : 1
item count : 1
callback count : 2
item count : 2
callback count : 3
item count : 3
callback count : 4
item count : 4
callback count : 5
item count : 5
callback count : 6
item count : 6
callback count : 7
item count : 7
callback count : 8
item count : 8
2016-10-26 16:15:55 [scrapy] INFO: Closing spider (finished)
2016-10-26 16:15:55 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 5385,
 'downloader/request_count': 20,  // (10 * 2)
 'downloader/request_method_count/GET': 20,  
 'downloader/response_bytes': 139702,
 'downloader/response_count': 20,
 'downloader/response_status_count/200': 20,
 'dupefilter/filtered': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 10, 26, 8, 15, 55, 416028),
 'item_scraped_count': 8,
 'log_count/INFO': 7,
 'request_depth_max': 2,
 'response_received_count': 20,
 'scheduler/dequeued': 19,
 'scheduler/dequeued/memory': 19,
 'scheduler/enqueued': 19,
 'scheduler/enqueued/memory': 19,
 'start_time': datetime.datetime(2016, 10, 26, 8, 15, 54, 887101)}
2016-10-26 16:15:55 [scrapy] INFO: Spider closed (finished)

IDE: PyCharm
Debug: terminal in PyCharm
Code:

############## spider.py ##############
import scrapy
from poem.items import PoemItem


class PoemSpider(scrapy.Spider):

    name = 'poem'

    analyze_count = 0

    start_urls = ['http://so.gushiwen.org/type.aspx']

    def parse(self, response):
        # 1. get poem list url
        poems = response.xpath("//div[@class='typeleft']/div[@class='sons']")
        for poem in poems:
            # 2. get poem detail url
            poem_href = poem.xpath("./p[1]/a/@href").extract_first()
            poem_url = response.urljoin(poem_href)
            yield scrapy.Request(poem_url, callback=self.parse_poem)

    def parse_poem(self, response):
        ## 3. get analyze url
        analyze_href = response.xpath("//u[text()='%s']/parent::*/@href"%(u'賞析')).extract_first()
        analyze_url = response.urljoin(analyze_href)
        yield scrapy.Request(analyze_url, callback=self.parse_poem_analyze)

    def parse_poem_analyze(self, response):
        # print analyze callback called count
        print "#####################################"
        PoemSpider.analyze_count = PoemSpider.analyze_count + 1
        print PoemSpider.analyze_count
        poem = PoemItem()
        yield poem

############## pipelines.py ############## 
class PoemPipeline(object):

    processcount = 0

    def process_item(self, item, spider):
        # print item count
        print ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
        PoemPipeline.processcount = PoemPipeline.processcount + 1
        print PoemPipeline.processcount
        return item

Your log is missing stderr output, but you can still see what is going on by looking at the stats dump:

{'downloader/request_bytes': 5385,
 'downloader/request_count': 20,  // (10 * 2)
 'downloader/request_method_count/GET': 20,  
 'downloader/response_bytes': 139702,
 'downloader/response_count': 20,
 'downloader/response_status_count/200': 20,
 'dupefilter/filtered': 2, <---------------------------------------------
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 10, 26, 8, 15, 55, 416028),
 'item_scraped_count': 8,  <---------------------------------------------
 'log_count/INFO': 7,
 'request_depth_max': 2,
 'response_received_count': 20,
 'scheduler/dequeued': 19,
 'scheduler/dequeued/memory': 19,
 'scheduler/enqueued': 19,
 'scheduler/enqueued/memory': 19,
 'start_time': datetime.datetime(2016, 10, 26, 8, 15, 54, 887101)}

So the dupefilter middleware filters out 2 of the requests you make. This middleware ensures that every request you issue is unique (in general, that means a unique url).
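As a side note, you can see this behaviour in isolation with a minimal sketch (a hypothetical spider, not part of the question's code): by default a second Request to an already-seen URL is silently dropped and counted under dupefilter/filtered, unless you pass dont_filter=True.

import scrapy


class DupeDemoSpider(scrapy.Spider):
    # hypothetical spider, only to illustrate the default dupefilter
    name = 'dupedemo'
    start_urls = ['http://so.gushiwen.org/type.aspx']

    def parse(self, response):
        # duplicate of the start url: dropped by the dupefilter,
        # shows up in the stats as 'dupefilter/filtered'
        yield scrapy.Request(response.url, callback=self.parse_again)
        # dont_filter=True bypasses the dupefilter, so this callback
        # does get called even though the url was already crawled
        yield scrapy.Request(response.url, callback=self.parse_again,
                             dont_filter=True)

    def parse_again(self, response):
        self.logger.info('callback fired for %s', response.url)

That said, in your case the filtering is a symptom rather than something to bypass; the real fix is below.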

Your problem seems to be that an unsafe url is created here:

analyze_href = response.xpath("//u[text()='%s']/parent::*/@href"%(u'賞析')).extract_first()
analyze_url = response.urljoin(analyze_href)

Since analyze_href can be empty, you end up with analyze_url == response.url, and that request is filtered out because you have already crawled that page.
To avoid this, check whether analyze_href is empty before turning it into a url:

# requires "import logging" at the top of spider.py
analyze_href = response.xpath("//u[text()='%s']/parent::*/@href" % (u'賞析')).extract_first()
if not analyze_href:
    logging.error("failed to find analyze_href for {}".format(response.url))
    return
analyze_url = response.urljoin(analyze_href)
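Why the missing href collapses to the page's own URL: extract_first() returns None when the XPath matches nothing, and urljoin() falls back to the base URL when given an empty value. A standalone illustration, assuming a hypothetical page URL and using the stdlib urljoin that Response.urljoin is built on:

from urllib.parse import urljoin  # urlparse.urljoin on Python 2

page_url = 'http://so.gushiwen.org/view_12345.aspx'  # hypothetical poem detail page

# an empty or missing href resolves to the base URL itself,
# which the dupefilter then drops as already crawled
print(urljoin(page_url, ''))    # http://so.gushiwen.org/view_12345.aspx
print(urljoin(page_url, None))  # http://so.gushiwen.org/view_12345.aspx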

