
XPath expression not working in Scrapy spider but it is in Scrapy shell

I'm working on a small project and I've run into a problem with my XPath.

The XPath works in the Scrapy shell (and its equivalent works in the browser's JavaScript console), but when I put it in my spider.py file it doesn't work. Do I need to change the XPath in some way in the spider.py file?

I run the Scrapy shell as follows:

scrapy shell -s USER_AGENT='Safari/537.36' 'https://www.gumtree.com/search?q=iphone+6'

response.xpath('//div[@class="listing-content"]//meta[@itemprop="price"]/@content').extract()

This correctly returns the prices. However, when I put it in the spider.py file it returns nothing. spider.py is as follows:

import scrapy

from phone_scraper.items import PhoneScraperItem


class PhoneSpider(scrapy.Spider):

    """Docstring for PhoneSpider. """

    name = "phone"
    allowed_domains = ["gumtree.com"]
    start_urls = [
        "https://www.gumtree.com/search?q=iphone+6"
    ]

    def parse(self, response):
        item = PhoneScraperItem()
        item['price'] = response.xpath('//div[@class="listing-content"]//meta[@itemprop="price"]/@content').extract()
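
(For reference, PhoneScraperItem comes from phone_scraper/items.py, which isn't shown in the question. A minimal sketch of that file, assuming only the price field used above, might look like this.)

# phone_scraper/items.py -- hypothetical minimal sketch; the real file may define more fields
import scrapy


class PhoneScraperItem(scrapy.Item):
    price = scrapy.Field()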

I then run it from the terminal with the following command:

scrapy crawl phone -o items.json

and get this output in the console:

2016-08-16 23:44:35 [scrapy] INFO: Scrapy 1.1.1 started (bot: phone_scraper)
2016-08-16 23:44:35 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'phone_scraper.spiders', 'FEED_URI': 'items.json', 'SPIDER_MODULES': ['phone_scraper.spiders'], 'BOT_NAME': 'phone_scraper', 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36', 'FEED_FORMAT': 'json'}
2016-08-16 23:44:35 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-08-16 23:44:35 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-08-16 23:44:35 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-08-16 23:44:35 [scrapy] INFO: Enabled item pipelines:
[]
2016-08-16 23:44:35 [scrapy] INFO: Spider opened
2016-08-16 23:44:35 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-08-16 23:44:35 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-08-16 23:44:35 [scrapy] DEBUG: Crawled (200) <GET https://www.gumtree.com/search?q=iphone+6> (referer: None)
2016-08-16 23:44:35 [scrapy] INFO: Closing spider (finished)
2016-08-16 23:44:35 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 305,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 51936,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 8, 16, 22, 44, 35, 887734),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 8, 16, 22, 44, 35, 407480)}
2016-08-16 23:44:35 [scrapy] INFO: Spider closed (finished)

EDIT: Follow-up question

I have a follow-up question about getting the information above using a for loop. I hope this is the right way to ask it.

I've changed the parse function from the version above to the following, and now it doesn't return anything any more.

def parse(self, response):
    for sel in response.xpath('//div[@class="listing-content"]'):
        item = PhoneScraperItem()
        item['price'] = sel.xpath('meta[@item-prop="price"]/@content').extract()
        yield item

Am I joining the XPath expressions incorrectly?

You forgot to return the item from the parse() callback:

def parse(self, response):
    item = PhoneScraperItem()
    item['price'] = response.xpath('//div[@class="listing-content"]//meta[@itemprop="price"]/@content').extract()
    return item
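
For the follow-up loop version, two small changes to the relative XPath should make it match again (a sketch, assuming the same markup that the original absolute XPath matched): the attribute is itemprop, not item-prop, and .// is needed so the meta element is found anywhere inside the selected div rather than only as a direct child.

def parse(self, response):
    for sel in response.xpath('//div[@class="listing-content"]'):
        item = PhoneScraperItem()
        # relative to the div: ".//" searches all descendants, and the attribute name is "itemprop"
        item['price'] = sel.xpath('.//meta[@itemprop="price"]/@content').extract()
        yield item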
