Saving scrapy results into csv file
I'm having some problems with the web crawler I wrote. I want to save the data it fetches. If I understood the Scrapy tutorial correctly, I only need to yield it and then start the crawler with scrapy crawl <crawler> -o file.csv -t csv, right? For some reason the file stays empty. Here is my code:
# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class PaginebiancheSpider(CrawlSpider):
    name = 'paginebianche'
    allowed_domains = ['paginebianche.it']
    start_urls = ['https://www.paginebianche.it/aziende-clienti/lombardia/milano/comuni.htm']

    rules = (
        Rule(LinkExtractor(allow=(), restrict_css=('.seo-list-name', '.seo-list-name-up')),
             callback="parse_item",
             follow=True),
    )

    def parse_item(self, response):
        if(response.xpath("//h2[@class='rgs']//strong//text()") != [] and response.xpath("//span[@class='value'][@itemprop='telephone']//text()") != []):
            yield ' '.join(response.xpath("//h2[@class='rgs']//strong//text()").extract()) + " " + response.xpath("//span[@class='value'][@itemprop='telephone']//text()").extract()[0].strip(),
I'm using Python 2.7.
If you look at your spider's output, you will see a bunch of log records with error messages like this one:
2018-10-20 13:47:52 [scrapy.core.scraper] ERROR: Spider must return Request, BaseItem, dict or None, got 'tuple' in <GET https://www.paginebianche.it/lombardia/abbiategrasso/vivai-padovani.html>
This means you aren't yielding the right thing: you need a dict or an Item, not the single-element tuple you are creating.
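To see where that tuple comes from, here is a minimal, Scrapy-free sketch. The name and phone values are made up for illustration; the point is that the trailing comma in the question's yield statement turns the string into a one-element tuple, which is exactly what the error message complains about:

```python
# Demonstration of why the original parse_item fails: the trailing
# comma turns the yielded string into a one-element tuple, which
# Scrapy's scraper rejects (it accepts Request, BaseItem, dict, None).

def parse_item_buggy():
    name = "ACME S.r.l."     # hypothetical scraped values
    phone = "02 1234567"
    # Note the trailing comma, copied from the question's code:
    yield ' '.join([name]) + " " + phone.strip(),

def parse_item_fixed():
    name = "ACME S.r.l."
    phone = "02 1234567"
    # Yield a dict instead, which the CSV exporter can serialize:
    yield {'name': name, 'phone': phone.strip()}

buggy = next(parse_item_buggy())
fixed = next(parse_item_fixed())
print(type(buggy).__name__)  # tuple -> triggers the ERROR log line
print(type(fixed).__name__)  # dict  -> exported as a CSV row
```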
Something as simple as this should work:
yield {
    'name': response.xpath("normalize-space(//h2[@class='rgs'])").get(),
    'phone': response.xpath("//span[@itemprop='telephone']/text()").get()
}