
Why does my scrapy spider not scrape anything?

I don't know where the issue lies; it is probably super easy to fix since I am new to Scrapy. Thanks for your help!

My Spider:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.linkextractors import LinkExtractor
from scrapy.item import Item

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
      Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        for sel in response.xpath('//div/article'):
            item = scrapy.Item()
            item ['title'] = sel.xpath('a/text()').extract()
            item ['link'] = sel.xpath('a/@href').extract()
            item ['desc'] = sel.xpath('text()').extract()
            return item

Items:

import scrapy

class EconomistItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

Part of the log:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Crawled (200) <GET http://www.economist.com/sections/science-technology> (referer: None)

Edit:

After I added the changes proposed by alecxe, another problem occurred:

Log:

[scrapy] DEBUG: Crawled (200) <GET http://www.economist.com/news/science-and-technology/21688848-stem-cells-are-starting-prove-their-value-medical-treatments-curing-multiple> (referer: http://www.economist.com/sections/science-technology)
2016-02-04 14:05:01 [scrapy] DEBUG: Crawled (200) <GET http://www.economist.com/news/science-and-technology/21689501-beating-go-champion-machine-learning-computer-says-go> (referer: http://www.economist.com/sections/science-technology)
2016-02-04 14:05:02 [scrapy] ERROR: Spider error processing <GET http://www.economist.com/news/science-and-technology/21688848-stem-cells-are-starting-prove-their-value-medical-treatments-curing-multiple> (referer: http://www.economist.com/sections/science-technology)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiders/crawl.py", line 67, in _parse_response
    cb_res = callback(response, **cb_kwargs) or ()
  File "/Users/FvH/Desktop/Python/projects/economist/economist/spiders/article.py", line 18, in parse_item
    item = scrapy.Item()
NameError: global name 'scrapy' is not defined

Settings:

BOT_NAME = 'economist'

SPIDER_MODULES = ['economist.spiders']
NEWSPIDER_MODULE = 'economist.spiders'
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36"

And if I try to export the data into a CSV file, it is obviously just empty.

Thanks

parse_item is not correctly indented; it should be:

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
      Rule(LinkExtractor(allow=r'Items'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        for sel in response.xpath('//div/article'):
            item = scrapy.Item()
            item ['title'] = sel.xpath('a/text()').extract()
            item ['link'] = sel.xpath('a/@href').extract()
            item ['desc'] = sel.xpath('text()').extract()
            return item

Two things to fix aside from that:

  • the link extracting part should be fixed to match the article links:

     Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True), 
  • you need to specify the USER_AGENT setting to pretend to be a real browser. Otherwise, the response would not contain the list of articles (a combined sketch of the corrected spider follows after this list):

     USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36" 

You imported only Item (not the whole scrapy module):

from scrapy.item import Item

So instead of using scrapy.Item here:

for sel in response.xpath('//div/article'):
        item = scrapy.Item()
        item ['title'] = sel.xpath('a/text()').extract()

You should use just Item:

for sel in response.xpath('//div/article'):
        item = Item()
        item ['title'] = sel.xpath('a/text()').extract()

Or import your own item class and use it. This should work (don't forget to replace project_name with the name of your project):

from project_name.items import EconomistItem
...
for sel in response.xpath('//div/article'):
        item = EconomistItem()
        item ['title'] = sel.xpath('a/text()').extract()
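
One caveat about the plain Item option: a bare Item() declares no fields, so assigning item['title'] raises KeyError: 'Item does not support field: title'. Only the EconomistItem variant, which declares the fields, actually stores the values. A quick sketch to illustrate:

import scrapy

class EconomistItem(scrapy.Item):
    title = scrapy.Field()

plain = scrapy.Item()
try:
    plain['title'] = 'Stem cells'   # fails: Item declares no 'title' field
except KeyError as exc:
    print(exc)                      # 'Item does not support field: title'

typed = EconomistItem()
typed['title'] = 'Stem cells'       # works: 'title' is a declared Field
print(typed)                        # {'title': 'Stem cells'}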
