
Scrapy Crawler in Python cannot follow links?

I wrote a crawler in Python using the Scrapy framework. The following is the code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
#from scrapy.item import Item
from a11ypi.items import AYpiItem

class AYpiSpider(CrawlSpider):
        name = "AYpi"
        allowed_domains = ["a11y.in"]
        start_urls = ["http://a11y.in/a11ypi/idea/firesafety.html"]

        rules =(
                Rule(SgmlLinkExtractor(allow = ()) ,callback = 'parse_item')
                )

        def parse_item(self,response):
                #filename = response.url.split("/")[-1]
                #open(filename,'wb').write(response.body)
                #testing codes ^ (the above)

                hxs = HtmlXPathSelector(response)
                item = AYpiItem()
                item["foruri"] = hxs.select("//@foruri").extract()
                item["thisurl"] = response.url
                item["thisid"] = hxs.select("//@foruri/../@id").extract()
                item["rec"] = hxs.select("//@foruri/../@rec").extract()
                return item

But instead of following the links, the error thrown is:

Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/commands/crawl.py", line 45, in run
    q.append_spider_name(name, **opts.spargs)
--- <exception caught here> ---
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/queue.py", line 89, in append_spider_name
    spider = self._spiders.create(name, **spider_kwargs)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/spidermanager.py", line 36, in create
    return self._spiders[spider_name](**spider_kwargs)
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/contrib/spiders/crawl.py", line 38, in __init__
    self._compile_rules()
  File "/usr/lib/python2.6/site-packages/Scrapy-0.12.0.2538-py2.6.egg/scrapy/contrib/spiders/crawl.py", line 82, in _compile_rules
    self._rules = [copy.copy(r) for r in self.rules]
exceptions.TypeError: 'Rule' object is not iterable

Can someone please explain what's going on? Since this is what the documentation describes, and I left the allow field blank, that by itself should make follow default to True. So why the error? And what kind of optimisations can I make to my crawler to make it fast?

From what I see, it looks like your rules attribute is not an iterable. It looks like you were trying to make rules a tuple; you should read up on tuples in the Python documentation.
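
You can reproduce the failure without Scrapy at all. Here is a minimal sketch (using a stand-in Rule class, not Scrapy's) that mirrors the `self._rules = [copy.copy(r) for r in self.rules]` loop from `_compile_rules` in your traceback:

    import copy

    class Rule(object):
        """Stand-in for Scrapy's Rule; any non-iterable object behaves the same."""
        pass

    rules = (Rule())  # the parentheses are just grouping; rules is one Rule object

    try:
        compiled = [copy.copy(r) for r in rules]  # same loop as _compile_rules
    except TypeError as e:
        print(e)  # prints: 'Rule' object is not iterable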

To fix your problem, change this:

    rules =(
            Rule(SgmlLinkExtractor(allow = ()) ,callback = 'parse_item')
            )

to:

    rules =(Rule(SgmlLinkExtractor(allow = ()) ,callback = 'parse_item'),)

Notice the comma at the end?
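
As a quick sanity check in the interpreter (Python 2 here, matching the traceback above): the parentheses alone are just grouping, while the trailing comma is what actually creates a one-element tuple:

    >>> type(("hello"))
    <type 'str'>
    >>> type(("hello",))
    <type 'tuple'>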
