
Scrapy first tutorial (dmoz) returns an error: "TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead."

I am getting an error when running the first tutorial for Scrapy.
Scrapy : 0.22.2
lxml : 3.3.5.0
libxml2 : 2.7.8
Twisted : 12.0.0
Python : 2.7.2 (default, Oct 11 2012, 20:14:37) - [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)]
Platform: Darwin-12.5.0-x86_64-i386-64bit

This is my file items.py:

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()
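As context for readers unfamiliar with Items: a Scrapy Item is essentially a dict with a fixed set of declared fields. A rough pure-Python sketch of that behavior (this is an illustration, not Scrapy's actual implementation; `SketchItem` is a made-up name):

```python
# Sketch of dict-like Item behavior: only declared fields may be set.
# This is NOT Scrapy's real Item class, just an analogy.
class SketchItem(dict):
    fields = ("title", "link", "desc")

    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError("%s is not a declared field" % key)
        dict.__setitem__(self, key, value)

item = SketchItem()
item["title"] = "Python Books"
print(item["title"])  # Python Books
```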

This is my file dmoz_spider.py:

from scrapy.spider import BaseSpider

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains= ["dmoz.org"]
    start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]       

    def parse(self, response):
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)
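For clarity, parse() derives the output filename from the second-to-last path segment of each start URL, then dumps the raw response body into that file. The filename logic can be checked in isolation:

```python
# What parse() does with each start URL: split on "/" and take the
# second-to-last segment as the filename ("Books" or "Resources").
url = "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
filename = url.split("/")[-2]
print(filename)  # Books
```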

This is the error message I get when running "scrapy crawl dmoz":

foolios-imac-2:tutorial foolio$ scrapy crawl dmoz
/usr/local/share/tutorial/tutorial/spiders/dmoz_spider.py:3: ScrapyDeprecationWarning: tutorial.spiders.dmoz_spider.DmozSpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
  class DmozSpider(BaseSpider):

2014-06-19 14:53:00-0500 [scrapy] INFO: Scrapy 0.22.2 started (bot: tutorial)
2014-06-19 14:53:00-0500 [scrapy] INFO: Optional features available: ssl, http11
2014-06-19 14:53:00-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2014-06-19 14:53:00-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):

  File "/usr/local/bin/scrapy", line 5, in <module>
    pkg_resources.run_script('Scrapy==0.22.2', 'scrapy')
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 489, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1207, in run_script
    execfile(script_filename, namespace, namespace)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
    execute()
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/commands/crawl.py", line 50, in run
    self.crawler_process.start()
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 92, in start
    if self.start_crawling():
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 47, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/engine.py", line 63, in __init__
    self.downloader = Downloader(crawler)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/__init__.py", line 73, in __init__
    self.handlers = DownloadHandlers(crawler)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/__init__.py", line 18, in __init__
    cls = load_object(clspath)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/utils/misc.py", line 40, in load_object
    mod = import_module(module)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/s3.py", line 4, in <module>
    from .http import HTTPDownloadHandler
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/http.py", line 5, in <module>
    from .http11 import HTTP11DownloadHandler as HTTPDownloadHandler
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/http11.py", line 15, in <module>
    from scrapy.xlib.tx import Agent, ProxyAgent, ResponseDone, \
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/xlib/tx/__init__.py", line 6, in <module>
    from . import client, endpoints
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/xlib/tx/client.py", line 37, in <module>
    from .endpoints import TCP4ClientEndpoint, SSL4ClientEndpoint
  File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/xlib/tx/endpoints.py", line 222, in <module>
    interfaces.IProcessTransport, '_process')):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/zope/interface/declarations.py", line 495, in __call__
    raise TypeError("Can't use implementer with classes. Use one of "
TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead.

Try updating zope.interface and then re-run your code:

sudo pip install --upgrade zope.interface

or

sudo easy_install --upgrade zope.interface
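To confirm which zope.interface version Python actually picks up after the upgrade, a quick check like the following can help (this is a generic version probe, not something from the Scrapy docs):

```python
import importlib

# Probe for zope.interface and report its version, if importable.
try:
    zi = importlib.import_module("zope.interface")
    msg = "zope.interface " + str(getattr(zi, "__version__", "unknown"))
except ImportError:
    msg = "zope.interface is not installed"
print(msg)
```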
