Scrapy: Python cannot find the spider

I'm trying to follow the Scrapy tutorial, but I'm stuck at one of the first steps. I think I have correctly created the spider:

from scrapy.spider import BaseSpider  # import path used by Scrapy 0.16.x

class dmoz(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    # parse() must be indented as a method of the class; at module
    # level Scrapy would never call it.
    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

I saved the file (as dmoz_spider.py) from the IDLE shell, typing the .py extension myself, into a folder that corresponds to the terminal window's working directory.

However, when I type scrapy crawl dmoz I get this:

2013-08-09 19:18:06+0200 [scrapy] INFO: Scrapy 0.16.5 started (bot: dmoz)
2013-08-09 19:18:07+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-08-09 19:18:08+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-08-09 19:18:08+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-08-09 19:18:08+0200 [scrapy] DEBUG: Enabled item pipelines: 
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin/scrapy", line 5, in <module>
    pkg_resources.run_script('Scrapy==0.16.5', 'scrapy')
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 499, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 1235, in run_script
    execfile(script_filename, namespace, namespace)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
    execute()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 76, in _run_print_help
    func(*a, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 43, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: dmoz'

I cannot understand what is wrong, but given that I'm quite new to programming, it might be something very simple.

You need to be in the directory that contains scrapy.cfg:

stav@maia:/srv/scrapy/tutorial$ ls
scrapy.cfg  tutorial/
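For reference, the scrapy.cfg that scrapy startproject generates is only a few lines; here is a minimal sketch (exact contents can vary by Scrapy version). The crawl command uses this file to locate the project and its settings module:

# scrapy.cfg
[settings]
default = tutorial.settings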

Here is a tree listing of the files in the project on my system:

stav@maia:/srv/scrapy/tutorial$ tree
.
├── scrapy.cfg
└── tutorial
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        ├── dmoz_spider.py
        └── __init__.py

2 directories, 7 files
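A quick way to verify that Scrapy can see the spider at all is the scrapy list command, which prints the name of every spider the project exposes. Run from inside the project, it should show something like this (the output assumes the layout above):

stav@maia:/srv/scrapy/tutorial$ scrapy list
dmoz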

You should show us the entire command line that you used to execute the command, including the working directory:

stav@maia:/srv/scrapy/tutorial$ scrapy crawl dmoz
2013-08-11 11:00:23-0500 [scrapy] INFO: Scrapy 0.17.0 started (bot: tutorial)
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Optional features available: ssl, django, http11, boto, libxml2
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'USER_AGENT': 'tutorial/1.0', 'BOT_NAME': 'tutorial'}
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Enabled item pipelines:
2013-08-11 11:00:23-0500 [dmoz] INFO: Spider opened
2013-08-11 11:00:23-0500 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-08-11 11:00:23-0500 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-08-11 11:00:24-0500 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2013-08-11 11:00:24-0500 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2013-08-11 11:00:24-0500 [dmoz] INFO: Closing spider (finished)
2013-08-11 11:00:24-0500 [dmoz] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 486,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 12980,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 8, 11, 16, 0, 24, 101947),
     'log_count/DEBUG': 10,
     'log_count/INFO': 4,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2013, 8, 11, 16, 0, 23, 408890)}
2013-08-11 11:00:24-0500 [dmoz] INFO: Spider closed (finished)

If the above solutions do not work, then:

Open settings.py in the tutorial folder and make the following change:

BOT_NAME = 'dmoz'

That is, change BOT_NAME from 'tutorial' to the name you have defined explicitly in your dmoz_spider.py file (here, 'dmoz').
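For context, here is roughly what the relevant lines in the generated tutorial/settings.py look like after that change (a sketch assuming the project layout shown above). Note that SPIDER_MODULES is the setting that tells Scrapy which packages to scan for spider classes, so it must point at the package containing dmoz_spider.py:

# tutorial/settings.py -- relevant lines only
BOT_NAME = 'dmoz'                        # changed from 'tutorial' as suggested above

SPIDER_MODULES = ['tutorial.spiders']    # packages Scrapy scans for spiders
NEWSPIDER_MODULE = 'tutorial.spiders'    # where new spiders are created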

Are you running in a virtualenv? If so, please run pip freeze and show us whether all the Scrapy dependencies are installed.
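For example (the package names below are Scrapy 0.16's usual dependencies; the pinned versions are purely illustrative):

$ pip freeze | grep -i -E 'scrapy|twisted|lxml|w3lib'
Scrapy==0.16.5
Twisted==13.1.0
lxml==3.2.3
w3lib==1.2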

The code is fine: I just copy-pasted your code and ran it with no problem. Also, you should be able to run the spider from anywhere inside your Scrapy project folders.

Please make sure dmoz_spider.py is in the 'spiders' subdirectory:

mv dmoz_spider.py spiders/
