Scrapy newbie: tutorial. Error when running scrapy crawl dmoz

I have set up the PATH variable and I think I have configured everything correctly, but when I run "scrapy crawl dmoz" in the startproject folder I get the following error message:

c:\matt\testing\dmoz>scrapy crawl dmoz
2012-04-24 18:12:56-0400 [scrapy] INFO: Scrapy 0.14.0.2841 started (bot: dmoz)
2012-04-24 18:12:56-0400 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-04-24 18:12:56-0400 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-04-24 18:12:56-0400 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-04-24 18:12:56-0400 [scrapy] DEBUG: Enabled item pipelines:
Traceback (most recent call last):
  File "c:\Python27\Scripts\scrapy", line 4, in <module>
    execute()
  File "c:\Python27\lib\site-packages\scrapy-0.14.0.2841-py2.7-win32.egg\scrapy\cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "c:\Python27\lib\site-packages\scrapy-0.14.0.2841-py2.7-win32.egg\scrapy\cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "c:\Python27\lib\site-packages\scrapy-0.14.0.2841-py2.7-win32.egg\scrapy\cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "c:\Python27\lib\site-packages\scrapy-0.14.0.2841-py2.7-win32.egg\scrapy\commands\crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "c:\Python27\lib\site-packages\scrapy-0.14.0.2841-py2.7-win32.egg\scrapy\spidermanager.py", line 43, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: dmoz'

Does anyone know what might be going on?

I had this problem too.

It's because the Scrapy tutorial asks you to put the spider you create in /dmoz/spiders/, but Scrapy is looking in tutorial/tutorial/spiders.

Save dmoz_spider.py in tutorial/tutorial/spiders and the crawl should work fine. See the sketch below for the expected layout.
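
For reference, here is a minimal sketch of the tutorial-era (Scrapy 0.14) project layout and spider; file names beyond scrapy.cfg and spiders/ are what scrapy startproject generates. The key points are that the spider file must live under tutorial/tutorial/spiders (the module listed in the project's SPIDER_MODULES setting) and that its name attribute is the string "scrapy crawl" looks up:

    tutorial/
        scrapy.cfg            # project configuration; run scrapy from this directory
        tutorial/
            __init__.py
            items.py
            pipelines.py
            settings.py       # contains SPIDER_MODULES = ['tutorial.spiders']
            spiders/
                __init__.py
                dmoz_spider.py    # the spider goes here

    # dmoz_spider.py -- minimal sketch in the old (0.14) BaseSpider style
    from scrapy.spider import BaseSpider

    class DmozSpider(BaseSpider):
        # "scrapy crawl dmoz" matches this attribute; if this file is not
        # importable from tutorial.spiders, the lookup fails with
        # KeyError: 'Spider not found: dmoz'
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        ]

        def parse(self, response):
            # save the fetched page body to a local file
            filename = response.url.split("/")[-2]
            open(filename, "wb").write(response.body)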

Try this at the command line, from any directory:

C:\Users\Akhtar Wahab> python

If that works, try:

scrapy version

If that also works, then make sure you actually created a Scrapy project:

scrapy startproject name

If all of the above works for you, then make sure you are running the crawl command in the directory where scrapy.cfg is located.
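
For example (paths are illustrative, assuming the project was created with "scrapy startproject tutorial"), Scrapy locates the project through the scrapy.cfg file, so a working session might look like this:

C:\matt\testing> cd tutorial
C:\matt\testing\tutorial> dir scrapy.cfg
C:\matt\testing\tutorial> scrapy crawl dmoz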
