Import error when running scrapy tutorial example (scrapy crawl dmoz / scrapy.core.downloader.handlers.s3.S3DownloadHandler)

I am running the example from the Scrapy tutorial. I am running Python 2.7.8. I used pip to download Scrapy and other required packages. I believe I followed the tutorial properly, but I am unable to run the spider. I have read previous posts from others who had the same issue, but I still have not been able to fix it.

I appreciate any help.

C:\tutorial>scrapy crawl dmoz
2014-10-22 02:14:56-0400 [scrapy] INFO: Scrapy 0.24.4 started (bot: tutorial)
2014-10-22 02:14:56-0400 [scrapy] INFO: Optional features available: ssl, http11
2014-10-22 02:14:56-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2014-10-22 02:14:58-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
  File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27\Scripts\scrapy.exe\__main__.py", line 9, in <module>
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "C:\Python27\lib\site-packages\scrapy\commands\crawl.py", line 60, in run
    self.crawler_process.start()
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 92, in start
    if self.start_crawling():
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 47, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 64, in __init__
    self.downloader = downloader_cls(crawler)
  File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 73, in __init__
    self.handlers = DownloadHandlers(crawler)
  File "C:\Python27\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 22, in __init__
    cls = load_object(clspath)
  File "C:\Python27\lib\site-packages\scrapy\utils\misc.py", line 42, in load_object
    raise ImportError("Error loading object '%s': %s" % (path, e))
ImportError: Error loading object 'scrapy.core.downloader.handlers.s3.S3DownloadHandler': No module named win32api
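The last line of the traceback is the real clue: Python cannot import `win32api` (provided by the pywin32 package on Windows). As a quick diagnostic, you can check whether a module is importable before blaming Scrapy itself — a small sketch (the helper name `module_available` is just for illustration):

```python
import importlib

def module_available(name):
    """Return True if the named module can be imported, False otherwise."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# On the machine from the traceback, this check for win32api would fail
# until the package that provides it is installed.
print(module_available("win32api"))
```

If this prints False, the problem is the missing Windows dependency, not your spider code.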

As per the Scrapy documentation, you were supposed to install OpenSSL before installing Scrapy, using the following steps:

install OpenSSL by following these steps:
1. go to Win32 OpenSSL page
2. download Visual C++ 2008 redistributables for your Windows and architecture
3. download OpenSSL for your Windows and architecture (the regular version, not the light one)
4. add the c:\openssl-win32\bin (or similar) directory to your PATH, the same way you added python27 in the first step
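Step 4 above boils down to the OpenSSL bin directory appearing in your PATH environment variable. A small check like the following (a sketch — `c:\openssl-win32\bin` is just the example directory from the steps, and `dir_on_path` is a name chosen here) can confirm the change took effect in a new console:

```python
import os

def dir_on_path(directory):
    """Check whether a directory appears in the PATH environment variable."""
    entries = os.environ.get("PATH", "").split(os.pathsep)
    # Normalize case and separators so the comparison is robust on Windows
    target = os.path.normcase(os.path.normpath(directory))
    return any(os.path.normcase(os.path.normpath(e)) == target
               for e in entries if e)

print(dir_on_path(r"c:\openssl-win32\bin"))
```

Note that PATH changes only apply to consoles opened after the edit, so run this in a fresh command prompt.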

See the platform-specific installation steps here: Scrapy for Windows.
