
Python scrapy spider not found KeyError

For a few days now I have been trying to create a crawler in Scrapy, and every project gives the same error: spider not found. No matter what changes I make or which tutorial I follow, it always returns the same error.

Can someone suggest where I should be looking for the mistake?

Thanks!

Windows 10,python 2.7

C:.
│   scrapy.cfg
│
└───scrapscrapy
    │   items.py
    │   middlewares.py
    │   pipelines.py
    │   settings.py
    │   settings.pyc
    │   __init__.py
    │   __init__.pyc
    │
    └───spiders
            SSSpider.py
            SSSpider.pyc

items.py

from scrapy.item import Item, Field

class ScrapscrapyItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    Heading = Field()
    Content = Field()
    Source_Website = Field()

SSSpider.py

from scrapy.selector import Selector
from scrapy.spider import Spider
from Scrapscrapy.items import ScrapscrapyItem

class ScrapscrapySpider(Spider):
    name = "ss"
    allowed_domains = ["yellowpages.md/rom/companies/info/2683-intelsmdv-srl"]
    start_url = ['http://yellowpages.md/rom/companies/info/2683-intelsmdv-srl/']

    def parse(self, response):
        sel = Selector(response)
        item = ScrapscrapyItem()
        item['Heading'] = sel.xpath('/html/body/div[2]/div[2]/div/div/div/div/div[1]/div/div[2]/div/article/div/div[1]/div[2]/h2').extract
        item['Content'] = sel.xpath('/html/body/div[2]/div[2]/div/div/div/div/div[1]/div/div[2]/div/article/div/div[1]/div[2]/div[2]/div/div[2]/div/div[1]/div[1]').extract
        item['Source_Website'] = 'yellowpages.md/rom/companies/info/2683-intelsmdv-srl'
        return item

settings.py

BOT_NAME = 'scrapscrapy'

SPIDER_MODULES = ['scrapscrapy.spiders']
NEWSPIDER_MODULE = 'scrapscrapy.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'scrapscrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

Command line:

C:\Users\nastea\Desktop\scrapscrapy>scrapy crawl ss
c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py:37: RuntimeWarning:
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 31, in _load_all_spiders
    for module in walk_modules(name):
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\utils\misc.py", line 63, in walk_modules
    mod = import_module(path)
  File "c:\python27\lib\importlib\__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named spiders
Could not load spiders from module 'scrapscrapy.spiders'. Check SPIDER_MODULES setting
  warnings.warn(msg, RuntimeWarning)
2017-02-19 14:21:16 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapscrapy)
2017-02-19 14:21:16 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapscrapy.spiders', 'SPIDER_MODULES': ['scrapscrapy.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapscrapy'}
Traceback (most recent call last):
  File "c:\python27\Scripts\scrapy-script.py", line 11, in <module>
    load_entry_point('scrapy==1.3.2', 'console_scripts', 'scrapy')()
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 51, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: ss'

EDIT

As eLRuLL suggested, I added an `__init__.py` file to the spiders folder, and I also changed `scrapy.spider` to `scrapy.spiders`, since it told me the old name had been removed. Now the cmd output is:

C:\Users\nastea\Desktop\scrapscrapy>scrapy crawl ss
c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py:37: RuntimeWarning:
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 31, in _load_all_spiders
    for module in walk_modules(name):
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\utils\misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "c:\python27\lib\importlib\__init__.py", line 37, in import_module
    __import__(name)
  File "C:\Users\nastea\Desktop\scrapscrapy\scrapscrapy\spiders\SSSpider.py", line 3, in <module>
    from Scrapscrapy.items import ScrapscrapyItem
ImportError: No module named Scrapscrapy.items
Could not load spiders from module 'scrapscrapy.spiders'. Check SPIDER_MODULES setting
  warnings.warn(msg, RuntimeWarning)
2017-02-19 15:13:36 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapscrapy)
2017-02-19 15:13:36 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapscrapy.spiders', 'SPIDER_MODULES': ['scrapscrapy.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapscrapy'}
Traceback (most recent call last):
  File "c:\python27\Scripts\scrapy-script.py", line 11, in <module>
    load_entry_point('scrapy==1.3.2', 'console_scripts', 'scrapy')()
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "c:\python27\lib\site-packages\scrapy-1.3.2-py2.7.egg\scrapy\spiderloader.py", line 51, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: ss'

It looks like something is wrong with the `__init__.py` file in the spiders folder.

I tried adding it myself (left empty):

───spiders
        __init__.py
        SSSpider.py
        SSSpider.pyc
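Since the edit above turns on whether `spiders/__init__.py` is being picked up, here is a minimal stdlib-only sketch (the file and variable names are illustrative stand-ins, not the real project files) showing that on Python 2 a directory is only importable as a package when every level of it contains an `__init__.py` — the same import machinery Scrapy's `walk_modules` relies on when it loads `SPIDER_MODULES`:

```python
import importlib
import os
import sys
import tempfile

# Recreate a throwaway copy of the project layout in a temp directory.
root = tempfile.mkdtemp()
spiders = os.path.join(root, "scrapscrapy", "spiders")
os.makedirs(spiders)

# An empty __init__.py marks each directory as an importable package;
# without these files, Python 2 raises "ImportError: No module named spiders".
open(os.path.join(root, "scrapscrapy", "__init__.py"), "w").close()
open(os.path.join(spiders, "__init__.py"), "w").close()

# A stand-in spider module (contents are illustrative only).
with open(os.path.join(spiders, "sspider.py"), "w") as f:
    f.write("NAME = 'ss'\n")

# With both __init__.py files in place, the dotted import now resolves.
sys.path.insert(0, root)
mod = importlib.import_module("scrapscrapy.spiders.sspider")
print(mod.NAME)  # ss
```

Note also that Python's import system matches module names case-sensitively, which is why the second traceback rejects `from Scrapscrapy.items import ScrapscrapyItem` when the package on disk is the lowercase `scrapscrapy`.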
