
Running multiple spiders in Scrapy - spider not found

As the title suggests, I am trying to run multiple spiders in Scrapy. One spider, news_spider, works with the command

scrapy crawl news_spider -o news.json

and it produces exactly the results I expect.

However, when I try to run the quotes spider with the following command

scrapy crawl quotes_spider -o quotes.json

I get the following message: "Spider not found: quotes_spider".

For some history: I created quotes_spider first and it was working. I then copied it to news_spider and edited it, at which point I moved quotes_spider out of the spiders directory. Now that I have news_spider working, I moved quotes_spider back into the spiders directory and got the error message above.

The directory tree looks like this:

tutorial
├── news.json
├── scrapy.cfg
└── tutorial
    ├── __init__.py
    ├── __pycache__
    │   ├── __init__.cpython-37.pyc
    │   ├── items.cpython-37.pyc
    │   └── settings.cpython-37.pyc
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── quotes.jl
    ├── quotes.json
    ├── settings.py
    └── spiders
        ├── __init__.py
        ├── __pycache__
        │   ├── __init__.cpython-37.pyc
        │   ├── news_spider.cpython-37.pyc
        │   └── quotes_spider.cpython-37.pyc
        ├── news_spider.py
        └── quotes_spider.py

News spider:

import scrapy
from scrapy.exporters import JsonLinesItemExporter
from tutorial.items import TutorialItem

# Scrapy Spider
class FinNewsSpider(scrapy.Spider):
    # Initializing log file
    # logfile("news_spider.log", maxBytes=1e6, backupCount=3)
    name = "news_spider"
    allowed_domains = ['benzinga.com']  # domains only; a trailing slash or path triggers an offsite-filter warning
    start_urls = [
        'https://www.benzinga.com/top-stories/20/09/17554548/stock-wars-ford-vs-general-motors-vs-tesla'
    ]

# MY SCRAPY STUFF
# response.xpath('//div[@class="article-content-body-only"]/p/text()').extract()
    def parse(self, response):
        paragraphs = response.xpath('//div[@class="article-content-body-only"]/p/text()').extract()
        print(paragraphs)
        for p in paragraphs:
            yield TutorialItem(content=p)

Quotes spider:

import scrapy
from scrapy.exporters import JsonLinesItemExporter

class QuotesSpider(scrapy.Spider):
    name = "quotes"

#### Actually don't have to use the start_requests function since it's built in. Can just use start_urls
    # def start_requests(self):
    #     urls = [
    #         'http://quotes.toscrape.com/page/1/',
    #         'http://quotes.toscrape.com/page/2/'
    #     ]
    #     for url in urls:
    #         yield scrapy.Request(url=url, callback=self.parse)
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/'
    ]

#### Original parse to just get the entire page
    # def parse(self, response):
    #     page = response.url.split("/")[-2]
    #     filename = 'quotes-%s.html' % page
    #     with open(filename, 'wb') as f:
    #         f.write(response.body)
    #     self.log('Saved file %s' % filename)

#### Parse to actually gather targeted info
    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                'text': quote.css("span.text::text").get(),
                'author': quote.css("small.author::text").get(),
                'tags': quote.css("div.tags a.tag::text").getall()
            }

        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

I have searched SO, and the answers I found about multiple spiders all seem to deal with running several spiders simultaneously, which is not what I am trying to do, so I have not found an answer to why one spider works and the other does not. Can anyone spot a mistake in my code that I may have overlooked?

The problem is how you are executing it. The name you gave the spider is "quotes", not "quotes_spider":

class QuotesSpider(scrapy.Spider):
    name = "quotes"

So the command to run it is:

scrapy crawl quotes -o quotes.json

Just as your news spider is named "news_spider":

class FinNewsSpider(scrapy.Spider):
    # Initializing log file
    # logfile("news_spider.log", maxBytes=1e6, backupCount=3)
    name = "news_spider"

and you execute it with:

scrapy crawl news_spider -o news.json
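
As background, scrapy crawl resolves the spider through Scrapy's SpiderLoader, which indexes spider classes by their name attribute, never by file name or class name. Running scrapy list from the project root prints every registered name, which makes mismatches like this easy to spot. Here is a minimal sketch of the same lookup in Python, assuming it is run from the project root:

from scrapy.spiderloader import SpiderLoader
from scrapy.utils.project import get_project_settings

# Build the same loader that scrapy crawl uses: it scans the modules
# listed in SPIDER_MODULES and indexes each spider class by its name
# attribute.
loader = SpiderLoader.from_settings(get_project_settings())

# For the two spiders above this should print ['news_spider', 'quotes'].
print(loader.list())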
