
Is there any way to change a Scrapy spider's name by script?

I built a scrapy-redis crawler and decided to turn it into a distributed one. On top of that, I want it to be task-based: my plan is to set the spider's name to the task's name and use that name to tell the tasks apart. That brings me to my problem: how can I change a spider's name at runtime, while the crawler is driven from a web management interface?

Here is my code; it is still rough:

#-*- encoding: utf-8 -*-
import redis
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy_redis.spiders import RedisSpider
import pymongo
client = pymongo.MongoClient('mongodb://localhost:27017')
db_name = 'news'
db = client[db_name]

class NewsSpider(RedisSpider):
    """Spider that reads urls from redis queue (myspider:start_urls)."""
    name = 'news'
    redis_key = 'news:start_urls'
    start_urls = ["http://www.bbc.com/news"]

    def parse(self, response):
        pass
    # I added these two methods: setname and getname
    def setname(self, name):
        self.name = name

    def getname(self):
        return self.name

def start():
    news_spider = NewsSpider()
    news_spider.setname('test_spider_name')
    print news_spider.getname()
    r = redis.Redis(host='127.0.0.1', port=6379, db=0)
    r.lpush('news:start_urls', 'http://news.sohu.com/')
    process = CrawlerProcess(get_project_settings())
    process.crawl('test_spider_name')  # look the spider up by its new name
    process.start()  # the script will block here until the crawling is finished

if __name__ == '__main__':
    start()

And I get this error:

test_spider_name
2017-05-26 20:14:05 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapybot)
2017-05-26 20:14:05 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'geospider.spiders', 'SPIDER_MODULES': ['geospider.spiders'], 'COOKIES_ENABLED': False, 'SCHEDULER': 'scrapy_redis.scheduler.Scheduler', 'DUPEFILTER_CLASS': 'scrapy_redis.dupefilter.RFPDupeFilter'}
Traceback (most recent call last):
  File "/home/kui/work/python/project/bigcrawler/geospider/control/command.py", line 29, in <module>
    start()
  File "/home/kui/work/python/project/bigcrawler/geospider/control/command.py", line 23, in start
    process.crawl('test_spider_name')
  File "/home/kui/work/python/env/lib/python2.7/site-packages/scrapy/crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/home/kui/work/python/env/lib/python2.7/site-packages/scrapy/crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/home/kui/work/python/env/lib/python2.7/site-packages/scrapy/crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/home/kui/work/python/env/lib/python2.7/site-packages/scrapy/spiderloader.py", line 55, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: test_spider_name'

I know this is a clumsy approach. I searched the web for a long time, but to no avail. Please help me, or suggest some ideas to make this work.

Thanks in advance.

This might help. The traceback shows that passing a string to process.crawl makes Scrapy resolve it through the spider loader, which only knows the name each spider class declares at import time, so renaming an instance has no effect on the lookup. Instead, set the name on the class itself and pass the class, not a name string, to process.crawl:

class NewsSpider(RedisSpider):
    """Spider that reads urls from redis queue (myspider:start_urls)."""
    name = 'news_redis'
    redis_key = 'news:start_urls'
    start_urls = ["http://www.bbc.com/news"]

    def parse(self, response):
        pass

def start():
    # Rename the spider on the class itself, before the crawler
    # instantiates it, and keep redis_key in sync with the new name.
    NewsSpider.name = 'test_spider_name_redis'
    NewsSpider.redis_key = NewsSpider.name + ':start_urls'

    r = redis.Redis(host='127.0.0.1', port=6379, db=0)
    r.lpush(NewsSpider.redis_key, 'http://news.sohu.com/')
    process = CrawlerProcess(get_project_settings())
    process.crawl(NewsSpider)  # pass the spider class, not a name string
    process.start()  # the script will block here until the crawling is finished

if __name__ == '__main__':
    start()
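
If each task needs its own name without mutating the class, another option is to pass the name through the spider's constructor: Scrapy's base Spider.__init__ accepts a name argument and assigns it to the instance, and process.crawl forwards extra keyword arguments to that constructor. Below is a minimal sketch, assuming the NewsSpider class above and assuming your scrapy_redis version reads redis_key from the spider instance; start_task and task_name are hypothetical names for whatever your web management layer supplies:

import redis
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def start_task(task_name):
    # Each task gets its own redis queue; otherwise every renamed
    # spider would keep reading the class-level queue.
    redis_key = task_name + ':start_urls'

    r = redis.Redis(host='127.0.0.1', port=6379, db=0)
    r.lpush(redis_key, 'http://news.sohu.com/')

    process = CrawlerProcess(get_project_settings())
    # Keyword arguments are forwarded to the spider's constructor;
    # Spider.__init__ accepts `name` and stores the remaining kwargs
    # (here, redis_key) as instance attributes, shadowing the class values.
    process.crawl(NewsSpider, name=task_name, redis_key=redis_key)
    process.start()

if __name__ == '__main__':
    start_task('test_spider_name_redis')

This keeps NewsSpider itself untouched, so several tasks can run in separate processes without overwriting each other's class attributes.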
