How does Scrapy find a Spider class by its name?
Say I have this spider:

```python
class SomeSpider(Spider):
    name = 'spname'
```
Then I can crawl my spider by creating a new instance of SomeSpider and handing it to the crawler, like this:

```python
spider = SomeSpider()
crawler = Crawler(settings)
crawler.configure()
crawler.crawl(spider)
...
```
Can I do the same thing using only the spider's name, i.e. 'spname'?

```python
crawler.crawl('spname')  # I give just the spider name here
```
How can the Spider be created dynamically? I guess the spider manager does this internally, since this works fine:

```
scrapy crawl spname
```
One solution would be to parse my spiders folder, collect all the Spider classes, and filter them by their name attribute. But that seems like a far-fetched solution!

Thanks in advance for your help.
Take a look at the source code:

```python
# scrapy/commands/crawl.py
class Command(ScrapyCommand):
    def run(self, args, opts):
        ...

# scrapy/spidermanager.py
class SpiderManager(object):
    def _load_spiders(self, module):
        ...
    def create(self, spider_name, **spider_kwargs):
        ...

# scrapy/utils/spider.py
def iter_spider_classes(module):
    """Return an iterator over all spider classes defined in the given module
    that can be instantiated (i.e. which have a name)
    """
    ...
```
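The whole mechanism boils down to building a `name -> class` registry and then instantiating by lookup. Here is a minimal, pure-Python sketch of that idea (no Scrapy dependency; the `Spider` base class and the spider names here are illustrative stand-ins, and the real `iter_spider_classes` walks a module rather than a namespace dict):

```python
import inspect

class Spider:
    """Stand-in for scrapy's Spider base class (illustrative only)."""
    name = None

class SomeSpider(Spider):
    name = 'spname'

class OtherSpider(Spider):
    name = 'other'

def iter_spider_classes(namespace):
    """Yield Spider subclasses that define a non-empty name,
    mimicking what scrapy.utils.spider.iter_spider_classes does."""
    for obj in namespace.values():
        if (inspect.isclass(obj) and issubclass(obj, Spider)
                and obj is not Spider and getattr(obj, 'name', None)):
            yield obj

# Build the name -> class registry, as SpiderManager._load_spiders does.
registry = {cls.name: cls for cls in iter_spider_classes(globals())}

# create('spname') then reduces to a dict lookup plus instantiation.
spider = registry['spname']()
```

Once the registry exists, resolving a spider by name is just `registry[spider_name](**spider_kwargs)`, which is essentially what `SpiderManager.create` does.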
Inspired by @kev's answer, here is a function that inspects spider classes (the original snippet used `self._spiders` outside a class and shadowed `module`; this version is self-contained):

```python
from scrapy.utils.misc import walk_modules
from scrapy.utils.spider import iter_spider_classes

def load_spiders(module_path='spiders.SomeSpider'):
    spiders = {}
    for module in walk_modules(module_path):
        for spcls in iter_spider_classes(module):
            spiders[spcls.name] = spcls
    return spiders
```

Then you can instantiate by name:

```python
spiders = load_spiders()
somespider = spiders['spname']()
```
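For the module-walking half, `walk_modules` just imports the module at the given path and, if it is a package, every submodule underneath it. A rough stdlib-only sketch of the same behavior (walking the stdlib `json` package purely as a demonstration):

```python
import importlib
import pkgutil

def walk_modules(path):
    """Import the module at `path` and, if it is a package, all of its
    submodules -- a rough sketch of scrapy.utils.misc.walk_modules."""
    mods = [importlib.import_module(path)]
    if hasattr(mods[0], '__path__'):  # it is a package
        for _, name, _ in pkgutil.walk_packages(mods[0].__path__, path + '.'):
            mods.append(importlib.import_module(name))
    return mods

# Walking the stdlib 'json' package pulls in json.decoder, json.encoder, ...
names = [m.__name__ for m in walk_modules('json')]
```

Feeding each of those modules to `iter_spider_classes` is how Scrapy discovers every spider defined anywhere under the configured spiders package.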