
differences between scrapy.crawler and scrapy.spider?

I am new to Scrapy and quite confused about the crawler and the spider. It seems that both of them can crawl a website and parse items.

There is a Crawler class (/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py) and a CrawlSpider class (/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py) in Scrapy. Could anyone tell me the difference between them, and which one I should use under what conditions?

Thanks a lot in advance!

CrawlSpider is a subclass of BaseSpider: this is the class you need to extend if you want your spider to follow links according to its "Rule" list. Crawler is the main crawler class, sub-classed by CrawlerProcess. You will have to subclass CrawlSpider in your spider, but I don't think you will ever have to touch Crawler.
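
To illustrate, here is a minimal sketch of a CrawlSpider. The domain, URL pattern, spider name and item fields are made up for the example; the imports below match current Scrapy, while the older version in the question exposed the same classes under scrapy.contrib.spiders and scrapy.contrib.linkextractors.

    # Minimal CrawlSpider sketch (hypothetical site and fields).
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor


    class MyCrawlSpider(CrawlSpider):
        name = "my_crawl_spider"
        allowed_domains = ["example.com"]
        start_urls = ["http://example.com/"]

        # The "rules" list is what makes a CrawlSpider special: each Rule
        # says which links to extract and follow, and which callback to
        # run on the pages those links lead to.
        rules = (
            Rule(LinkExtractor(allow=r"/items/"), callback="parse_item", follow=True),
        )

        def parse_item(self, response):
            # Parse one followed page into an item (here a plain dict).
            yield {
                "url": response.url,
                "title": response.xpath("//title/text()").extract_first(),
            }

You would run this with "scrapy crawl my_crawl_spider". The Crawler / CrawlerProcess machinery in scrapy/crawler.py is what that command sets up for you behind the scenes, which is why you normally never subclass it yourself.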
