
How to 'pause' a spider in Scrapy?

I'm using Tor (through Privoxy) for a scraping project, and would like to write a Scrapy extension (cf. https://doc.scrapy.org/en/latest/topics/extensions.html) which requests a new identity (cf. https://stem.torproject.org/faq.html#how-do-i-request-a-new-identity-from-tor) whenever a certain number of items are scraped.

However, changing the identity takes some time (a couple of seconds), during which I expect nothing can be scraped. I would therefore like the extension to 'pause' the spider until the IP change has completed.

Is this possible? (I have read some solutions involving Ctrl+C and specifying a JOBDIR, but this seems a bit drastic, as I only want to pause the spider, not stop the entire engine.)
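
For context, the identity change I have in mind would look roughly like this (a sketch following the stem FAQ linked above; the control port 9051 and password authentication are assumptions that have to match the ControlPort / HashedControlPassword settings in torrc):

    from stem import Signal
    from stem.control import Controller

    def request_new_identity():
        # Ask Tor for a fresh circuit (and thus a new exit IP).
        # Port 9051 and the password are assumptions; adjust to your torrc.
        with Controller.from_port(port=9051) as controller:
            controller.authenticate(password='my_control_password')
            controller.signal(Signal.NEWNYM)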

The crawler engine has pause and unpause methods, so you can try something like this:

class SomeExtension(object):

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(...)
        o.crawler = crawler
        return o

    def change_tor(self):
        # Pause the engine so no new requests are processed while the IP changes.
        self.crawler.engine.pause()
        # ... code that requests a new Tor identity goes here ...
        self.crawler.engine.unpause()
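
To call change_tor automatically every N items, the extension can count items through Scrapy's item_scraped signal and pause the engine around the identity change. The sketch below assumes a setting name TOR_NEW_IDENTITY_ITEM_COUNT and a request_new_identity helper (the stem NEWNYM call sketched in the question); both are illustrative, not part of Scrapy:

    from scrapy import signals

    class TorNewIdentityExtension(object):

        def __init__(self, crawler, item_count):
            self.crawler = crawler
            self.item_count = item_count
            self.items_scraped = 0

        @classmethod
        def from_crawler(cls, crawler):
            # TOR_NEW_IDENTITY_ITEM_COUNT is a made-up setting name for this sketch.
            o = cls(crawler, crawler.settings.getint('TOR_NEW_IDENTITY_ITEM_COUNT', 50))
            crawler.signals.connect(o.item_scraped, signal=signals.item_scraped)
            return o

        def item_scraped(self, item, response, spider):
            self.items_scraped += 1
            if self.items_scraped >= self.item_count:
                self.items_scraped = 0
                self.change_tor()

        def change_tor(self):
            # Pause the engine so nothing is downloaded while the circuit changes.
            self.crawler.engine.pause()
            try:
                request_new_identity()  # the stem NEWNYM call from the question
            finally:
                self.crawler.engine.unpause()

You would then enable the extension through the EXTENSIONS setting in settings.py so Scrapy instantiates it at startup.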
