
How to crawl multiple websites on different schedules in Scrapy

I have multiple websites stored in a database, each with its own crawl interval, e.g. every 5 or 10 minutes per website. I have created a spider and run it with cron; it takes all the websites from the database and crawls them in parallel. How can I crawl each website on the interval stored in the database? Is there a way to handle this in Scrapy?

Have you tried adding a scheduling component to start_requests?

def start_requests(self):
    while True:
        # pull every URL whose crawl time is due
        for spid_url in url_db['to_crawl'].find(typ='due'):
            # update the URL's next crawl time in the database
            yield scrapy.Request(...)

        # sleep until the next URL is due,
        # then mark it as due again
        if enough:
            break
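For reference, here is a more self-contained sketch of the same idea, assuming the schedule lives in a SQLite table named sites with url, interval_s and next_due columns (all hypothetical names, not part of the question). It polls the table inside start_requests, yields a request for every URL that is due, and pushes that URL's next crawl time forward. Note that time.sleep blocks Scrapy's reactor, so the wait also pauses in-flight downloads; this is only a rough illustration of the approach above, not a production scheduler.

import sqlite3
import time

import scrapy


class ScheduledSpider(scrapy.Spider):
    name = "scheduled"

    def __init__(self, db_path="sites.db", *args, **kwargs):
        super().__init__(*args, **kwargs)
        # assumed schema: sites(url TEXT, interval_s INTEGER, next_due REAL)
        self.db = sqlite3.connect(db_path)

    def start_requests(self):
        while True:
            now = time.time()
            cur = self.db.execute(
                "SELECT url, interval_s FROM sites WHERE next_due <= ?", (now,)
            )
            for url, interval_s in cur.fetchall():
                # push the next crawl time forward before yielding
                self.db.execute(
                    "UPDATE sites SET next_due = ? WHERE url = ?",
                    (now + interval_s, url),
                )
                self.db.commit()
                # dont_filter=True so repeat visits are not dropped as duplicates
                yield scrapy.Request(url, callback=self.parse, dont_filter=True)

            # crude poll interval; time.sleep blocks the Twisted reactor,
            # so in-flight downloads pause during this wait
            time.sleep(30)

    def parse(self, response):
        self.logger.info("Crawled %s", response.url)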
