
Dynamic start-urls list when crawling with scrapy

import scrapy


class SomewebsiteProductSpider(scrapy.Spider):
    name = "somewebsite"
    allowed_domains = ["somewebsite.com"]

    start_urls = [

    ]

    def parse(self, response):
        # somewebsiteItem is the project's Item class, defined in items.py
        items = somewebsiteItem()

        title = response.xpath('//h1[@id="title"]/span/text()').extract()
        sale_price = response.xpath('//span[contains(@id,"ourprice") or contains(@id,"saleprice")]/text()').extract()
        category = response.xpath('//a[@class="a-link-normal a-color-tertiary"]/text()').extract()
        availability = response.xpath('//div[@id="availability"]//text()').extract()
        items['product_name'] = ''.join(title).strip()
        items['product_sale_price'] = ''.join(sale_price).strip()
        items['product_category'] = ','.join(map(lambda x: x.strip(), category)).strip()
        items['product_availability'] = ''.join(availability).strip()
        fo = open("C:\\Users\\user1\\PycharmProjects\\test.txt", "w")
        fo.write("%s \n%s \n%s" % (items['product_name'], items['product_sale_price'], self.start_urls))
        fo.close()
        print(items)
        yield items

test.py

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(SomewebsiteProductSpider)
process.start()

How can I pass a dynamic start_urls list to a SomewebsiteProductSpider object from test.py before launching the crawling process? Any help would be appreciated. Thank you.

process.crawl accepts optional keyword arguments that are passed to the spider's constructor, so you can either populate start_urls in the spider's __init__ or use a custom start_requests method. For example:

test.py

...
process.crawl(SomewebsiteProductSpider, url_list=[...])

somespider.py

class SomewebsiteProductSpider(scrapy.Spider):
    ...
    def __init__(self, *args, **kwargs):
        self.start_urls = kwargs.pop('url_list', [])
        super(SomewebsiteProductSpider, self).__init__(*args, **kwargs)
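
For the second option, here is a minimal sketch of the start_requests approach; url_list is the same hypothetical keyword argument supplied through process.crawl:

import scrapy


class SomewebsiteProductSpider(scrapy.Spider):
    name = "somewebsite"

    def __init__(self, *args, **kwargs):
        # url_list is a hypothetical keyword passed in via process.crawl(...)
        self.url_list = kwargs.pop('url_list', [])
        super(SomewebsiteProductSpider, self).__init__(*args, **kwargs)

    def start_requests(self):
        # Yield one request per URL instead of relying on start_urls
        for url in self.url_list:
            yield scrapy.Request(url, callback=self.parse)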

You can avoid the additional kwargs parsing in @mizghun's answer by simply passing start_urls as a parameter.

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = 'quotes'

    def parse(self, response):
        print(response.url)


process = CrawlerProcess()
process.crawl(QuotesSpider, start_urls=["http://example.com", "http://example.org"])
process.start()
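
This works because scrapy.Spider's default __init__ copies keyword arguments onto the spider instance, so the start_urls passed to process.crawl becomes the spider attribute of the same name. As a minimal sketch of building the list dynamically at launch time, the URLs could be read from a file; urls.txt here is a hypothetical input file with one URL per line:

from scrapy.crawler import CrawlerProcess

# urls.txt is a hypothetical file containing one URL per line
with open("urls.txt") as f:
    url_list = [line.strip() for line in f if line.strip()]

process = CrawlerProcess()
process.crawl(QuotesSpider, start_urls=url_list)
process.start()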
