
Dynamic start-urls list when crawling with scrapy

import scrapy

class SomewebsiteProductSpider(scrapy.Spider):
    name = "somewebsite"
    allowed_domains = ["somewebsite.com"]

    start_urls = [

    ]

    def parse(self, response):
        # somewebsiteItem is the project's Item subclass, defined in the
        # project's items module
        items = somewebsiteItem()

        title = response.xpath('//h1[@id="title"]/span/text()').extract()
        sale_price = response.xpath('//span[contains(@id,"ourprice") or contains(@id,"saleprice")]/text()').extract()
        category = response.xpath('//a[@class="a-link-normal a-color-tertiary"]/text()').extract()
        availability = response.xpath('//div[@id="availability"]//text()').extract()
        items['product_name'] = ''.join(title).strip()
        items['product_sale_price'] = ''.join(sale_price).strip()
        items['product_category'] = ','.join(map(lambda x: x.strip(), category)).strip()
        items['product_availability'] = ''.join(availability).strip()
        fo = open("C:\\Users\\user1\\PycharmProjects\\test.txt", "w")
        fo.write("%s \n%s \n%s" % (items['product_name'], items['product_sale_price'], self.start_urls))
        fo.close()
        print(items)
        yield items

test.py

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(SomewebsiteProductSpider)
process.start()

How can I pass a dynamic start_urls list to a "SomewebsiteProductSpider" object from test.py before launching the crawling process? Any help would be appreciated. Thank you.

process.crawl accepts optional keyword arguments that are passed to the spider's constructor, so you can either populate start_urls from the spider's __init__ or use a custom start_requests method. For example:

test.py

...
process.crawl(SomewebsiteProductSpider, url_list=[...])

somespider.py

class SomewebsiteProductSpider(scrapy.Spider):
    ...
    def __init__(self, *args, **kwargs):
        # Pop the custom url_list kwarg before delegating to the base class
        self.start_urls = kwargs.pop('url_list', [])
        super(SomewebsiteProductSpider, self).__init__(*args, **kwargs)
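The custom start_requests alternative mentioned above would look roughly like this; a minimal sketch, assuming the same hypothetical url_list keyword argument is supplied through process.crawl:

import scrapy

class SomewebsiteProductSpider(scrapy.Spider):
    name = "somewebsite"

    def __init__(self, *args, **kwargs):
        # url_list is a hypothetical kwarg supplied via process.crawl
        self.url_list = kwargs.pop('url_list', [])
        super(SomewebsiteProductSpider, self).__init__(*args, **kwargs)

    def start_requests(self):
        # Yield one request per URL instead of relying on start_urls
        for url in self.url_list:
            yield scrapy.Request(url, callback=self.parse)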

You can avoid the extra kwargs parsing from @mizghun's answer simply by passing start_urls as a parameter.

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = 'quotes'

    def parse(self, response):
        print(response.url)

process = CrawlerProcess()
process.crawl(QuotesSpider, start_urls=["http://example.com", "http://example.org"])
process.start()
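This works because Scrapy's base Spider.__init__ copies any keyword arguments onto the spider instance (effectively self.__dict__.update(kwargs)), so the start_urls passed to process.crawl simply becomes the spider's start_urls attribute.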
