
give names to start_urls in scrapy

I am crawling URLs from a CSV file, and each URL has a name. How can I download these URLs and save the results under their names?

import csv
import scrapy

urls = []
reader = csv.reader(open("source1.csv"))
for Name, Sources1 in reader:
    urls.append(Sources1)

class Spider(scrapy.Spider):
    name = "test"
    start_urls = urls[1:]

    def parse(self, response):
        filename = Name + '.pdf'  # how can I get the name I read from the CSV file?

Perhaps you want to override the start_requests() method instead of using start_urls?

Example:

import csv
import scrapy

class MySpider(scrapy.Spider):
    name = 'test'

    def start_requests(self):
        # read (name, url) rows from the CSV, skipping the header row
        with open('source1.csv') as f:
            reader = csv.reader(f)
            next(reader)
            for name, url in reader:
                yield scrapy.Request(url, meta={'name': name})

The request's meta dict is passed along to the response, so you can later do:

def parse(self, response):
    name = response.meta.get('name')
    ...
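For the asker's concrete goal of saving each download as <name>.pdf, the callback might look like the sketch below. It assumes each URL returns the PDF bytes directly in the response body; the filename construction comes from the question, the rest is illustrative:

def parse(self, response):
    # build the filename from the name carried over in meta
    filename = response.meta['name'] + '.pdf'
    with open(filename, 'wb') as f:
        f.write(response.body)  # raw bytes of the downloaded file
    self.logger.info('Saved %s', filename)

Combined with the start_requests() above, the spider can be run standalone with scrapy runspider myspider.py.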
