
Scrapy loop through txt file of words for url format

Let's say I have this spider:

class ExampleSpider(scrapy.Spider):
    name = 'ExampleSpider'
    start_urls = []

    def parse(self, response):
        for res in response.css('div.example'):
            item = {
                 'example': res.css('examplehere').get()
            }
            yield item

Is there a way that I can have start_urls = ["examplesite.com/{}/search"] and then loop through my text file of words to fill in the format, for example something like start_urls = ["examplesite.com/{}/search".format(i for i in txtfile.txt)], so that the spider would scrape the URLs for all the words in the text file? I'm not sure whether this can be done in Scrapy; please let me know the best way.

This question was asked before.

Use the start_requests method (the requests must be yielded, or Scrapy will never schedule them):

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'ExampleSpider'

    def start_requests(self):
        with open('spiders/urlFile.txt', 'r') as f:
            for line in f:
                url = f"https://examplesite.com/{line.rstrip()}/search"
                yield scrapy.Request(url=url)

    def parse(self, response):
        for res in response.css('div.example'):
            item = {
                'example': res.css('examplehere').get()
            }
            yield item
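The URL-formatting step the question asks about can also be sketched independently of Scrapy: read the word file once and build the full list of URLs with a list comprehension. This is a minimal sketch; the file name words.txt, the helper name build_search_urls, and the examplesite.com template are placeholders, not part of Scrapy's API:

```python
def build_search_urls(path, template="https://examplesite.com/{}/search"):
    """Read one word per line and plug each into the URL template.

    Blank lines are skipped; trailing newlines are stripped so they
    don't end up inside the URL.
    """
    with open(path) as f:
        return [template.format(line.strip()) for line in f if line.strip()]
```

In the spider you could assign the result to self.start_urls in __init__ (Scrapy's default start_requests() then requests each URL in that list), or keep the explicit start_requests override shown above and yield a scrapy.Request per generated URL.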
