
scrapy start_urls from txt file

I have about 100K URLs to scrape, so I want to read them from a txt file. Here is the code:

import scrapy
from scrapy import Request
from scrapy.crawler import CrawlerProcess

class ConadstoresSpider(scrapy.Spider):
    name = 'conadstores'
    headers = {'user_agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"}
    allowed_domains = ['conad.it']
    #start_urls = ['http://www.conad.it/ricerca-negozi/negozio.002781.html','https://www.conad.it/ricerca-negozi/negozio.006804.html']
    #start_urls = [l.strip() for l in open("/Users/macbook/PycharmProjects/conad/conad/conadlinks.txt").readlines()]
    #f = open("/Users/macbook/PycharmProjects/conad/conad/conadlinks.txt")
    #start_urls = [url.strip() for url in f.readlines()]
    #f.close()

    with open('/Users/macbook/PycharmProjects/conad/conad/conadlinks.txt') as file:
        start_urls = [line.strip() for line in file]


    def start_request(self):
        request = Request(url = self.start_urls, callback=self.parse)
        yield request

    def parse(self, response):
        yield {
            'address' : response.css('.address-oswald::text').extract(),
            'phone' : response.css('span.phone::text').extract(),

        }

But I keep getting this error:

2021-12-08 13:27:48 [scrapy.core.engine] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "/Users/macbook/PycharmProjects/conad/venv/lib/python3.9/site-packages/scrapy/core/engine.py", line 127, in _next_request
    request = next(slot.start_requests)
  File "/Users/macbook/PycharmProjects/conad/conad/conad/middlewares.py", line 52, in process_start_requests
    for r in start_requests:
  File "/Users/macbook/PycharmProjects/conad/venv/lib/python3.9/site-packages/scrapy/spiders/__init__.py", line 83, in start_requests
    yield Request(url, dont_filter=True)
  File "/Users/macbook/PycharmProjects/conad/venv/lib/python3.9/site-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/Users/macbook/PycharmProjects/conad/venv/lib/python3.9/site-packages/scrapy/http/request/__init__.py", line 62, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: %7B%5Crtf1%5Cansi%5Cansicpg1252%5Ccocoartf2580

Any ideas? Thanks!
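One clue is already in the error message: the value after "Missing scheme in request url:" is percent-encoded. A small sketch (not from the original post) that decodes it with Python's standard urllib.parse shows what Scrapy actually received:

from urllib.parse import unquote

# decode the percent-encoded value from the ValueError above
print(unquote("%7B%5Crtf1%5Cansi%5Cansicpg1252%5Ccocoartf2580"))
# prints: {\rtf1\ansi\ansicpg1252\cocoartf2580

That looks like an RTF header rather than a URL, which suggests the .txt file may have been saved in rich-text format instead of plain text.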

We can override the start_urls logic in the spider's start_requests() method.

Here is a simple way to extract the data:

import scrapy


class ConadstoresSpider(scrapy.Spider):
    name = 'conadstores'

    def start_requests(self):
        # read the file data (you can use different logic to extract URLs from the text file)
        a_file = open("/Users/macbook/PycharmProjects/conad/conad/conadlinks.txt")
        file_contents = a_file.read()
        a_file.close()
        contents_split = file_contents.splitlines()
        # extract urls from text file and store in list
        for url in contents_split:
            # send request to extracted URL.
            yield scrapy.Request(url)

    def parse(self, response, **kwargs):
        yield {
            'address': response.css('.address-oswald::text').extract(),
            'phone': response.css('span.phone::text').extract(),

        }

You can use different file-reading logic, but make sure it produces a list of URLs.
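For instance, here is a minimal sketch of such a variant (the path is the one from the question; the read_urls helper and the scheme check are assumptions added for illustration, not part of the original answer):

import scrapy


def read_urls(path):
    # hypothetical helper: keep only non-empty lines that look like http(s) URLs
    with open(path) as url_file:
        return [line.strip() for line in url_file
                if line.strip().startswith(("http://", "https://"))]


class ConadstoresSpider(scrapy.Spider):
    name = 'conadstores'

    def start_requests(self):
        for url in read_urls("/Users/macbook/PycharmProjects/conad/conad/conadlinks.txt"):
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response, **kwargs):
        # same parse logic as in the answer above
        yield {
            'address': response.css('.address-oswald::text').extract(),
            'phone': response.css('span.phone::text').extract(),
        }

The with block closes the file automatically, and filtering on the scheme makes problems like the one above easier to spot: a file whose lines are not plain http(s) URLs simply yields no requests instead of raising "Missing scheme in request url".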
