
Scrapy Splash is returning an empty list

I'm currently learning Scrapy Splash and building a very basic spider by following a tutorial.

But running the spider with scrapy crawl quotes -o results_file.csv only returns an empty CSV file.

It also says it crawled 0 pages, so it seems the page isn't being found?

I'm using Windows 10 Home.

I've double-checked the XPath expressions, and Splash is running at http://localhost:8050/.

I know I'm missing something really simple here... can you help?

I put this in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPLASH_URL = 'http://localhost:8050/'

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

Here is the spider I'm running:

import scrapy
from scrapy_splash import SplashRequest

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/js/']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url=url,
                                callback=self.parse,
                                endpoint='render.html')


    def parse(self, response):
        quotes = response.xpath('//*[@class="quote"]')
        for quote in quotes:
            yield {'author': quote.xpath('.//*[@class="author"]/text()').extract_first(),
                   'quote': quote.xpath('.//*[@class="text"]/text()').extract_first()}

Here is the output I get:

2022-01-02 15:32:45 [scrapy.utils.log] INFO: Scrapy 2.4.1 started (bot: quotes_spider)
2022-01-02 15:32:45 [scrapy.utils.log] INFO: Versions: lxml 4.7.1.0, libxml2 2.9.12, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 21.7.0, Python 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 21.0.0 (OpenSSL 1.1.1f  31 Mar 2020), cryptography 36.0.0, Platform Windows-10-10.0.19042-SP0
2022-01-02 15:32:45 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2022-01-02 15:32:45 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'quotes_spider',
 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage',
 'NEWSPIDER_MODULE': 'quotes_spider.spiders',
 'SPIDER_MODULES': ['quotes_spider.spiders']}
2022-01-02 15:32:45 [scrapy.extensions.telnet] INFO: Telnet Password: 4cc7c27ecac6bf1d
2022-01-02 15:32:45 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2022-01-02 15:32:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy_splash.SplashCookiesMiddleware',
 'scrapy_splash.SplashMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-01-02 15:32:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-01-02 15:32:45 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-01-02 15:32:45 [scrapy.core.engine] INFO: Spider opened
2022-01-02 15:32:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-01-02 15:32:45 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-01-02 15:32:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/> (referer: None)
2022-01-02 15:32:46 [scrapy.core.engine] INFO: Closing spider (finished)
2022-01-02 15:32:46 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 223,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 2200,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.569004,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2022, 1, 2, 14, 32, 46, 398295),
 'log_count/DEBUG': 1,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2022, 1, 2, 14, 32, 45, 829291)}
2022-01-02 15:32:46 [scrapy.core.engine] INFO: Spider closed (finished)

It currently works for me, so the problem is probably in your settings. Take a look at mine and see if they help:


BOT_NAME = 'test_again' #change to yours

SPIDER_MODULES = ['test_again.spiders'] #change to yours
NEWSPIDER_MODULE = 'test_again.spiders' #change to yours
SPLASH_URL = 'http://localhost:8050'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'my-cool-project (http://example.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False


DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
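
With these settings in place, it can also help to give Splash a short render wait on the JS version of the quotes page, since the quotes are inserted by JavaScript after the initial HTML loads. Below is a minimal sketch of the spider with such a wait; the 1-second value is an assumption, adjust it as needed:

import scrapy
from scrapy_splash import SplashRequest

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/js/']

    def start_requests(self):
        for url in self.start_urls:
            # render.html returns the page HTML after JavaScript has run;
            # 'wait' (assumed 1 second here) gives the script time to insert the quotes
            yield SplashRequest(url=url,
                                callback=self.parse,
                                endpoint='render.html',
                                args={'wait': 1})

    def parse(self, response):
        for quote in response.xpath('//*[@class="quote"]'):
            yield {'author': quote.xpath('.//*[@class="author"]/text()').get(),
                   'quote': quote.xpath('.//*[@class="text"]/text()').get()}

If the middleware is active, the request in the crawl log should show up as going through the Splash server at http://localhost:8050 rather than as a plain GET straight to the site, which is a quick way to confirm that your settings are actually being picked up.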

