
Scrapy Splash is always returning the same page

For each of several Disqus users whose profile URLs are known in advance, I want to scrape their name and the usernames of their followers. I am using scrapy-splash to do this. However, when I parse the responses, it always seems to be scraping the first user's page. I tried setting wait to 10 and dont_filter to True, but it isn't working. What should I do now?

Here is my spider:

import scrapy
from disqus.items import DisqusItem

class DisqusSpider(scrapy.Spider):
    name = "disqusSpider"
    start_urls = ["https://disqus.com/by/disqus_sAggacVY39/", "https://disqus.com/by/VladimirUlayanov/", "https://disqus.com/by/Beasleyhillman/", "https://disqus.com/by/Slick312/"]
    splash_def = {"endpoint" : "render.html", "args" : {"wait" : 10}}

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url = url, callback = self.parse_basic, dont_filter = True, meta = {
                "splash" : self.splash_def,
                "base_profile_url" : url
            })

    def parse_basic(self, response):
        name = response.css("h1.cover-profile-name.text-largest.truncate-line::text").extract_first()
        disqusItem = DisqusItem(name = name)
        request = scrapy.Request(url = response.meta["base_profile_url"] + "followers/", callback = self.parse_followers, dont_filter = True, meta = {
            "item" : disqusItem,
            "base_profile_url" : response.meta["base_profile_url"],
            "splash": self.splash_def
        })
        print "parse_basic", response.url, request.url
        yield request

    def parse_followers(self, response):
        print "parse_followers", response.meta["base_profile_url"], response.meta["item"]
        followers = response.css("div.user-info a::attr(href)").extract()

DisqusItem is defined as follows:

class DisqusItem(scrapy.Item):
    name = scrapy.Field()
    followers = scrapy.Field()

Here are the results:

2017-08-07 23:09:12 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/disqus_sAggacVY39/ {'name': u'Trailer Trash'}
2017-08-07 23:09:14 [scrapy.extensions.logstats] INFO: Crawled 5 pages (at 5 pages/min), scraped 0 items (at 0 items/min)
2017-08-07 23:09:18 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/VladimirUlayanov/ {'name': u'Trailer Trash'}
2017-08-07 23:09:27 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/Beasleyhillman/ {'name': u'Trailer Trash'}
2017-08-07 23:09:40 [scrapy.core.engine] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
parse_followers https://disqus.com/by/Slick312/ {'name': u'Trailer Trash'}
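Note that every crawled line above is a POST to the same endpoint, http://localhost:8050/render.html; the profile URL to render only appears inside the JSON body of the POST. Any dedup or cache key that fingerprints requests on method and URL alone would therefore see four identical requests. A minimal stdlib-only sketch of the idea (the fingerprint functions here are illustrative, not Scrapy's actual implementation):

```python
import hashlib
import json

def url_only_fingerprint(method, url):
    # Fingerprint that ignores the request body, as a purely
    # URL-based dupe filter would.
    return hashlib.sha1(f"{method} {url}".encode()).hexdigest()

def body_aware_fingerprint(method, url, body):
    # Splash-aware fingerprint: the target URL lives in the POST
    # body, so the body must be part of the key.
    return hashlib.sha1(f"{method} {url} {body}".encode()).hexdigest()

profiles = [
    "https://disqus.com/by/disqus_sAggacVY39/",
    "https://disqus.com/by/VladimirUlayanov/",
]
endpoint = "http://localhost:8050/render.html"

url_fps = {url_only_fingerprint("POST", endpoint) for p in profiles}
body_fps = {
    body_aware_fingerprint("POST", endpoint, json.dumps({"url": p, "wait": 10}))
    for p in profiles
}
print(len(url_fps), len(body_fps))  # → 1 2
```

Both profile requests collapse to a single URL-only fingerprint, while a body-aware fingerprint keeps them distinct, which is consistent with every callback seeing the same rendered page.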

Here is the settings.py file:

# -*- coding: utf-8 -*-

# Scrapy settings for disqus project
#

BOT_NAME = 'disqus'

SPIDER_MODULES = ['disqus.spiders']
NEWSPIDER_MODULE = 'disqus.spiders'

ROBOTSTXT_OBEY = False

SPLASH_URL = 'http://localhost:8050' 

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

DUPEFILTER_CLASS = 'scrapyjs.SplashAwareDupeFilter'
DUPEFILTER_DEBUG = True

DOWNLOAD_DELAY = 10
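One thing worth checking in these settings: the downloader middlewares come from `scrapy_splash`, but `DUPEFILTER_CLASS` points at `scrapyjs`, the old package name. The scrapy-splash README configures the splash-aware pieces from the current package, roughly:

```python
# Settings fragment following the scrapy-splash README; if the dupe
# filter is not splash-aware, different target URLs rendered through
# the same Splash endpoint can be treated as duplicate requests.
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
```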

I was able to get it to work by using SplashRequest instead of scrapy.Request.

For example:

import scrapy
from disqus.items import DisqusItem
from scrapy_splash import SplashRequest


class DisqusSpider(scrapy.Spider):
    name = "disqusSpider"
    start_urls = ["https://disqus.com/by/disqus_sAggacVY39/", "https://disqus.com/by/VladimirUlayanov/", "https://disqus.com/by/Beasleyhillman/", "https://disqus.com/by/Slick312/"]

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse_basic, dont_filter = True, endpoint='render.json',
                        args={
                            'wait': 2,
                            'html': 1
                        })
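With `endpoint='render.json'` and `html: 1`, Splash returns a JSON object whose `html` field carries the rendered page, and each Scrapy response now corresponds to its own target URL, so the scraped names no longer collapse to the first profile's. A stdlib-only sketch of the payload shape (the payloads below are mocked for illustration, not real Splash responses):

```python
import re

# Mocked render.json payloads for two different profiles; a real
# response also carries fields such as "url", "requestedUrl", etc.
payloads = [
    {"url": "https://disqus.com/by/disqus_sAggacVY39/",
     "html": "<h1>Trailer Trash</h1>"},
    {"url": "https://disqus.com/by/VladimirUlayanov/",
     "html": "<h1>Vladimir Ulayanov</h1>"},
]

# Each rendered page yields its own name, one per profile URL.
names = {p["url"]: re.search(r"<h1>(.*?)</h1>", p["html"]).group(1)
         for p in payloads}
print(names["https://disqus.com/by/disqus_sAggacVY39/"])  # → Trailer Trash
```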
