
How can I handle pagination with Scrapy and Splash, if the href of the button is javascript:void(0)?

I am trying to scrape the names and links of universities from this site: https://www.topuniversities.com/university-rankings/world-university-rankings/2021, but I ran into a problem handling the pagination: the href of the button that points to the next page is javascript:void(0), so I cannot reach the next page with scrapy.Request() or response.follow(). Is there a way to handle this kind of pagination?

Screenshot of the website

Screenshot of the tag and its href

The URL of this site contains no parameters and stays the same when the next-page button is clicked, so I cannot handle the pagination by modifying the URL.

The snippet below can only fetch the university names and links from the first and second pages:

import scrapy
from scrapy_splash import SplashRequest


class UniSpider(scrapy.Spider):
    name = 'uni'
    allowed_domains = ['www.topuniversities.com']

    script = """
    function main(splash, args)
      splash:set_user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36")
      splash.private_mode_enabled = false
      assert(splash:go(args.url))
      assert(splash:wait(3))

      return {
        html = splash:html()
      }
    end
    """

    next_page = """
    function main(splash, args)
        splash:set_user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36")
        splash.private_mode_enabled = false
        assert(splash:go(args.url))
        assert(splash:wait(3))

        local btn = assert(splash:jsfunc([[
        function(){
        document.querySelector("#alt-style-pagination a.page-link.next").click()
      }
        ]]))
        assert(splash:wait(2))
        btn()

        splash:set_viewport_full()
        assert(splash:wait(3))

        return {
          html = splash:html()
        }
    end
    """

    def start_requests(self):
        yield SplashRequest(
            url="https://www.topuniversities.com/university-rankings/world-university-rankings/2021",
            callback=self.parse, endpoint="execute",
            args={"lua_source": self.script})

    def parse(self, response):
        for uni in response.css("a.uni-link"):
            uni_link = response.urljoin(uni.css("::attr(href)").get())
            yield {
                "name": uni.css("::text").get(),
                "link": uni_link
            }

        yield SplashRequest(
            url=response.url,
            callback=self.parse, endpoint="execute",
            args={"lua_source": self.next_page}
        )

You don't need Splash for this simple website.

Try loading the following link:

https://www.topuniversities.com/sites/default/files/qs-rankings-data/en/2057712.txt

It contains all the universities; the site loads this file/JSON only once, and then displays the information with client-side pagination.

Here is a short version (without Scrapy):

from requests import get
from json import loads
from lxml.html import fromstring

url = "https://www.topuniversities.com/sites/default/files/qs-rankings-data/en/2057712.txt"
html = get(url)

## another approach for loading json
# jdata = loads(html.content.decode())

jdata = html.json()
for x in jdata['data']:
    ## columns available for each university
    core_id = x['core_id']
    country = x['country']
    city = x['city']
    guide = x['guide']
    nid = x['nid']
    title = x['title']
    logo = x['logo']
    score = x['score']
    rank_display = x['rank_display']
    region = x['region']
    stars = x['stars']
    recm = x['recm']
    dagger = x['dagger']

    ## 'title' is an HTML snippet; convert it to plain text
    soup = fromstring(title)
    title = soup.xpath(".//a/text()")[0]

    print(title)

The code above prints the "title" of each university; try saving it, together with the other available columns, to a CSV/Excel file. The output looks like this:

Massachusetts Institute of Technology (MIT) 
Stanford University
Harvard University
California Institute of Technology (Caltech)
University of Oxford
ETH Zurich - Swiss Federal Institute of Technology
University of Cambridge
Imperial College London
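The CSV export suggested above could be sketched as follows. The `sample` dict and the `<div>` wrapper around the link are assumptions standing in for the shape of the real JSON; in practice you would pass the `jdata` fetched from the 2057712.txt URL instead:

```python
import csv
from lxml.html import fromstring

def rows_from_json(jdata):
    """Flatten the ranking JSON into plain rows: title text plus a few columns."""
    rows = []
    for x in jdata["data"]:
        # 'title' holds an HTML snippet; extract the link text as in the loop above
        title = fromstring(x["title"]).xpath(".//a/text()")[0]
        rows.append([title, x["country"], x["city"], x["region"], x["rank_display"]])
    return rows

# Tiny sample mimicking the structure of the real file (assumed, not real data)
sample = {"data": [{
    "title": '<div><a href="/mit">Massachusetts Institute of Technology (MIT)</a></div>',
    "country": "United States", "city": "Cambridge",
    "region": "North America", "rank_display": "1",
}]}

with open("universities.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "country", "city", "region", "rank_display"])
    writer.writerows(rows_from_json(sample))
```

The same loop could also run inside a Scrapy callback that requests the JSON URL directly, which would keep the rest of the spider unchanged while dropping Splash entirely.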

Disclaimer: the technical posts on this site are licensed under CC BY-SA 4.0; if you repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.

 
© 2020-2024 STACKOOM.COM