
Scrapy multiple next page

I want to scrape every page. I have found a way to do it with the scrapy shell, but I don't know whether my spider will iterate through every page or only follow the next one. I'm not quite sure how to implement this.

import string

import scrapy
from scrapy import Request

alphabet = string.ascii_uppercase
each_link = '.' + alphabet
each_url = ["https://myanimelist.net/anime.php?letter={0}".format(i) for i in each_link]
#sub_page_of_url = [[str(url)+"&show{0}".format(i) for i in range(50, 2000, 50)] for url in each_url] #start/stop/steps
#full_url =  each_url + sub_page_of_url

class AnimeScraper_Spider(scrapy.Spider):
    name = "Anime"

    def start_requests(self):
        for url in each_url:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        next_page_url = response.xpath(
            "//div[@class='bgColor1']//a[text()='Next']/@href").extract_first()

        # follow each link matched by the CSS selector and parse it with parse_anime
        for href in response.css('#content > div.normal_header.clearfix.pt16 > div > div > span > a:nth-child(1)::attr(href)'):
            url = response.urljoin(href.extract())
            yield Request(url, callback=self.parse_anime)
        # follow the "Next" pagination link back into this same method
        yield Request(next_page_url, callback=self.parse)

    def parse_anime(self, response):
        for tr_sel in response.css('div.js-categories-seasonal tr ~ tr'):
            return {
                "title": tr_sel.css('a[id] strong::text').extract_first().strip(),
                "synopsis": tr_sel.css("div.pt4::text").extract_first(),
                "type_": tr_sel.css('td:nth-child(3)::text').extract_first().strip(),
                "episodes": tr_sel.css('td:nth-child(4)::text').extract_first().strip(),
                "rating": tr_sel.css('td:nth-child(5)::text').extract_first().strip()
            }
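One way to sanity-check the pagination XPath before running the whole spider is to open a single letter page in the scrapy shell. A minimal sketch, assuming the letter-A URL and reusing the bgColor1 selector from the snippet above:

scrapy shell "https://myanimelist.net/anime.php?letter=A"

>>> # relative href of the "Next" link, or None if the selector does not match this page
>>> next_href = response.xpath("//div[@class='bgColor1']//a[text()='Next']/@href").extract_first()
>>> # urljoin resolves it against the current page URL, giving an absolute URL to yield as a Request
>>> response.urljoin(next_href)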

I think what you're trying to do is too complicated. It should simply be:

  1. Start from the main page
  2. Identify all of the pages that start with a particular letter
  3. For each of those pages, follow all of the "Next" links and repeat

It would look something like this:

import string

import scrapy
from scrapy import Request

class AnimeSpider(scrapy.Spider):

    name = "Anime"
    start_urls = ['https://myanimelist.net/anime.php']

    def parse(self, response):
        xp = "//div[@id='horiznav_nav']//li/a/@href"
        return (Request(url, callback=self.parse_anime_list_page) for url in response.xpath(xp).extract())

    def parse_anime_list_page(self, response):
        for tr_sel in response.css('div.js-categories-seasonal tr ~ tr'):
            yield {
                "title":  tr_sel.css('a[id] strong::text').extract_first().strip(),
                "synopsis": tr_sel.css("div.pt4::text").extract_first(),
                "type_": tr_sel.css('td:nth-child(3)::text').extract_first().strip(),
                "episodes": tr_sel.css('td:nth-child(4)::text').extract_first().strip(), 
                "rating": tr_sel.css('td:nth-child(5)::text').extract_first().strip(),
            }

        next_urls = response.xpath("//div[@class='spaceit']//a/@href").extract()
        for next_url in next_urls:
            yield Request(response.urljoin(next_url), callback=self.parse_anime_list_page)
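To try the spider out, it can be saved as a standalone file and run with scrapy's runspider command, writing the scraped items to a JSON feed. A sketch; the file name anime_spider.py and the output path are just placeholders:

scrapy runspider anime_spider.py -o anime.json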
