
Difficulty scraping needed data from webpage with Scrapy

I am scraping the following webpage, http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth , and I need to get the card name, price, stock, and condition. I have three of the four working, but I am having trouble with the condition. No matter what I try, the selector either gives me NULL or something else that is not right.

Partial HTML code

<td class="deckdbbody search_results_7">
<a href="http://www.starcitygames.com/content/cardconditions">NM/M</a>
</td>

SplashSpider.py

import csv
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import GameItem

# process the csv file so the url + ip address + useragent pairs are the same as defined in the file
# returns a list of dictionaries, example:
# [ {'url': 'http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan',
#    'ip': 'http://204.152.114.244:8050',
#    'ua': "Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11"},
#    ...
# ]
def process_csv(csv_file):
    data = []
    reader = csv.reader(csv_file)
    next(reader)
    for fields in reader:
        if fields[0] != "":
            url = fields[0]
        else:
            continue # skip the whole row if the url column is empty
        if fields[1] != "":
            ip = "http://" + fields[1] + ":8050" # adding http and port because this is the needed scheme
        if fields[2] != "":
            useragent = fields[2]
        data.append({"url": url, "ip": ip, "ua": useragent})
    return data
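
# For reference, a hypothetical CSV this function would accept; the header
# row is skipped by next(reader), and the columns map to fields[0], fields[1],
# and fields[2]:
#   url,ip,useragent
#   http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan,204.152.114.244,"Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11"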


class MySpider(Spider):
    name = 'splash_spider'  # Name of Spider

    # notice that we don't need to define start_urls
    # just make sure to get all the urls you want to scrape inside start_requests function

    # getting all the url + ip address + useragent pairs then request them
    def start_requests(self):

        # get the file path of the csv file that contains the pairs from the settings.py
        with open(self.settings["PROXY_CSV_FILE"], mode="r") as csv_file:
            # requests is a list of dictionaries like this -> {url: str, ua: str, ip: str}
            requests = process_csv(csv_file)

        for req in requests:
            # no need to create custom middlewares
            # just pass useragent using the headers param, and pass proxy using the meta param

            yield SplashRequest(url=req["url"], callback=self.parse, args={"wait": 3},
                    headers={"User-Agent": req["ua"]},
                    splash_url=req["ip"],
                    )
    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr[class^=deckdbbody]"):
            # Card Name
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            item["condition"] = game.css("a::text").extract_first() #Problem is here

            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()

            yield item

I think with this selector you are not getting the correct <a> element. Your condition CSS says to get the first <a> inside tr[class^=deckdbbody] , but the condition column's link is not the first <a> element in tr[class^=deckdbbody] .

In order to select the correct element, you can use XPath contains() to test whether it is the desired link:

>>> response.css("tr[class^=deckdbbody]").xpath(".//a[contains(@href, 'cardconditions')]/text()").extract()
['NM/M', 'PL', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'NM/M', 'PL', 'NM/M', 'NM/M', 'NM/M']
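
Applied to the spider, only the condition line needs to change. A sketch of parse with that XPath swapped in (I also moved item = GameItem() inside the loop so each row yields a fresh item):

    # Scraping
    def parse(self, response):
        for game in response.css("tr[class^=deckdbbody]"):
            item = GameItem()  # create a fresh item per row
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            # pick only the link whose href points at the card-conditions page
            item["condition"] = game.xpath(".//a[contains(@href, 'cardconditions')]/text()").extract_first()
            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()
            yield item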

Furthermore, I don't think you need Scrapy Splash to scrape this site; the data appears to be available with plain Scrapy, as a scrapy shell session shows.
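
You can verify that quickly from the command line before dropping Splash; if the rows are present in the static HTML, no JavaScript rendering is needed. A quick check, assuming the site does not block plain requests:

scrapy shell "http://www.starcitygames.com/catalog/category/Duel%20Decks%20Venser%20vs%20Koth"
>>> len(response.css("tr[class^=deckdbbody]"))   # non-zero means the rows are in the raw response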

Also, it's worth taking a look at https://stackoverflow.com/help/minimal-reproducible-example

You need to specify the target cell in your CSS expression:

item["condition"] = game.css("td[class^=deckdbbody].search_results_7 a::text").get()
