
How to scrape table data from a website that is slow to load

I am trying to scrape table data from the following website: https://fantasyfootball.telegraph.co.uk/premier-league/statscentre/

The goal is to get all the player data and store it in a dictionary.

I am using BeautifulSoup and I am able to find the table in the HTML content, but the table body that comes back is empty.

From reading other posts I gather this may be related to how the site loads the table data after the page itself has loaded, but I cannot find a way around the problem.

My code is as follows:

from bs4 import BeautifulSoup
import requests

url = "https://fantasyfootball.telegraph.co.uk/premier-league/statscentre/"

# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text

# Parse the html content
soup = BeautifulSoup(html_content, "lxml")

# Find the Title Data within the website
player_table = soup.find("table", attrs={"class": "player-profile-content"})

print(player_table)

The result I get looks like this:

<table class="playerrow playlist" id="table-players">
    <thead>
        <tr class="table-head"></tr>
    </thead>
    <tbody></tbody>
</table>

The actual HTML on the site is quite long, because they pack a lot of data into every <tr> and the <td> elements inside it, so I won't post it here unless someone asks. Suffice it to say that there are several <td> cells in the header row, as well as several <tr> rows in the body.

This script will print all the player stats (the data is loaded as JSON from an external URL):

import ssl
import json
import requests
from urllib3 import poolmanager

# workaround to avoid SSL errors:
class TLSAdapter(requests.adapters.HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        """Create and initialize the urllib3 PoolManager."""
        ctx = ssl.create_default_context()
        ctx.set_ciphers('DEFAULT@SECLEVEL=1')
        self.poolmanager = poolmanager.PoolManager(
                num_pools=connections,
                maxsize=maxsize,
                block=block,
                ssl_version=ssl.PROTOCOL_TLS,
                ssl_context=ctx)

url = 'https://fantasyfootball.telegraph.co.uk/premier-league/json/getstatsjson'

session = requests.session()
session.mount('https://', TLSAdapter())
data = session.get(url).json()

# uncomment this to print all data:
# print(json.dumps(data, indent=4))

for s in data['playerstats']:
    for k, v in s.items():
        print('{:<15} {}'.format(k, v))
    print('-'*80)

Prints:

SUSPENSION      None
WEEKPOINTS      0
TEAMCODE        MCY
SXI             34
PLAYERNAME      de Bruyne, K
FULLCLEAN       -
SUBS            3
TEAMNAME        Man City
MISSEDPEN       0
YELLOWCARD      3
CONCEED         -
INJURY          None
PLAYERFULLNAME  Kevin de Bruyne
RATIO           40.7
PICKED          36
VALUE           5.6
POINTS          228
PARTCLEAN       -
OWNGOAL         0
ASSISTS         30
GOALS           14
REDCARD         0
PENSAVE         -
PLAYERID        3001
POS             MID
--------------------------------------------------------------------------------

...and so on.
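Since the goal in the question was to store all the player data in a dictionary, the same JSON can be reshaped with one extra line. A minimal sketch, reusing the `data` variable from the script above; the field names `PLAYERID` and `PLAYERFULLNAME` are taken from the printed sample output, and the values in the live feed may of course change:

players_by_id = {s['PLAYERID']: s for s in data['playerstats']}      # one dict per player, keyed by PLAYERID (assumed unique)
players_by_name = {s['PLAYERFULLNAME']: s for s in data['playerstats']}  # e.g. players_by_name['Kevin de Bruyne']

print(len(players_by_id), 'players loaded')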

A simple solution is to monitor the network traffic and see how the data is exchanged. You would see that the data comes from a GET request to https://fantasyfootball.telegraph.co.uk/premier-league/json/getstatsjson. It is a beautiful JSON, so we do not need BeautifulSoup; requests alone does the job.

import requests
import pandas as pd

URI = 'https://fantasyfootball.telegraph.co.uk/premier-league/json/getstatsjson'
r = requests.get(URI)

data = r.json()
df = pd.DataFrame(data['playerstats'])

print(df.head())  # head() shows the first 5 rows

Result: (screenshot of the df.head() output)
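If the dictionary asked for in the question is still the end goal, the DataFrame can be converted back in one step. A small sketch, assuming the `df` built above and that `PLAYERID` is unique:

players = df.set_index('PLAYERID').to_dict('index')  # {PLAYERID: {column: value, ...}, ...}
records = df.to_dict('records')                      # or a plain list of per-player dictionaries
print(records[0]['PLAYERNAME'])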
