
How to skip columns when making a pandas DataFrame from bs4?

I am trying to scrape a table off of a website using Python and BeautifulSoup4. I then want to output the table, but I want to skip the first 5 columns of the table. Here is my code:

from urllib.request import urlopen

import pandas as pd
from bs4 import BeautifulSoup as bs

def scrape_data():
    url1 = "https://basketball-reference.com/leagues/NBA_2020_advanced.html"
    html1 = urlopen(url1)
    soup1 = bs(html1, 'html.parser')
    # Column names come from the <th> cells of the first row
    headers1 = [th.getText() for th in soup1.findAll('tr', limit=2)[0].findAll('th')]
    # Skip the first 5 column names
    headers1 = headers1[5:]
    # Every row after the header row; each <td> becomes one cell
    rows1 = soup1.findAll('tr')[1:]
    player_stats = [[td.getText() for td in row.findAll('td')] for row in rows1]
    stats1 = pd.DataFrame(player_stats, columns=headers1)
    return stats1

And the error I get is ValueError: 24 columns passed, passed data had 28 columns

I know the error is coming from stats1 = pd.DataFrame(player_stats, columns=headers1)

But how do I fix it?

Just use iloc on the resulting dataframe. Note that read_html returns a list of dataframes, although there is only one for this url, so you need to access that single dataframe via pd.read_html(url)[0]. Then use iloc to ignore the first five columns.

import pandas as pd

url = "https://basketball-reference.com/leagues/NBA_2020_advanced.html"
# read_html returns a list of DataFrames; [0] takes the only table,
# and iloc[:, 5:] drops the first five columns by position
df = pd.read_html(url)[0].iloc[:, 5:]
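To see what iloc[:, 5:] does without hitting the network, here is a minimal offline sketch on a made-up DataFrame (the column labels are illustrative, not from the scraped table):

```python
import pandas as pd

# iloc[:, 5:] keeps every row and drops the first five columns by
# position, regardless of their labels.
df = pd.DataFrame([range(8)], columns=list("ABCDEFGH"))
trimmed = df.iloc[:, 5:]
print(list(trimmed.columns))  # ['F', 'G', 'H']
```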

I solved it thanks to some help from @JonClements. My working code is:

from urllib.request import urlopen

import pandas as pd
from bs4 import BeautifulSoup as bs

def scrape_data():
    url1 = "https://basketball-reference.com/leagues/NBA_2020_advanced.html"
    html1 = urlopen(url1)
    soup1 = bs(html1, 'html.parser')
    # Column names come from the <th> cells of the first row
    headers1 = [th.getText() for th in soup1.findAll('tr', limit=2)[0].findAll('th')]
    # Skip the first 5 column names
    headers1 = headers1[5:]
    rows1 = soup1.findAll('tr')[1:]
    # Skip only 4 <td> cells per row: the first column of each data row
    # is a <th>, so findAll('td') already excludes it
    player_stats = [[td.getText() for td in row.findAll('td')[4:]] for row in rows1]
    stats1 = pd.DataFrame(player_stats, columns=headers1)
    return stats1
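The off-by-one between headers1[5:] and findAll('td')[4:] is the key to the fix: on this kind of table the first cell of every data row is a <th> (the rank), so each row has one fewer <td> than there are headers. A minimal sketch on a hand-written table fragment (the cell values are illustrative) shows the offset:

```python
from bs4 import BeautifulSoup

# The header row has 6 <th> cells, but each data row starts with a <th>
# rank cell, so find_all('td') returns only 5 cells per row. Dropping 5
# headers therefore pairs with dropping 4 <td>s.
html = """
<table>
  <tr><th>Rk</th><th>Player</th><th>Pos</th><th>Age</th><th>Tm</th><th>G</th></tr>
  <tr><th>1</th><td>Player A</td><td>C</td><td>26</td><td>OKC</td><td>63</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
headers = [th.get_text() for th in soup.find_all("tr")[0].find_all("th")]
row_tds = soup.find_all("tr")[1].find_all("td")
print(len(headers), len(row_tds))  # 6 headers, 5 <td> cells
```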
