I'm trying to scrape data from this website: https://web.archive.org/web/20130725021041/http://www.usatoday.com/sports/nfl/injuries/
import requests
from bs4 import BeautifulSoup

page = requests.get('https://web.archive.org/web/20130725021041/http://www.usatoday.com/sports/nfl/injuries/')
soup = BeautifulSoup(page.text, 'html.parser')
soup.find_all('tbody')
soup.find_all('tbody') returns []. I'm not entirely sure why.
This is the tbody part I'm trying to scrape out:
<tbody><tr class="page"><td>
7/23/2013
</td><td>
Anthony Spencer
</td><td>
Cowboys
</td><td>
DE
</td><td>
Knee
</td><td>
Knee
</td><td>
Out
</td><td>
Is questionable for 9/8 against the NY Giants
</td></tr><tr class="page"><td>
7/22/2013
</td><td>
Tyrone Crawford
</td><td>
Cowboys
</td><td>
DE
</td><td>
Achilles-tendon
</td><td>
Achilles
</td><td>
Out
</td><td>
Is expected to be placed on injured reserve
</td></tr><tr class="page"><td>
7/16/2013
</td><td>
Ryan Broyles
</td><td>
Lions
</td><td>
WR
</td><td>
Knee
</td><td>
Knee
</td><td>
Questionable
</td><td>
Is questionable for 9/8 against Minnesota
</td></tr><tr class="page"><td>
7/2/2013
</td><td>
Jahvid Best
</td><td>
Lions
</td><td>
RB
</td><td>
Concussion
</td><td>
Concussion
</td><td>
Out
</td><td>
Is out indefinitely
</td></tr><tr class="page"><td>
7/2/2013
</td><td>
Jerel Worthy
</td><td>
Packers
</td><td>
DE
</td><td>
Knee
</td><td>
Knee
</td><td>
Out
</td><td>
Is out indefinitely
</td></tr><tr class="page"><td>
7/2/2013
</td><td>
JC Tretter
</td><td>
Packers
</td><td>
TO
</td><td>
Ankle
</td><td>
Ankle
</td><td>
Out
</td><td>
Is out indefinitely
</td></tr><tr class="page"><td>
</td></tr></tbody>
Could someone help me out and explain why the find_all on tbody returns an empty list? Even when I try tr with class="page", it returns an empty list.
The page's HTML appears to be malformed, and 'html.parser' drops those elements while trying to recover; the more lenient 'lxml' parser keeps them. Switch to using 'lxml' as the parser instead of 'html.parser'. I'd also just use pandas, to be honest.
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://web.archive.org/web/20130725021041/http://www.usatoday.com/sports/nfl/injuries/')
soup = bs(r.content, 'lxml')
print(len(soup.find_all('tbody')))
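As a self-contained sanity check (no network needed), you can parse one row of the snippet from the question and pull out the cell text. Note the wrapping `<table>` is mine, since the question only shows the `<tbody>`; a bare `<tbody>` fragment may be dropped by stricter parsers:

```python
from bs4 import BeautifulSoup

# One row of the markup from the question, wrapped in a <table>
# so the parser keeps the tbody/tr/td structure
html = """<table><tbody><tr class="page"><td>
7/23/2013
</td><td>
Anthony Spencer
</td><td>
Cowboys
</td><td>
DE
</td><td>
Knee
</td><td>
Knee
</td><td>
Out
</td><td>
Is questionable for 9/8 against the NY Giants
</td></tr></tbody></table>"""

soup = BeautifulSoup(html, 'lxml')

# One list of stripped cell strings per <tr class="page">
rows = [
    [td.get_text(strip=True) for td in tr.find_all('td')]
    for tr in soup.find_all('tr', class_='page')
]
print(rows)
# [['7/23/2013', 'Anthony Spencer', 'Cowboys', 'DE', 'Knee', 'Knee', 'Out',
#   'Is questionable for 9/8 against the NY Giants']]
```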
Or, more simply, for the table:
import pandas as pd
df = pd.read_html('https://web.archive.org/web/20130725021041/http://www.usatoday.com/sports/nfl/injuries/')[0]
print(df)
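Note that `pd.read_html` returns a list of DataFrames, one per `<table>` it finds, which is why the answer indexes `[0]`. Here's a self-contained illustration on one row of the snippet from the question; the column names are my own guesses from the data, not labels taken from the live page:

```python
import pandas as pd
from io import StringIO

# One row of the markup from the question, wrapped in <table>
# (the question only shows the <tbody>)
html = """<table><tbody><tr class="page">
<td>7/23/2013</td><td>Anthony Spencer</td><td>Cowboys</td><td>DE</td>
<td>Knee</td><td>Knee</td><td>Out</td>
<td>Is questionable for 9/8 against the NY Giants</td>
</tr></tbody></table>"""

# read_html returns a list of DataFrames, one per table
tables = pd.read_html(StringIO(html))
df = tables[0]

# The markup has no header row, so name the columns ourselves
# (names guessed from the data -- the real page may label them differently)
df.columns = ['Date', 'Player', 'Team', 'Pos', 'Injury', 'Area', 'Status', 'Note']
print(df)
```

Passing the markup via `StringIO` keeps this compatible with newer pandas versions, which deprecate passing literal HTML strings directly.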