
How to get data from a webpage using Python

Last year I wrote a Python script to store data on COVID-19 cases (active, cured, and deaths) from the website. The script ran fine initially, but after modifications to the page I now get only the first 2 rows, which are the headers, and nothing else. Earlier I was using the pandas.read_html() method, but it isn't able to grab all the data. I also tried the following, but these didn't help either:

  1. BeautifulSoup
  2. lxml.html

I also tried the code from here, but I still get the same issue. Any idea why this happens, and what other steps could I take?

Here is what I have tried so far:

  1. Using pandas

import pandas as pd

url = "https://www.mohfw.gov.in/"
df_list = pd.read_html(url)
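(For context: pandas.read_html() only sees tables that are present in the parsed DOM, so a table wrapped inside an HTML comment is invisible to it. A minimal offline sketch, using an inline HTML string rather than the live site:)

```python
from io import StringIO

import pandas as pd

# a visible table plus a data table hidden inside an HTML comment,
# mimicking the structure of the page in question
html = """
<table><tr><th>Name of State / UT</th></tr><tr><td>Total</td></tr></table>
<!-- <table><tr><td>Andhra Pradesh</td></tr></table> -->
"""

dfs = pd.read_html(StringIO(html))
print(len(dfs))  # 1 -- only the visible table is found
```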

  2. Using lxml.html

>>> import requests
>>> page = requests.get(url)
>>> import lxml.html as lh
>>> doc = lh.fromstring(page.content)
>>> tbody_elements = doc.xpath('//tbody') # table is under `<tbody>` tag but it's not able to get the data
>>> tbody_elements
[] # empty list here
>>> tr_elements = doc.xpath('//tr')
>>> tr_elements
[<Element tr at 0x7fb3f507d260>, <Element tr at 0x7fb3f507d2b8>, <Element tr at 0x7fb3f507d310>]
>>> len(tr_elements)
3
>>> r = 1
>>> for i in tr_elements:
...     print("Row - ", r)
...     for row in i:
...             print(row.text_content())
...     r = r + 1
... 

Output:

('Row - ', 1)

COVID-19 INDIA as on : 14 March 2021, 08:00 IST (GMT+5:30) [↑↓ Status change since yesterday]

('Row - ', 2)

S. No. Name of State / UT Active Cases* Cured/Discharged/Migrated* Deaths**

('Row - ', 3)

Total Change since yesterdayChange since yesterday Cumulative Change since yesterday Cumulative Change since yesterday
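The empty `//tbody` result above is consistent with the table living inside an HTML comment: lxml does not treat commented-out markup as elements, though the comment nodes themselves can be reached with the `comment()` XPath axis. A self-contained sketch (inline HTML standing in for the live page):

```python
import lxml.html as lh

# the data table is wrapped in a comment, as on the page in question
html = """
<html><body>
<table><tr><th>header row</th></tr></table>
<!-- <table><tr><td>data row</td></tr></table> -->
</body></html>
"""

doc = lh.fromstring(html)
print(len(doc.xpath('//tr')))      # 1 -- only the visible header row matches

# comment nodes can be selected explicitly; re-parse their text as HTML
comment = doc.xpath('//comment()')[0]
inner = lh.fromstring(comment.text)
print(inner.xpath('//td/text()'))  # ['data row'] -- the hidden data row
```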

  3. Using BeautifulSoup
>>> import requests
>>> from bs4 import BeautifulSoup
>>> url = 'https://www.mohfw.gov.in/'
>>> web_content = requests.get(url).content
>>> soup = BeautifulSoup(web_content, "html.parser")
>>> all_rows = soup.find_all('tr')
>>> all_rows
[<tr><h5>COVID-19 INDIA <span>as on : 15 March 2021, 08:00 IST (GMT+5:30)\t[\u2191\u2193 Status change since yesterday]</span></h5></tr>, <tr class="row1">\n<th rowspan="2" style="width:5%;"><strong>S. No.</strong></th>\n<th rowspan="2" style="width:24%;"><strong>Name of State / UT</strong></th>\n<th colspan="2" style="text-align:center;width:24%;"><strong>Active Cases*</strong></th>\n<th colspan="2" style="text-align:center;width:24%;"><strong>Cured/Discharged/Migrated*</strong></th>\n<th colspan="2" style="text-align:center;width:24%;"><strong>Deaths**</strong></th>\n</tr>, <tr class="row2"><th style="width: 12%;">Total</th><th style="width: 12%;"><span class="mob-hide">Change since yesterday</span><span class="mob-show">Change since<br/> yesterday</span></th>\n<th style="width: 12%;">Cumulative</th><th style="width: 12%;">Change since yesterday</th>\n<th style="width: 12%;">Cumulative</th><th style="width: 12%;">Change since yesterday</th></tr>]
>>> len(all_rows)
3 

With both BeautifulSoup and lxml.html, I am only getting the first two rows, which are actually the headers of the table.


It looks like they've commented out the whole table. The table is not visible in my browser either:

[screenshot of the HTML source, showing the table inside a comment]

You could use BeautifulSoup to find the comment node and decode its contents as more soup, for example:

from bs4 import BeautifulSoup, Comment
import requests

url = 'https://www.mohfw.gov.in/'
req = requests.get(url)
soup = BeautifulSoup(req.content, "html.parser")
# the visible rows are only headers; the data table sits in an HTML
# comment just after them
trs = soup.find_all('tr')
comment = trs[-1].find_next(string=lambda text: isinstance(text, Comment))
# re-parse the comment's contents as HTML
table_soup = BeautifulSoup(comment, "html.parser")

for tr in table_soup.find_all('tr'):
    print([td.text for td in tr.find_all('td')])

This would give you output starting with:

['1', 'Andaman and Nicobar Islands', '47', '133', '0']
['2', 'Andhra Pradesh', '18159', '19393', '492']
['3', 'Arunachal Pradesh', '387', '153', '3']
['4', 'Assam', '6818', '12888', '48']
['5', 'Bihar', '7549', '14018', '197']
['6', 'Chandigarh', '164', '476', '11']
['7', 'Chhattisgarh', '1260', '3451', '21']
['8', 'Dadra and Nagar Haveli and Daman and Diu', '179', '371', '2']
['9', 'Delhi', '17407', '97693', '3545']
['10', 'Goa', '1272', '1817', '19']
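The same comment-extraction idea in a fully self-contained form, with an inline HTML snippet standing in for the live page so the parsing logic can be verified end to end:

```python
from bs4 import BeautifulSoup, Comment

# visible header row followed by the real table inside a comment,
# mirroring the structure of the mohfw.gov.in page
html = """
<table><tr><th>S. No.</th><th>Name of State / UT</th></tr></table>
<!--
<table>
<tr><td>1</td><td>Andaman and Nicobar Islands</td></tr>
<tr><td>2</td><td>Andhra Pradesh</td></tr>
</table>
-->
"""

soup = BeautifulSoup(html, "html.parser")
# locate the comment node and re-parse its contents as HTML
comment = soup.find(string=lambda text: isinstance(text, Comment))
table_soup = BeautifulSoup(comment, "html.parser")

for tr in table_soup.find_all('tr'):
    print([td.text for td in tr.find_all('td')])
# ['1', 'Andaman and Nicobar Islands']
# ['2', 'Andhra Pradesh']
```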
