
beautifulsoup and invalid html document

I am trying to parse the document http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm and I would like to get the countries and the names given at the beginning of the document.

Here is my code:

import urllib
import re
from bs4 import BeautifulSoup
url="http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
soup=BeautifulSoup(urllib.urlopen(url))
attendances_table=soup.find("table", {"width":850})
print attendances_table #this works, I see the whole table
print attendances_table.find_all("tr")

I get the following error:

AttributeError: 'NoneType' object has no attribute 'next_element'

Then I tried to use the same solution as in this post (I know, me again :p): beautifulsoup with invalid html document

I replaced the line:

soup=BeautifulSoup(urllib.urlopen(url))

with:

return BeautifulSoup(html, 'html.parser')

If I then do:

print attendances_table

I only get:

<table border="0" cellpadding="10" cellspacing="0" width="850">
<tr><td valign="TOP" width="42%">
<p><b><u>Belgium</u></b></p></td></tr></table>

What should I change?

Use html5lib as the parser; it is very lenient:

soup = BeautifulSoup(urllib.urlopen(url), 'html5lib')

You will also need to install the html5lib module first.

Demo:

>>> from bs4 import BeautifulSoup
>>> import urllib
>>> url = "http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
>>> soup = BeautifulSoup(urllib.urlopen(url), 'html5lib')
>>> attendances_table = soup.find("table", {"width": 850})
>>> print attendances_table
<table border="0" cellpadding="10" cellspacing="0" width="850">
<tbody><tr><td valign="TOP" width="42%">
<p><b><u>Belgium</u></b>:</p>
<p>Mr Philippe MAYSTADT</p></td>
<td valign="TOP" width="58%">
<p>Deputy Prime Minister, Minister for Finance and Foreign Trade</p></td>
</tr>
...
<tr><td valign="TOP" width="42%">
<b><u></u></b><u></u><p><u><b>Portugal</b></u>:</p>
<p>Mr António de SOUSA FRANCO</p>
<p>Mr Fernando TEIXEIRA dos SANTOS</p></td>
<td valign="TOP" width="58%">
<p>Minister for Finance</p>
<p>State Secretary for the Treasury and Finance</p></td>
</tr>
</tbody></table>

A workaround to make find_all('tr') work:

>>> attendances_table = BeautifulSoup(str(attendances_table), 'html5lib')
>>> print attendances_table.find_all("tr")
[<tr><td valign="TOP" width="42%">
<p><b><u>Belgium</u></b>:</p>
<p>Mr Philippe MAYSTADT</p></td>
...
<tr><td valign="TOP" width="42%">
<b><u></u></b><u></u><p><u><b>Portugal</b></u>:</p>
<p>Mr António de SOUSA FRANCO</p>
<p>Mr Fernando TEIXEIRA dos SANTOS</p></td>
<td valign="TOP" width="58%">
<p>Minister for Finance</p>
<p>State Secretary for the Treasury and Finance</p></td>
</tr>]
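
From here, pulling out the countries and names the question asks for is just a matter of walking the rows. A rough sketch, assuming each row keeps the country heading and the attendees' names as separate <p> tags in the first cell (as in the output above):

for row in attendances_table.find_all("tr"):
    cells = row.find_all("td")
    if not cells:
        continue
    # first cell: country heading followed by the attendees' names
    paragraphs = [p.get_text(strip=True) for p in cells[0].find_all("p")]
    if paragraphs:
        country = paragraphs[0].rstrip(':')
        names = paragraphs[1:]
        print country, names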

Solved!

I just used another parser library, lxml. Thanks Martijn Pieters!

soup = BeautifulSoup(urllib.urlopen(url), 'lxml')

lxml is the only library that worked for me!
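
For completeness, a minimal sketch of the whole script using the lxml parser (assuming lxml is installed, e.g. with pip install lxml):

import urllib
from bs4 import BeautifulSoup

url = "http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
soup = BeautifulSoup(urllib.urlopen(url), 'lxml')  # lenient parser, like html5lib
attendances_table = soup.find("table", {"width": 850})
for row in attendances_table.find_all("tr"):  # now returns every row, not just the first
    print row.get_text(" ", strip=True)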
