
Parsing HTML table using Beautiful Soup

I wrote this code to print the table seen here: http://www.medindia.net/drug-price/list.asp

import mechanize
import urllib2
from bs4 import BeautifulSoup

med="paracetamol"
br=mechanize.Browser()
br.set_handle_robots(False)
res=br.open("http://www.medindia.net/drug-price/")
br.select_form("frmdruginfo_search")
br.form['druginfosearch']=med
br.submit()
url=br.response().geturl()
print url
web_page = urllib2.urlopen(url)
soup = BeautifulSoup(web_page)
tabl=soup.find_all('table')
rows=tabl.find_all('tr')

for tr in rows:
        cols=tr.find_all('td')
        for td in cols:
              text = ''.join(td.find(text=True))
              print text+"|",

But when I execute it I get this error:

 rows=tabl.find_all('tr')
    AttributeError: 'list' object has no attribute 'find_all'

Can anyone please help me solve this? Thanks!

soup.find_all('table') returns a list of matched tables, but you only need one of them, so use find():

tabl = soup.find('table', {'class': 'content-table'})
rows = tabl.find_all('tr')

Also note that I'm explicitly asking for the table with a specific class, so the right one is selected.
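
To make the difference concrete, here is a minimal sketch (assuming the same soup object as above) of the two ways to get hold of a single table: either index into the list that find_all() returns, or let find() give you the first match directly:

# find_all() returns a list of Tag objects
tables = soup.find_all('table')
first_table = tables[0]            # index into the list to get one Tag

# find() returns the first matching Tag directly (or None if nothing matches)
same_table = soup.find('table')

# either Tag can then be searched further
rows = first_table.find_all('tr')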

Also, you don't need to make a separate urllib2 call to the page: just use br.response().read() to get the actual HTML for BeautifulSoup to parse.
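
Putting the two fixes together, a minimal correction of the original script could look like the sketch below; it keeps the pipe-separated print output from the question and assumes the same form fields and the same content-table class:

import mechanize
from bs4 import BeautifulSoup

med = "paracetamol"
br = mechanize.Browser()
br.set_handle_robots(False)
br.open("http://www.medindia.net/drug-price/")
br.select_form("frmdruginfo_search")
br.form['druginfosearch'] = med
br.submit()

# parse the submitted page straight from mechanize, no extra urllib2 call
soup = BeautifulSoup(br.response().read())

# find() returns a single Tag, so find_all('tr') works on it
tabl = soup.find('table', {'class': 'content-table'})
for tr in tabl.find_all('tr'):
    for td in tr.find_all('td'):
        print td.get_text(strip=True) + "|",
    print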

Just FYI, if you want better-formatted table output on the console, consider using texttable:

import mechanize
from bs4 import BeautifulSoup
import texttable


med = raw_input("Enter the drugname: ")
br = mechanize.Browser()
br.set_handle_robots(False)
res = br.open("http://www.medindia.net/drug-price/")
br.select_form("frmdruginfo_search")
br.form['druginfosearch'] = med
br.submit()

# parse the page returned by the form submission directly
soup = BeautifulSoup(br.response().read())

# pick the results table and feed each row's cells into texttable
tabl = soup.find('table', {'class': 'content-table'})
table = texttable.Texttable()
for tr in tabl.find_all('tr'):
    table.add_row([td.text.strip() for td in tr.find_all('td')])

print table.draw()

prints:

+--------------+--------------+--------------+--------------+--------------+
| SNo          | Prescribing  | Total No of  | Single       | Combination  |
|              | Information  | Brands       |     Generic  |     of       |
|              |              | (Single+Comb |              | Generic(s)   |
|              |              | ination)     |              |              |
+--------------+--------------+--------------+--------------+--------------+
| 1            | Abacavir     | 6            | View Price   | -            |
+--------------+--------------+--------------+--------------+--------------+
| 2            | Abciximab    | 1            | View Price   | -            |
+--------------+--------------+--------------+--------------+--------------+
| 3            | Acamprosate  | 3            | View Price   | -            |
+--------------+--------------+--------------+--------------+--------------+
| 4            | Acarbose     | 41           | View Price   | -            |
+--------------+--------------+--------------+--------------+--------------+
...
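
If the default column widths wrap the header text too aggressively, texttable also lets you set them explicitly with set_cols_width() before adding rows; the widths below are only illustrative, adjust them to taste:

table = texttable.Texttable()
# one width per column of the results table (illustrative values)
table.set_cols_width([4, 25, 15, 12, 15])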
