Basically, I want to get all of Mitt Romney's speeches from this link:
http://mittromneycentral.com/speeches/
I know how to use BeautifulSoup to get all the URLs from the page above:
    import urllib2
    import urlparse
    from pprint import pprint
    from BeautifulSoup import BeautifulSoup

    def mywebcrawl(url):
        urls = []
        htmltext = urllib2.urlopen(url).read()
        soup = BeautifulSoup(htmltext)
        for tag in soup.findAll('a', href=True):
            # resolve relative hrefs against the page URL
            tag['href'] = urlparse.urljoin(url, tag['href'])
            urls.append(tag['href'])
        pprint(urls)
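The crawler above collects every link on the page, including navigation and other non-speech URLs. One way to narrow the list down to speech pages is to filter on the URL path. A minimal Python 3 sketch of that filtering step, assuming the crawled URLs are already absolute (the function name filter_speech_urls and the sample URLs are illustrative):

```python
from urllib.parse import urlparse

def filter_speech_urls(urls, prefix="/speeches/"):
    # keep only links whose path sits under /speeches/, dropping the
    # index page itself and any duplicates, preserving order
    seen = []
    for u in urls:
        path = urlparse(u).path
        if path.startswith(prefix) and path != prefix and u not in seen:
            seen.append(u)
    return seen

crawled = [
    "http://mittromneycentral.com/speeches/2006-speeches/092206-values-voters-summit-2006",
    "http://mittromneycentral.com/about/",
    "http://mittromneycentral.com/speeches/2006-speeches/092206-values-voters-summit-2006",
]
print(filter_speech_urls(crawled))
```

This keeps only the first URL: the /about/ link fails the prefix test and the duplicate is skipped.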
However, for each URL I cannot extract the speech itself (note that I want only the speech, no irrelevant stuff). I want to build a function that iterates through the list of URLs and extracts each speech. I have tried soup.find_all('table') and soup.find_all('font'), but I cannot get the desired results: they fail to extract the entire speech most of the time.
Here's the strategy I used: on each speech page, the speech text sits inside a <div class="entry-content">, in <p> tags that do not have a class attribute. The other <p> tags under that <div> do have a class attribute, so filtering out the classed <p> tags isolates the speech. Here is the code for getting the list of speeches and parsing out the speech from a speech's page:
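To make the class-filtering idea concrete before the full code, here is a small self-contained check. It uses the current bs4 package and an inline HTML snippet standing in for a real speech page (the snippet and the "share-buttons" class name are illustrative, not taken from the actual site):

```python
from bs4 import BeautifulSoup

html = """
<div class="entry-content">
<p class="share-buttons">Share this</p>
<p>Thank you all for coming today.</p>
<p>It is an honor to speak here.</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
content_div = soup.find("div", class_="entry-content")
# <p> tags with no class attribute hold the actual speech text
paragraphs = [p.get_text() for p in content_div.find_all("p")
              if not p.has_attr("class")]
print(paragraphs)
```

Only the two classless paragraphs survive; the "Share this" widget text is dropped.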
    from BeautifulSoup import BeautifulSoup as BS

    def get_list_of_speeches(html):
        soup = BS(html)
        content_div = soup.findAll('div', {"class": "entry-content"})[0]
        speech_links = content_div.findAll('a')
        speeches = []
        for speech in speech_links:
            title = speech.text.encode('utf-8')
            link = speech['href']
            speeches.append((title, link))
        return speeches

    # speeches.htm is http://mittromneycentral.com/speeches/
    speech_html = open('speeches.htm').read()
    speeches = get_list_of_speeches(speech_html)
    def get_speech_text(html):
        soup = BS(html)
        content_div = soup.findAll('div', {"class": "entry-content"})[0]
        # keep only the <p> tags with no class attribute -- those hold the speech
        content = content_div.findAll('p', {"class": None})
        speech = ''
        for paragraph in content:
            speech += paragraph.text.encode('utf-8') + '\n'
        return speech

    # file1.htm is http://mittromneycentral.com/speeches/2006-speeches/092206-values-voters-summit-2006
    html = open('file1.htm').read()
    print get_speech_text(html)
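To answer the original question of iterating over the list of URLs, the same extraction idea can be wrapped in a loop. A hedged Python 3 / bs4 sketch: the names extract_speech and scrape_speeches are illustrative, and the fetch callable is injected so the logic can be exercised without network access (a real run would pass something like lambda url: urllib.request.urlopen(url).read()):

```python
from bs4 import BeautifulSoup

def extract_speech(html):
    # same idea as get_speech_text: classless <p> tags inside entry-content
    soup = BeautifulSoup(html, "html.parser")
    div = soup.find("div", class_="entry-content")
    parts = [p.get_text() for p in div.find_all("p")
             if not p.has_attr("class")]
    return "\n".join(parts)

def scrape_speeches(urls, fetch):
    # fetch is any callable mapping url -> html string
    return {url: extract_speech(fetch(url)) for url in urls}

# illustrative stand-in for real speech pages
pages = {
    "http://example.com/speech1": '<div class="entry-content">'
        '<p class="meta">posted 2006</p><p>Good evening.</p></div>',
}
result = scrape_speeches(list(pages), pages.get)
print(result["http://example.com/speech1"])
```

Injecting the fetcher also makes it easy to add politeness later (caching, rate limiting) without touching the parsing code.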