
Python: limiting the search to a specific hyperlink on a webpage

I am looking for a way to download .pdf files through hyperlinks on a webpage.

As learned from How can i grab pdf links from website with Python script , the approach is:

import lxml.html, urllib2, urlparse

base_url = 'http://www.renderx.com/demos/examples.html'
res = urllib2.urlopen(base_url)
tree = lxml.html.fromstring(res.read())

ns = {'re': 'http://exslt.org/regular-expressions'}

for node in tree.xpath('//a[re:test(@href, "\.pdf$", "i")]', namespaces=ns):
    print urlparse.urljoin(base_url, node.attrib['href'])

The question is: how can I find only the .pdf files under a specific hyperlink, instead of listing all the .pdfs on the webpage?

One way is to limit the print to links containing certain words, like:

if 'CA-Personal.pdf' in node.attrib['href']:

But what if the .pdf file name changes? Or what if I just want to limit the search to the "Applications" hyperlink on the webpage? Thanks.
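One way to cope with a changing file name is to match a stable pattern instead of the exact name. A minimal sketch using only the standard library; the hrefs list here is invented for illustration (in practice it would come from the lxml loop above):

```python
import re

# Hypothetical hrefs, standing in for node.attrib['href'] values.
hrefs = ['intro.pdf', 'CA-Personal.pdf', 'CA-Business.pdf', 'guide.html']

# Instead of hard-coding one filename, match a stable prefix/pattern,
# so a renamed file like CA-Business.pdf is still caught.
pattern = re.compile(r'CA-.*\.pdf$', re.IGNORECASE)
matches = [h for h in hrefs if pattern.search(h)]
print(matches)  # ['CA-Personal.pdf', 'CA-Business.pdf']
```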

Well, not the best way, but there's no harm in doing it like this:

from bs4 import BeautifulSoup
import urllib2

domain = 'http://www.renderx.com'
url = 'http://www.renderx.com/demos/examples.html'

page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read(), 'html.parser')
# find the <a> tags whose link text is "Applications"
app = soup.find_all('a', text="Applications")

for aa in app:
    print domain + aa['href']
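If the goal is not the "Applications" link itself but only the .pdf links in the section it heads, another option is to walk forward from that anchor and stop at the next section. A rough sketch in Python 3 with BeautifulSoup; the `<h2>` markup below is an invented stand-in, not the real structure of examples.html, which would need to be inspected first:

```python
from bs4 import BeautifulSoup

# Toy page standing in for examples.html; the layout is an assumption.
html = """
<h2>Intros</h2>
<a href="intro.pdf">Intro</a>
<h2>Applications</h2>
<a href="CA-Personal.pdf">CA Personal</a>
<a href="CA-Business.pdf">CA Business</a>
<h2>Other</h2>
<a href="misc.pdf">Misc</a>
"""

soup = BeautifulSoup(html, 'html.parser')

# Start at the "Applications" heading and walk forward in document
# order, collecting .pdf links until the next heading begins.
heading = soup.find('h2', string='Applications')
pdfs = []
for tag in heading.find_all_next(['h2', 'a']):
    if tag.name == 'h2':        # next section reached -- stop
        break
    if tag.get('href', '').lower().endswith('.pdf'):
        pdfs.append(tag['href'])

print(pdfs)  # ['CA-Personal.pdf', 'CA-Business.pdf']
```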
