Extracting href URL with Python Requests
I would like to extract the URL from an XPath using the requests package in Python. I can get the text, but nothing I try gives the URL. Can anyone help?
ipdb> webpage.xpath(xpath_url + '/text()')
['Text of the URL']
ipdb> webpage.xpath(xpath_url + '/a()')
*** lxml.etree.XPathEvalError: Invalid expression
ipdb> webpage.xpath(xpath_url + '/href()')
*** lxml.etree.XPathEvalError: Invalid expression
ipdb> webpage.xpath(xpath_url + '/url()')
*** lxml.etree.XPathEvalError: Invalid expression
I used this tutorial to get started: http://docs.python-guide.org/en/latest/scenarios/scrape/
It seems like it should be easy, but nothing comes up in my searching.
Thank you.
Have you tried webpage.xpath(xpath_url + '/@href')?
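To see why `/@href` works where `/a()` and `/href()` fail: in XPath, `@href` selects the attribute node, while `text()` selects the element's text children (`a()` and `href()` are not valid function calls, hence the `XPathEvalError`). A minimal offline sketch on an inline HTML string (the markup here is invented for illustration, standing in for the scraped page):

```python
from lxml import html

# Hypothetical markup standing in for the real page
doc = html.fromstring('<p><a href="http://example.com/page">Text of the URL</a></p>')

# text() selects the anchor's text content
print(doc.xpath('//a/text()'))  # ['Text of the URL']

# @href selects the attribute value -- this is the URL itself
print(doc.xpath('//a/@href'))   # ['http://example.com/page']
```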
Here is the full code:
from lxml import html
import requests
page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
webpage = html.fromstring(page.content)
webpage.xpath('//a/@href')
The result should be:
[
'http://econpy.pythonanywhere.com/ex/002.html',
'http://econpy.pythonanywhere.com/ex/003.html',
'http://econpy.pythonanywhere.com/ex/004.html',
'http://econpy.pythonanywhere.com/ex/005.html'
]
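If you need each link's text alongside its URL, iterating over the `<a>` elements avoids two separate xpath calls. A hedged sketch against an inline document (the markup is invented; with the real page you would parse `page.content` as above):

```python
from lxml import html

# Invented markup for illustration
doc = html.fromstring(
    '<div>'
    '<a href="http://econpy.pythonanywhere.com/ex/002.html">Next page</a>'
    '<a href="http://econpy.pythonanywhere.com/ex/003.html">Skip ahead</a>'
    '</div>'
)

# Each element object carries both its text and its attributes
pairs = [(a.text, a.get('href')) for a in doc.xpath('//a')]
print(pairs)
```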
You would be better served using BeautifulSoup:
from bs4 import BeautifulSoup
import requests

html = requests.get('http://testurl.com').text  # pass the HTML text, not the Response object
soup = BeautifulSoup(html, "lxml")  # lxml is just the parser for reading the html
soup.find_all('a', href=True)  # this is the line that does what you want
You can print that result, add it to a list, etc. To iterate through it, use:
links = soup.find_all('a', href=True)
for link in links:
    print(link['href'])
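Note that `find_all('a href')` would search for a tag literally named `a href`, which matches nothing; `find_all('a', href=True)` returns only the anchors that actually carry an `href` attribute. A self-contained sketch on an inline string (no network needed; the markup is invented):

```python
from bs4 import BeautifulSoup

# Invented markup: one anchor with an href, one without
html_doc = '<p><a href="http://example.com/a">A</a><a name="anchor">no href</a></p>'
soup = BeautifulSoup(html_doc, 'lxml')

# href=True filters to <a> tags that have the attribute
links = [a['href'] for a in soup.find_all('a', href=True)]
print(links)  # ['http://example.com/a']
```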
With the benefits of a context manager:
import requests
import requests_html

with requests_html.HTMLSession() as s:
    try:
        r = s.get('http://econpy.pythonanywhere.com/ex/001.html')
        links = r.html.links  # set of all links found on the page
        for link in links:
            print(link)
    except requests.exceptions.RequestException:
        pass
You can do it easily with Selenium:
link = webpage.find_element_by_xpath(xpath_url)  # xpath to the element with the link
url = link.get_attribute('href')
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('https://www.***.com')
r.html.links