Unable to grab some fields from a webpage using requests
I'm trying to fetch the titles and the links of different containers from this webpage using the requests module, but I can't find any way to do that. I tried to find a hidden API of the kind that usually shows up in the dev tools, but I failed. I've noticed on several occasions that dynamically generated content is often available in some script tag; however, in this case I could not find the content there either. As a last resort I made use of Selenium to grab them.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

link = 'https://www.firmy.cz/kraj-praha?q=prodej+kol'

def get_content(url):
    driver.get(url)
    # Wait until all result containers are visible, then pull link + title from each
    for item in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '.companyDetail'))):
        item_link = item.find_element(By.CSS_SELECTOR, "h3 > a.companyTitle").get_attribute("href")
        item_title = item.find_element(By.CSS_SELECTOR, "span.title").text
        yield item_link, item_title

if __name__ == '__main__':
    with webdriver.Chrome() as driver:
        wait = WebDriverWait(driver, 10)
        for item in get_content(link):
            print(item)
The results the script produces look like:
('https://www.firmy.cz/detail/12824790-bike-gallery-s-r-o-praha-vokovice.html', 'Bike Gallery s.r.o.')
('https://www.firmy.cz/detail/13162651-bikeprodejna-cz-praha-dolni-chabry.html', 'BIKEPRODEJNA.CZ')
('https://www.firmy.cz/detail/406369-bikestore-cz-praha-podoli.html', 'Bikestore.cz')
('https://www.firmy.cz/detail/12764331-shopbike-cz-praha-ujezd-nad-lesy.html', 'Shopbike.cz')
How can I grab the same results using the requests module?
Having analysed the original page source, the solution appears to be very simple - you have to append an additional _escaped_fragment_= URL param to your link. For example, a simple Python script to get the required content can be as follows:
import requests
r = requests.get('https://www.firmy.cz/kraj-praha?q=prodej+kol&_escaped_fragment_=')
print(r.content)
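If you would rather not hand-concatenate the query string, the same URL can be assembled with requests' params argument; an empty value produces the bare _escaped_fragment_= parameter. This is just an equivalent sketch, shown with a prepared request so the final URL can be inspected before anything is sent:

```python
import requests

# Build the same request via a params dict instead of a hand-written
# query string; the empty value yields the bare `_escaped_fragment_=`.
req = requests.Request(
    'GET',
    'https://www.firmy.cz/kraj-praha',
    params={'q': 'prodej kol', '_escaped_fragment_': ''},
).prepare()

# Inspect the final URL before sending it (e.g. via requests.Session().send(req))
print(req.url)  # https://www.firmy.cz/kraj-praha?q=prodej+kol&_escaped_fragment_=
```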
The below Python script mimics your current implementation using requests and parses the received response:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base = 'https://www.firmy.cz'
link = 'https://www.firmy.cz/kraj-praha?q=prodej+kol&_escaped_fragment_='

def get_info(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "lxml")
    for item in soup.select(".companyDetail"):
        item_link = urljoin(base, item.select_one("h3 > a.companyTitle")['href'])
        item_title = item.select_one("span.title").get_text(strip=True)
        yield item_link, item_title

if __name__ == '__main__':
    for item in get_info(link):
        print(item)
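The urljoin call in the script above is a safety net: if the markup served under _escaped_fragment_= contains site-relative href values, they get resolved against the base, while already-absolute links pass through unchanged. A quick standalone illustration (the sample path is taken from the output shown earlier):

```python
from urllib.parse import urljoin

base = 'https://www.firmy.cz'

# A site-relative href is resolved against the base...
print(urljoin(base, '/detail/406369-bikestore-cz-praha-podoli.html'))
# -> https://www.firmy.cz/detail/406369-bikestore-cz-praha-podoli.html

# ...while an already-absolute href is returned unchanged.
print(urljoin(base, 'https://www.firmy.cz/detail/406369-bikestore-cz-praha-podoli.html'))
# -> https://www.firmy.cz/detail/406369-bikestore-cz-praha-podoli.html
```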
Prior to executing, make sure that you have installed the required libraries by running the following commands in cmd:
pip install bs4
pip install html5lib
pip install lxml