BeautifulSoup: how to get all article links from this link?
I want to get all the article links from "https://www.cnnindonesia.com/search?query=covid". Here is my code:
links = []
base_url = requests.get(f"https://www.cnnindonesia.com/search?query=covid")
soup = bs(base_url.text, 'html.parser')
cont = soup.find_all('div', class_='container')
for l in cont:
    l_cont = l.find_all('div', class_='l_content')
    for bf in l_cont:
        bf_cont = bf.find_all('div', class_='box feed')
        for lm in bf_cont:
            lm_cont = lm.find('div', class_='list media_rows middle')
            for article in lm_cont.find_all('article'):
                a_cont = article.find('a', href=True)
                if url:
                    link = a['href']
                    links.append(link)
The result is:
links
[]
Each article has the following structure:
<article class="col_4">
    <a href="https://www.cnnindonesia.com/...">
        <span>...</span>
        <h2 class="title">...</h2>
    </a>
</article>
Simply iterate over the article elements and find the a element inside each one.
Try:
from bs4 import BeautifulSoup
import requests

links = []
response = requests.get(f"https://www.cnnindonesia.com/search?query=covid")
soup = BeautifulSoup(response.text, 'html.parser')
for article in soup.find_all('article'):
    url = article.find('a', href=True)
    if url:
        link = url['href']
        print(link)
        links.append(link)
print(links)
Output:
https://www.cnnindonesia.com/nasional/...pola-sawah-di-laut-natuna-utara
...
['https://www.cnnindonesia.com/nasional/...pola-sawah-di-laut-natuna-utara', ...
'https://www.cnnindonesia.com/gaya-hidup/...ikut-penerbangan-gravitasi-nol']
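The same extraction can also be written as a single CSS selector instead of nested find_all calls. A minimal sketch against a stand-in HTML fragment (the article URLs below are made up for illustration, not real search results):

```python
from bs4 import BeautifulSoup

# Stand-in for response.text; mimics the article structure shown above.
html = """
<article class="col_4">
  <a href="https://www.cnnindonesia.com/nasional/example-1">
    <span>...</span>
    <h2 class="title">Title 1</h2>
  </a>
</article>
<article class="col_4">
  <a href="https://www.cnnindonesia.com/gaya-hidup/example-2">
    <h2 class="title">Title 2</h2>
  </a>
</article>
"""

soup = BeautifulSoup(html, "html.parser")
# 'article a[href]' matches every <a> with an href inside an <article>.
links = [a["href"] for a in soup.select("article a[href]")]
print(links)
```

select() pushes the traversal into one selector, which avoids the variable-name slips (url vs. a_cont) that made the original nested loops return nothing.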
Update:
If you want to extract the URLs that are added dynamically by JavaScript inside the <div class="list media_rows middle"> element, you will have to use something like Selenium, which can extract the content after the whole page has been rendered in a web browser.
from selenium import webdriver
from selenium.webdriver.common.by import By

url = 'https://www.cnnindonesia.com/search?query=covid'
links = []

options = webdriver.ChromeOptions()
pathToChromeDriver = "chromedriver.exe"
browser = webdriver.Chrome(executable_path=pathToChromeDriver,
                           options=options)
try:
    browser.get(url)
    browser.implicitly_wait(10)
    html = browser.page_source

    content = browser.find_element(By.CLASS_NAME, 'media_rows')
    for elt in content.find_elements(By.TAG_NAME, 'article'):
        link = elt.find_element(By.TAG_NAME, 'a')
        href = link.get_attribute('href')
        if href:
            print(href)
            links.append(href)
finally:
    browser.quit()
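Once Selenium has rendered the page, you can also hand browser.page_source back to BeautifulSoup and reuse the same parsing logic as before. A sketch, where the HTML string stands in for the rendered source (the example URL is illustrative):

```python
from bs4 import BeautifulSoup

# In the Selenium flow this would be: rendered = browser.page_source
rendered = """
<div class="list media_rows middle">
  <article><a href="https://www.cnnindonesia.com/nasional/example">...</a></article>
</div>
"""

soup = BeautifulSoup(rendered, "html.parser")
# Matching on the full class string, as in the question's code.
container = soup.find("div", class_="list media_rows middle")
links = [a["href"] for a in container.find_all("a", href=True)]
print(links)
```

This keeps Selenium's job limited to rendering, while BeautifulSoup does the extraction.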
Sorry, I don't have enough reputation to add a comment.
I think in this line: for url in lm_row_cont.find_all('a'):
the tag argument should be 'a' (the <a> tag).
Alternatively, you could use a regular expression on the scraped div (skipping the steps above) to match the relevant items.
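As a rough sketch of the regex approach suggested above, using a deliberately simple pattern (for anything beyond a quick one-off, a proper parser like BeautifulSoup is the safer choice):

```python
import re

# Stand-in for the scraped div's HTML; the URL is illustrative.
html = ('<article class="col_4">'
        '<a href="https://www.cnnindonesia.com/nasional/example-1">x</a>'
        '</article>')

# Capture every double-quoted href value.
links = re.findall(r'href="([^"]+)"', html)
print(links)
```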