Scraping Headlines From News Website Homepages Using BeautifulSoup in Python
Scraping a news aggregator website by clicking on the "more news" button using Selenium
I want to scrape news headlines from this link: https://www.newsnow.co.uk/h/Business+&+Finance?type=ln
I want to expand the news list by clicking (with Selenium) the "view more headlines" button, so that I can collect as many headlines as possible.
I wrote this code, but it fails to click the button and expand the news:
import time
from selenium import webdriver
u = 'https://www.newsnow.co.uk/h/Business+&+Finance?type=ln'
driver = webdriver.Chrome(executable_path=r"C:\chromedriver.exe")
driver.get(u)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
driver.implicitly_wait(60) # seconds
elem = driver.find_element_by_css_selector('span:contains("view more headlines")')
for i in range(10):
    elem.click()
    time.sleep(5)
    print(f'click {i} done')
This returns: selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified
I also tried an XPath selector:
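The InvalidSelectorException is expected here: :contains() is a jQuery extension, not part of the CSS Selectors standard, so Selenium's CSS engine rejects it. The XPath equivalent of matching by visible text is something like //span[contains(text(), "view more headlines")]. As a browser-free illustration of the idea of locating an element by its text content, here is a minimal sketch using only the stdlib html.parser; the markup below is made up for the example, not the real NewsNow page:

```python
from html.parser import HTMLParser

# Hypothetical stand-in markup for a "more headlines" button.
SAMPLE = '<div><a href="#" class="js-button-more"><span>More Business + Finance headlines</span></a></div>'

class TextFinder(HTMLParser):
    """Find the tag whose text content contains a given needle,
    mimicking what XPath's contains(text(), ...) does."""
    def __init__(self, needle):
        super().__init__()
        self.needle = needle
        self.stack = []          # tags currently open
        self.matched_tag = None  # first tag whose text matched

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

    def handle_data(self, data):
        if self.matched_tag is None and self.needle in data and self.stack:
            self.matched_tag = self.stack[-1]

finder = TextFinder("More")
finder.feed(SAMPLE)
print(finder.matched_tag)  # -> span
```

In Selenium the same text-based lookup would be an XPath locator rather than a CSS one, e.g. driver.find_element(By.XPATH, '//span[contains(text(), "view more headlines")]').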
elem = driver.find_element_by_xpath('//*[@id="nn_container"]/div[2]/main/div[2]/div/div/div[3]/div/a')
This returns: selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <a class="rs-button-more js-button-more btn--primary btn--primary--no-spacing" href="#">...</a> is not clickable at point (353, 551). Other element would receive the click: <div class="alerts-scroller">...</div>
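A common fallback for ElementClickInterceptedException is to retry the click through JavaScript, since execute_script("arguments[0].click();", element) bypasses the browser's hit-testing and so ignores any overlay sitting on top of the button. The sketch below shows the pattern with fake stand-in classes so it runs without a browser; in real code the exception comes from selenium.common.exceptions and the driver is a real WebDriver:

```python
# Stand-in for selenium.common.exceptions.ElementClickInterceptedException.
class ElementClickInterceptedException(Exception):
    pass

class FakeElement:
    """Simulates a button hidden behind an overlay: native clicks fail."""
    def click(self):
        raise ElementClickInterceptedException("overlay would receive the click")

class FakeDriver:
    """Records the scripts a real WebDriver would execute."""
    def __init__(self):
        self.scripts = []
    def execute_script(self, script, *args):
        self.scripts.append(script)

def safe_click(driver, element):
    """Try a native click; fall back to a JavaScript click if intercepted."""
    try:
        element.click()
    except ElementClickInterceptedException:
        # JS clicks skip hit-testing, so the covering element no longer matters.
        driver.execute_script("arguments[0].click();", element)

driver = FakeDriver()
safe_click(driver, FakeElement())
print(driver.scripts)  # -> ['arguments[0].click();']
```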
The button gets covered by an overlaying element after it is clicked, so after the first click we use JavaScript to scroll it back into view. Here is the working program:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
u = 'https://www.newsnow.co.uk/h/Business+&+Finance?type=ln'
driver = webdriver.Chrome(executable_path=r"C:\bin\chromedriver.exe")
driver.maximize_window()
driver.get(u)
time.sleep(10)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
for i in range(10):
    element = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CLASS_NAME, 'btn--primary__label')))
    driver.execute_script("arguments[0].scrollIntoView();", element)
    element.click()
    time.sleep(5)
    print(f'click {i} done')
This is the correct XPath:
driver.find_element_by_xpath(r'//*[@id="nn_container"]/div[2]/main/div[2]/div/div/div[3]/div/a').click()
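The loop above only expands the list; the headlines themselves still have to be collected, e.g. by handing driver.page_source to a parser once the clicks are done. A stdlib-only sketch of that step is below; the article-card__headline class name and the markup are assumptions for illustration, not verified NewsNow selectors:

```python
from html.parser import HTMLParser

# Hypothetical markup standing in for the expanded page source.
PAGE = """
<div class="hl__inner"><a class="article-card__headline" href="/a/1">Markets rally</a></div>
<div class="hl__inner"><a class="article-card__headline" href="/a/2">Rates held</a></div>
"""

class HeadlineParser(HTMLParser):
    """Collect the text of every anchor carrying the headline class."""
    def __init__(self):
        super().__init__()
        self.in_headline = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and ("class", "article-card__headline") in attrs:
            self.in_headline = True

    def handle_data(self, data):
        if self.in_headline and data.strip():
            self.headlines.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_headline = False

parser = HeadlineParser()
parser.feed(PAGE)
print(parser.headlines)  # -> ['Markets rally', 'Rates held']
```

In the real script you would call parser.feed(driver.page_source) after the click loop, or use BeautifulSoup (as the title suggests) for the same extraction.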