
How can I navigate page by page by clicking the next button?

I want to scrape a site that has many pages of results, clicking the next button to move to the next page. The site is: https://www.truity.com/search-careers and my code is:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

path = 'C:/Users/.../chromedriver'
driver = webdriver.Chrome(path)
driver.get("https://www.truity.com/search-careers")

while True:
    elements = driver.find_elements_by_xpath('//*[@id="block-system-main"]/div/div[3]/div/table/tbody//a')

    links = []
    for i in range(len(elements)):
        links.append(elements[i].get_attribute('href'))

    for link in links:
        print('navigating to: ' + link)
        driver.get(link)
        # Title
        title.append(driver.title)
        #....

        driver.back()
        
    try:
        driver.find_element_by_xpath('//*[@id="block-system-main"]/div/div[4]/ul/li[11]/a').click()
    except NoSuchElementException:
        break

But my code doesn't work. Can you help me? Thanks!

This worked for me:

driver.find_element_by_xpath("//*[@id='block-system-main']/div/div[4]/ul/li[11]/a").click()

Alternatively:

for i in range(1000):
    try:
        driver.find_element_by_xpath(f"//a[@title='Go to page {i+1}']").click()
    except NoSuchElementException:
        print('No more pages')
        break

Using driver.find_element_by_link_text():

page_num = 1

while True:

    #insert the code to scrape this page
    #.....
    #.....

    print(f'On page {page_num}')
    
    
    #moving to next page
    page_num+=1
    try:
        driver.find_element_by_link_text(str(page_num)).click()
    except NoSuchElementException:
        print('End of pages')
        break
    time.sleep(3)

Output

(screenshot of the console output omitted)
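The increment-until-missing pattern used in both snippets above can be sketched without a browser. Here `click_page`, `TOTAL_PAGES`, and the local exception class are hypothetical stand-ins for `driver.find_element_by_link_text(...).click()` and Selenium's `NoSuchElementException`, used only to show the control flow:

```python
# Sketch of "increment the page number until the link is missing".
# click_page and TOTAL_PAGES are made-up stubs, not Selenium APIs.

class NoSuchElementException(Exception):
    """Stand-in for selenium.common.exceptions.NoSuchElementException."""

TOTAL_PAGES = 17  # the truity.com listing has 17 pages of results

def click_page(page_num):
    # A real implementation would call:
    #   driver.find_element_by_link_text(str(page_num)).click()
    if page_num > TOTAL_PAGES:
        raise NoSuchElementException(f"no link with text {page_num!r}")

def scrape_all_pages():
    page_num = 1
    visited = []
    while True:
        visited.append(page_num)   # scrape the current page here
        page_num += 1
        try:
            click_page(page_num)   # move to the next page
        except NoSuchElementException:
            break                  # no "18" link -> this was the last page
    return visited

print(len(scrape_all_pages()))  # 17
```

The loop always scrapes the page it is on before trying to advance, so the last page is processed even though clicking past it raises.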

Simply loop through all the links across the 17 pages. Wait until all of the a-href elements are present and collect their href values. Add a time.sleep() to guard against stale-element errors, and wait for the next tab to be clickable. driver.back() would be an extra step, since all you need is every href value plus driver.get().

wait = WebDriverWait(driver, 5)
driver.get("https://www.truity.com/search-careers")
title=[]
links=[]
while True:
    elements = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//*[@id='block-system-main']/div/div[3]/div/table/tbody//a")))
    for elem in elements:
        links.append(elem.get_attribute('href'))
    try:
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.next > a"))).click()
    except (NoSuchElementException,TimeoutException) as e:
        break
    time.sleep(1)
    
for link in links:
    print('navigating to: ' + link)
    driver.get(link)
    # Title
    title.append(driver.title)
    #....
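The collect-first-then-visit flow above can likewise be sketched with stubbed page data. The `PAGES` dict is a made-up stand-in for the table rows on each results page; it illustrates why `driver.back()` is unnecessary once every href has been gathered up front:

```python
# Sketch of "collect every href first, then visit each with get()".
# PAGES is hypothetical data standing in for the <a> elements found
# in the results table on each page of truity.com/search-careers.

PAGES = {
    1: ["https://www.truity.com/career-profile/actor",
        "https://www.truity.com/career-profile/actuary"],
    2: ["https://www.truity.com/career-profile/acupuncturist"],
}

links = []
page = 1
while page in PAGES:           # stands in for clicking "li.next > a"
    links.extend(PAGES[page])  # get_attribute('href') on each <a>
    page += 1

# No driver.back() needed: each stored href is a full URL, so every
# profile page can simply be opened afterwards with driver.get(link).
for link in links:
    print("navigating to:", link)
```

Because navigation to the profile pages happens only after the pagination loop ends, the next-button element is never consulted again and cannot go stale mid-visit.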

Imports:

from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait 
from selenium.webdriver.support import expected_conditions as EC
from time import sleep
