
How can I navigate page by page by clicking the next button?

I want to navigate a website with a lot of pages, and I try to click the next button to pass to the next page. The website is https://www.truity.com/search-careers and my code is:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
import time

path = 'C:/Users/.../chromedriver'
driver = webdriver.Chrome(path)
driver.get("https://www.truity.com/search-careers")

title = []
while True:
    elements = driver.find_elements_by_xpath('//*[@id="block-system-main"]/div/div[3]/div/table/tbody//a')

    links = []
    for i in range(len(elements)):
        links.append(elements[i].get_attribute('href'))

    for link in links:
        print('navigating to: ' + link)
        driver.get(link)
        # Title
        title.append(driver.title)
        #....

        driver.back()
        
    try:
        driver.find_element_by_xpath('//*[@id="block-system-main"]/div/div[4]/ul/li[11]/a').click()
    except NoSuchElementException:
        break

But my code is not correct. Can you help me? Thanks!

This works for me:

driver.find_element_by_xpath("//*[@id='block-system-main']/div/div[4]/ul/li[11]/a").click()

Alternative:

for i in range(1000):
    try:
        # Click the pager link whose title attribute is "Go to page N".
        driver.find_element_by_xpath(f"//a[@title='Go to page {i+1}']").click()
    except NoSuchElementException:
        print('No more pages')
        break

Use driver.find_element_by_link_text():

page_num = 1

while True:

    # insert the code to scrape this page
    # .....

    print(f'On page {page_num}')

    # moving to next page
    page_num += 1
    try:
        driver.find_element_by_link_text(str(page_num)).click()
    except NoSuchElementException:
        print('End of pages')
        break
    time.sleep(3)
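
Note that the find_element_by_* helpers were removed in newer Selenium 4 releases. If you are on a recent version, the equivalent call is (a minimal sketch using By, which is already imported in the question):

# Selenium 4 spelling of find_element_by_link_text(str(page_num))
driver.find_element(By.LINK_TEXT, str(page_num)).click()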

Output

[screenshot of the console output]

To simply loop through all the links and the 17 pages: wait for all the a href elements to be present and grab their href values. Add a time.sleep() in case of a stale element error, and also wait for the next tag to be clickable. driver.back() would be an extra step, since you just need all the href values and then driver.get() to each of them.

wait = WebDriverWait(driver, 5)
driver.get("https://www.truity.com/search-careers")
title=[]
links=[]
while True:
    elements = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//*[@id='block-system-main']/div/div[3]/div/table/tbody//a")))
    for elem in elements:
        links.append(elem.get_attribute('href'))
    try:
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"li.next > a"))).click()
    except (NoSuchElementException,TimeoutException) as e:
        break
    time.sleep(1)
    
for link in links:
    print('navigating to: ' + link)
    driver.get(link)
    # Title
    title.append(driver.title)
    #....

Imports

from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait 
from selenium.webdriver.support import expected_conditions as EC
import time
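
As a follow-up usage example (not part of the original answer; careers.csv is a hypothetical filename), the collected titles and links can be written out with the standard csv module:

import csv

# Persist (title, url) pairs; assumes `title` and `links` were filled
# in the same order by the loops above.
with open('careers.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['title', 'url'])
    writer.writerows(zip(title, links))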
