
How do I iterate through each google search page using Selenium Python, but it's not happening

I am trying to iterate through each page, but the code below is not working for me.

pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
print len(pages)
counter=1
for page in pages:
     counter+=1
     page.click()

Your code will run successfully only the first time, i.e. it will click on the 2nd page and then it will throw a StaleElementReferenceException on this line -

page.click()

Now, why is that? It's because the page WebElement is nothing but a member of the pages list of elements which you identified before clicking. Since the DOM has changed after clicking the pagination button once, the reference to the element you located earlier no longer holds.
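To see the re-locate-on-staleness pattern in isolation, here is a minimal, framework-free sketch (the helper name and the generic exception handling are assumptions for illustration, not Selenium API): re-locate the element on every attempt, so a stale reference is simply replaced by a fresh lookup.

```python
def click_with_refresh(locate, retries=3, stale_errors=(Exception,)):
    """Click the element returned by locate(), re-locating on staleness.

    locate       -- zero-argument callable that finds and returns the element
    stale_errors -- exception types treated as "reference went stale"
    """
    for _ in range(retries):
        try:
            locate().click()
            return True
        except stale_errors:
            # The DOM changed under us; loop around and call locate() again,
            # which returns a fresh (non-stale) element reference.
            continue
    return False
```

With Selenium you would pass something like `lambda: driver.find_element_by_xpath("//*[@id='nav']/tbody/tr/td/a")` as `locate` and `(StaleElementReferenceException,)` as `stale_errors`.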

To solve this, you need to find the pagination buttons again every time the DOM changes, i.e. every time you click one of them. A simple solution would be to use your counter variable to index into a freshly located list. Here is the complete code -

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome(executable_path=r'//path to driver')
driver.get("google url")
driver.find_element_by_id("lst-ib").send_keys("search")
driver.find_element_by_id("lst-ib").send_keys(Keys.ENTER)
driver.maximize_window()
pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter=1
for page in pages:
    # re-locate the buttons on every iteration; the old references are stale
    pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
    counter+=1
    pages[counter].click()

An alternate (and better) solution would be to identify the pagination buttons by their text -

pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter=2  # starting from 2
for page in pages:
    driver.find_element_by_xpath("//a[text() = '" + str(counter) + "']").click()
    counter+=1

You could also try to press the 'Next' button:

pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
for page in pages:
    driver.find_element_by_xpath("//span[text()='Next']").click()
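A sturdier variant of the Next-button idea (a sketch under assumptions, not part of the original answer) is to stop counting pages altogether and simply click 'Next' until it can no longer be found; with Selenium the exception to catch would be NoSuchElementException. The helper below keeps the logic framework-free:

```python
def click_next_until_gone(find_next, not_found=(Exception,), max_pages=50):
    """Repeatedly click the 'Next' control until it can no longer be found.

    find_next -- zero-argument callable that locates and returns the button;
                 expected to raise one of not_found when the button is gone
    max_pages -- safety cap so a misbehaving page cannot loop forever
    """
    clicks = 0
    while clicks < max_pages:
        try:
            button = find_next()
        except not_found:
            break  # no more pages to paginate through
        button.click()
        clicks += 1
    return clicks
```

With Selenium you would call it as `click_next_until_gone(lambda: driver.find_element_by_xpath("//span[text()='Next']"), not_found=(NoSuchElementException,))`.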

EDIT -

I fixed your final code. I renamed some variables so that you don't get confused, and replaced your implicit waits with explicit waits.

import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time

class GoogleEveryFirstLink(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Chrome(executable_path=r'D:\Test automation\chromedriver.exe')
        self.driver.get("http://www.google.com")

    def test_Hover_Facebook(self):
        driver = self.driver
        self.assertIn("Google",driver.title)
        elem=driver.find_element_by_id("lst-ib")
        elem.clear()
        elem.send_keys("India")
        elem.send_keys(Keys.RETURN)
        page_counter=2
        links_counter=1
        wait = WebDriverWait(driver,20)
        wait.until(EC.element_to_be_clickable((By.XPATH,"(//h3[@class='r']/a)[" + str(links_counter) + "]")))
        pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
        elem1=driver.find_elements_by_xpath("//h3[@class='r']/a")
        print len(elem1)
        print len(pages)
        driver.maximize_window()
        for page in pages:
            for e in elem1:
                my_link = driver.find_element_by_xpath("(//h3[@class='r']/a)[" + str(links_counter) + "]")
                print my_link.text
                my_link.click()
                driver.back()
                links_counter+=1
            my_page = driver.find_element_by_xpath("//a[text() = '" + str(page_counter) + "']")
            my_page.click()
            page_counter+=1

    def tearDown(self):
        self.driver.close()

if __name__=="__main__":
    unittest.main()
