
How do I iterate through each Google search page using Selenium with Python? It's not working

I am trying to iterate through each page, but the code below is not working for me.

pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
print(len(pages))
counter=1
for page in pages:
     counter+=1
     page.click()

Your code will run successfully only the first time through the loop, i.e. it will click on the 2nd page and then it will throw a StaleElementReferenceException on this line -

page.click()

Now, why is that? It's because page is just a member of the pages list of elements that you located before the first click. Once you click a pagination button, the DOM changes, so the reference to the element you located earlier is no longer valid.
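One generic way to cope with this failure mode is to wrap the click in a retry that re-runs the locator whenever the element goes stale. This is a sketch, not Selenium API: the find_element callable and exception_type parameter are illustrative; with Selenium you would pass a locator lambda and StaleElementReferenceException from selenium.common.exceptions.

```python
def click_with_retry(find_element, exception_type, attempts=3):
    """Click an element, re-locating it if the reference goes stale.

    find_element   -- zero-argument callable that locates and returns the element
    exception_type -- exception raised when the reference is stale
    """
    for _ in range(attempts):
        try:
            element = find_element()  # re-run the locator on every attempt
            element.click()
            return True
        except exception_type:
            continue                  # DOM changed underneath us; locate again
    return False
```

With Selenium this would be called as click_with_retry(lambda: driver.find_element_by_xpath("//a[text()='2']"), StaleElementReferenceException).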

To solve this, you need to locate the pagination buttons again every time the DOM changes, i.e. every time you click one of them. A simple solution would be to use your counter variable to index into the freshly located list. Here is the complete code -

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome(executable_path=r'//path to driver')
driver.get("google url")
driver.find_element_by_id("lst-ib").send_keys("search")
driver.find_element_by_id("lst-ib").send_keys(Keys.ENTER)
driver.maximize_window()
pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter = 1
for page in pages:
    # re-locate the pagination links on every pass - the references in the
    # old list go stale as soon as a click changes the DOM
    pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
    counter += 1
    pages[counter].click()

An alternate (and better) solution would be to identify the pagination buttons by their text -

pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter = 2  # page 1 is already loaded, so start from page 2
for page in pages:
    driver.find_element_by_xpath("//a[text() = '" + str(counter) + "']").click()
    counter += 1
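The XPath built on each pass is just the page number interpolated into a text() predicate. A quick sketch of the strings this loop generates (page_link_xpath is a hypothetical helper, shown only to make the pattern explicit):

```python
def page_link_xpath(page_number):
    """XPath for a pagination link whose visible text is the page number."""
    return "//a[text() = '" + str(page_number) + "']"

# the loop above walks pages 2, 3, 4, ... producing:
#   //a[text() = '2']
#   //a[text() = '3']
#   //a[text() = '4']
```

Because the selector depends only on the link text, it does not matter that the page's DOM is re-rendered between clicks, which is why this variant sidesteps the stale-reference problem.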

You could also try clicking the 'Next' button:

pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
for page in pages:
    driver.find_element_by_xpath("//span[text()='Next']").click()

EDIT -

I fixed your final code. I renamed some variables so that you don't get confused, and replaced your implicit waits with explicit waits.
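Conceptually, an explicit wait like WebDriverWait(driver, 20).until(...) just polls a condition until it holds or a timeout expires, whereas an implicit wait applies one blanket timeout to every find_element call. A simplified sketch of the polling loop (not Selenium's actual implementation):

```python
import time

def wait_until(condition, timeout=20, poll_frequency=0.5):
    """Poll `condition` until it returns a truthy value, or raise on timeout.

    Mirrors the shape of WebDriverWait.until: the condition is a callable
    that is re-evaluated every poll interval.
    """
    deadline = time.time() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.time() >= deadline:
            raise TimeoutError("condition not met within %s seconds" % timeout)
        time.sleep(poll_frequency)
```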

import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time

class GoogleEveryFirstLink(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Chrome(executable_path=r'D:\Test automation\chromedriver.exe')
        self.driver.get("http://www.google.com")

    def test_Hover_Facebook(self):
        driver = self.driver
        self.assertIn("Google",driver.title)
        elem=driver.find_element_by_id("lst-ib")
        elem.clear()
        elem.send_keys("India")
        elem.send_keys(Keys.RETURN)
        page_counter=2
        links_counter=1
        wait = WebDriverWait(driver,20)
        wait.until(EC.element_to_be_clickable((By.XPATH,"(//h3[@class='r']/a)[" + str(links_counter) + "]")))
        pages=driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
        elem1=driver.find_elements_by_xpath("//h3[@class='r']/a")
        print(len(elem1))
        print(len(pages))
        driver.maximize_window()
        for page in pages:
            # re-locate the result links on every page and restart the
            # per-page link counter - result indices begin at 1 on each page
            links = driver.find_elements_by_xpath("//h3[@class='r']/a")
            links_counter = 1
            for link in links:
                wait.until(EC.element_to_be_clickable((By.XPATH,"(//h3[@class='r']/a)[" + str(links_counter) + "]")))
                my_link = driver.find_element_by_xpath("(//h3[@class='r']/a)[" + str(links_counter) + "]")
                print(my_link.text)
                my_link.click()
                driver.back()
                links_counter += 1
            my_page = driver.find_element_by_xpath("//a[text() = '" + str(page_counter) + "']")
            my_page.click()
            page_counter += 1

    def tearDown(self):
        self.driver.quit()

if __name__=="__main__":
    unittest.main()
