How do I iterate through each Google search results page using Selenium and Python? It's not working.
I am trying to iterate through each page, but the code below does not work for me.
pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
print(len(pages))
counter = 1
for page in pages:
    counter += 1
    page.click()
Your code will succeed only on the first iteration, i.e. clicking the second page, and will then throw a StaleElementReferenceException on this line:

page.click()

Why? Because page is a WebElement belonging to the pages list you located before the click. Once you click a pagination button, the DOM changes, so the reference to the element you located earlier is no longer valid.
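This failure mode can be reproduced without a browser. Below is a minimal pure-Python simulation of it; FakeDom and FakeElement are hypothetical stand-ins (not Selenium APIs) for a DOM that invalidates old element handles every time it re-renders, which is exactly what happens after a pagination click:

```python
class FakeElement:
    def __init__(self, dom, label):
        self.dom = dom
        self.label = label
        self.version = dom.version        # DOM version this handle belongs to

    def click(self):
        if self.version != self.dom.version:
            raise RuntimeError("stale element reference")
        self.dom.rerender()               # clicking changes the DOM

class FakeDom:
    def __init__(self, labels):
        self.labels = labels
        self.version = 0

    def rerender(self):
        self.version += 1                 # all previously returned handles go stale

    def find_elements(self):
        return [FakeElement(self, label) for label in self.labels]

# Reusing the old list, as in the question: stale after the first click.
dom = FakeDom(["2", "3", "4"])
pages = dom.find_elements()
clicked, error = [], None
for i in range(len(pages)):
    try:
        pages[i].click()
    except RuntimeError as exc:
        error = str(exc)
        break
    clicked.append(pages[i].label)

# Re-locating after every click avoids the problem.
dom = FakeDom(["2", "3", "4"])
ok = []
for i in range(3):
    fresh = dom.find_elements()           # re-locate on the current DOM
    fresh[i].click()
    ok.append(fresh[i].label)
```

Here the first loop clicks only one button before failing, while the second visits all three, mirroring the real Selenium behavior described above.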
To work around this, you need to locate the pagination buttons again every time the DOM changes, i.e. after every click on a pagination button. A simple solution is to use your counter variable to index into the freshly located list. Here is the complete code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome(executable_path=r'//path to driver')
driver.get("google url")
driver.find_element_by_id("lst-ib").send_keys("search")
driver.find_element_by_id("lst-ib").send_keys(Keys.ENTER)
driver.maximize_window()

pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter = 1
for page in pages:
    # Re-locate the pagination links on the freshly rendered DOM
    pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
    counter += 1
    pages[counter].click()
Another (and better) solution is to identify the pagination buttons by their text:
pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter = 2  # starting from 2
for page in pages:
    driver.find_element_by_xpath("//a[text() = '" + str(counter) + "']").click()
    counter += 1
You can also try clicking the "Next" button instead:
pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
counter = 2  # starting from 2
for page in pages:
    driver.find_element_by_xpath("//span[text()='Next']").click()
    counter += 1
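Clicking "Next" repeatedly is usually paired with a stop condition: re-locate the button on every iteration and stop when it is no longer present. A browser-free sketch of that control flow, assuming a hypothetical FakePage stand-in for the driver:

```python
class FakePage:
    """Hypothetical stand-in for a paginated results page."""
    def __init__(self, total_pages):
        self.current = 1
        self.total_pages = total_pages

    def find_next(self):
        # Returns the "button" if a further page exists, else None,
        # like find_elements(...) coming back empty on the last page.
        return "Next" if self.current < self.total_pages else None

    def click_next(self):
        self.current += 1

page = FakePage(total_pages=4)
visited = [page.current]
while page.find_next():      # re-locate the button every time, never reuse a handle
    page.click_next()
    visited.append(page.current)
```

With a real driver, the same shape works by calling find_elements for the Next button each time round the loop and breaking when the list is empty.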
EDIT:

I fixed your final code. I renamed some variables to avoid confusion and replaced the implicit wait with an explicit wait.
import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys

class GoogleEveryFirstLink(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Chrome(executable_path=r'D:\Test automation\chromedriver.exe')
        self.driver.get("http://www.google.com")

    def test_Hover_Facebook(self):
        driver = self.driver
        self.assertIn("Google", driver.title)
        elem = driver.find_element_by_id("lst-ib")
        elem.clear()
        elem.send_keys("India")
        elem.send_keys(Keys.RETURN)
        page_counter = 2
        links_counter = 1
        wait = WebDriverWait(driver, 20)
        wait.until(EC.element_to_be_clickable((By.XPATH, "(//h3[@class='r']/a)[" + str(links_counter) + "]")))
        pages = driver.find_elements_by_xpath("//*[@id='nav']/tbody/tr/td/a")
        elem1 = driver.find_elements_by_xpath("//h3[@class='r']/a")
        print(len(elem1))
        print(len(pages))
        driver.maximize_window()
        for page in pages:
            for e in elem1:
                # Re-locate the result link each time: the DOM changed after back()
                my_link = driver.find_element_by_xpath("(//h3[@class='r']/a)[" + str(links_counter) + "]")
                print(my_link.text)
                my_link.click()
                driver.back()
                links_counter += 1
            my_page = driver.find_element_by_xpath("//a[text() = '" + str(page_counter) + "']")
            my_page.click()
            page_counter += 1

    def tearDown(self):
        self.driver.close()

if __name__ == "__main__":
    unittest.main()
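For context on the implicit-vs-explicit-wait change: an explicit wait such as WebDriverWait(driver, 20).until(...) is essentially a poll loop that keeps re-evaluating a condition until it returns a truthy value or a timeout expires. A rough, simplified sketch of that mechanism (wait_until is a hypothetical helper, not Selenium's actual implementation):

```python
import time

def wait_until(condition, timeout=2.0, poll=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    end = time.monotonic() + timeout
    while time.monotonic() < end:
        result = condition()
        if result:
            return result          # like until(): returns the condition's value
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage: a condition that only becomes truthy after ~0.2 seconds.
state = {"ready_at": time.monotonic() + 0.2}
value = wait_until(lambda: time.monotonic() >= state["ready_at"] and "clickable")
```

This is why an explicit wait is preferable here: it targets one specific condition (e.g. a link becoming clickable) instead of applying a blanket delay to every element lookup, as an implicit wait does.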