
Iterating Google search results using Python Selenium

I want to iterate over the Google search results, clicking each one and copying the menu from each site. So far I can copy a menu and return to the results page, but I can't iterate through clicking the results. For now I would just like to learn how to iterate over the search results, but I'm stuck on a StaleElementReferenceException. I did look at a few other sources, but no luck.

from selenium import webdriver

chrome_path = r"C:\Users\Downloads\chromedriver_win32\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get('https://www.google.com?q=python#q=python')

weblinks = driver.find_elements_by_xpath("//div[@class='g']//a[not(@class)]")
for link in weblinks[0:9]:
    print(link.get_attribute("href"))
    link.click()
    driver.back()
    # StaleElementReferenceException is raised on the next iteration

A StaleElementReferenceException means that the elements you are referring to no longer exist in the DOM. That usually happens when the page is redrawn. In your case you navigate to another page and then back, so the elements are guaranteed to be redrawn.

The standard solution is to re-run the element search inside the loop on every iteration.
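A minimal sketch of that pattern (the XPath and the nine-result limit come from the question; the helper name is mine, and the string locator "xpath" is the Selenium 4 form of By.XPATH):

```python
def click_results_fresh(driver, xpath, limit=9):
    """Click each search result, re-finding the element list on
    every iteration so no reference is ever stale."""
    hrefs = []
    count = len(driver.find_elements("xpath", xpath))
    for i in range(min(count, limit)):
        # Re-locate after every back(): the old WebElements are stale.
        link = driver.find_elements("xpath", xpath)[i]
        hrefs.append(link.get_attribute("href"))
        link.click()
        driver.back()
    return hrefs
```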

If you want to be sure the list is the same on every iteration, you need to add an extra check (compare the link texts, hrefs, etc.).

If you are using this code for scraping, you probably don't need back navigation at all. Just open every page directly with driver.get(href).
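As a sketch of that approach: read all the hrefs up front, while the elements are still fresh, then visit each page directly (helper names are mine; the XPath is the one from the question):

```python
def collect_result_hrefs(driver, xpath, limit=9):
    """Read all result hrefs before navigating anywhere, so the
    element references are only used while they are still fresh."""
    links = driver.find_elements("xpath", xpath)
    return [a.get_attribute("href") for a in links[:limit]]

def scrape_results(driver, xpath, scrape_page, limit=9):
    """Visit each result with driver.get() -- no back(), so nothing
    ever goes stale."""
    for href in collect_result_hrefs(driver, xpath, limit):
        driver.get(href)
        scrape_page(driver)  # copy the site's menu here
```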

You can find a code example here: How to open a link in new tab (chrome) using Selenium WebDriver?
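For completeness, a hedged sketch of the new-tab approach the linked answer describes: open each result via window.open, then switch the driver to the newest window handle (the function name is mine; assumes a current Selenium driver):

```python
def open_in_new_tab(driver, href):
    """Open href in a fresh tab and switch the driver to it; returns
    the original window handle so the caller can switch back later."""
    original = driver.current_window_handle
    driver.execute_script("window.open(arguments[0], '_blank');", href)
    driver.switch_to.window(driver.window_handles[-1])
    return original
```

Switching back is then `driver.switch_to.window(original)`, so the results page itself is never reloaded and its elements never go stale.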
