
Python + selenium throws an error while clicking on the last next button

I've written some Python code with Selenium to parse names from a site. The site has a "next" button to get to the next page. I've tried to make my script run flawlessly, but I'm facing two issues at the moment:

  1. Upon execution the scraper clicks through to the next page and parses from there, leaving the starting page unscraped, because I couldn't fix the logic.
  2. When it can't find the grayed-out next button on the last page, it throws an error that breaks the code.

Here is what I've tried so far:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get("https://www.yellowpages.com/search?search_terms=pizza&geo_location_terms=San%20Francisco%2C%20CA&page=10")

while True:
    wait.until(EC.visibility_of_element_located((By.XPATH, '//li/a[contains(@class,"next")]')))

    item = driver.find_element_by_xpath('//li/a[contains(@class,"next")]')
    if not driver.find_element_by_xpath('//li/a[contains(@class,"next")]'):
        break
    item.click()

    wait.until(EC.visibility_of_element_located((By.XPATH, '//div[@class="info"]')))

    for items in driver.find_elements_by_xpath('//div[@class="info"]'):
        name = items.find_element_by_xpath('.//span[@itemprop="name"]').text
        print(name)

driver.quit()

Here is the element for the grayed-out next button:

<div class="pagination">
  <p><span>Showing</span>361-388 of 388<span>results</span></p>
  <ul>
    <li><a href="/search?search_terms=pizza&amp;geo_location_terms=San%20Francisco%2C%20CA&amp;page=12" data-page="12" data-analytics="{&quot;click_id&quot;:132}" data-remote="true" class="prev ajax-page" data-impressed="1">Previous</a></li>
    <li><a href="/search?search_terms=pizza&amp;geo_location_terms=San%20Francisco%2C%20CA&amp;page=9" data-page="9" data-analytics="{&quot;click_id&quot;:132,&quot;module&quot;:1,&quot;listing_page&quot;:9}" data-remote="true" data-impressed="1">9</a></li>
    <li><a href="/search?search_terms=pizza&amp;geo_location_terms=San%20Francisco%2C%20CA&amp;page=10" data-page="10" data-analytics="{&quot;click_id&quot;:132,&quot;module&quot;:1,&quot;listing_page&quot;:10}" data-remote="true" data-impressed="1">10</a></li>
    <li><a href="/search?search_terms=pizza&amp;geo_location_terms=San%20Francisco%2C%20CA&amp;page=11" data-page="11" data-analytics="{&quot;click_id&quot;:132,&quot;module&quot;:1,&quot;listing_page&quot;:11}" data-remote="true" data-impressed="1">11</a></li>
    <li><a href="/search?search_terms=pizza&amp;geo_location_terms=San%20Francisco%2C%20CA&amp;page=12" data-page="12" data-analytics="{&quot;click_id&quot;:132,&quot;module&quot;:1,&quot;listing_page&quot;:12}" data-remote="true" data-impressed="1">12</a></li>
    <li><span class="disabled">13</span></li>
  </ul>
</div>
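Note what this markup shows: on the last results page the pagination contains no anchor with the `next` class at all (the final page number is a disabled `<span>`), which is exactly why locating the button raises `NoSuchElementException`. A minimal check with Python's stdlib `html.parser` (a trimmed, hypothetical stand-in for the snippet above, not code from the original post) illustrates this:

```python
from html.parser import HTMLParser

class PaginationClasses(HTMLParser):
    """Collects the class attribute of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.anchor_classes = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            # <a> tags without a class attribute are recorded as ""
            self.anchor_classes.append(dict(attrs).get("class", ""))

# Trimmed version of the pagination HTML from the last results page
last_page_html = """<div class="pagination"><ul>
<li><a href="/search?page=12" class="prev ajax-page">Previous</a></li>
<li><a href="/search?page=12">12</a></li>
<li><span class="disabled">13</span></li>
</ul></div>"""

parser = PaginationClasses()
parser.feed(last_page_html)

# No anchor carries the "next" class on the final page, so an XPath like
# '//li/a[contains(@class,"next")]' matches nothing there.
has_next = any("next" in c for c in parser.anchor_classes)
print(has_next)  # False
```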

You should swap the order: scrape the current page first, then click the 'Next' button. You can also use try/except to avoid breaking the code:

from selenium.common.exceptions import NoSuchElementException

while True:
    # Scrape the required elements first...
    items = wait.until(EC.visibility_of_all_elements_located((By.XPATH, '//div[@class="info"]')))
    for item in items:
        name = item.find_element_by_xpath('.//span[@itemprop="name"]').text
        print(name)
    # ...and then try to click the 'Next' button
    try:
        driver.find_element_by_xpath('//li/a[contains(@class,"next")]').click()
    except NoSuchElementException:
        # No 'Next' link on the last page -- stop cleanly instead of crashing
        break
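The scrape-first, click-second control flow above can be exercised without a browser. The sketch below uses a hypothetical `FakeDriver` stub (the class, its methods, and the pizzeria names are illustrative stand-ins, not Selenium API) to show why this loop visits every page, including the first and the last:

```python
class NoSuchElementException(Exception):
    """Stand-in for selenium.common.exceptions.NoSuchElementException."""

class FakeDriver:
    """Hypothetical stub serving a fixed list of pages: each 'page' is a
    list of names, and the 'next button' exists only while more pages remain."""
    def __init__(self, pages):
        self.pages = pages
        self.index = 0

    def scrape_names(self):
        # Plays the role of find_elements_by_xpath('//div[@class="info"]')
        return self.pages[self.index]

    def click_next(self):
        # Plays the role of find_element_by_xpath('...next...').click()
        if self.index + 1 >= len(self.pages):
            raise NoSuchElementException("no next button on the last page")
        self.index += 1

driver = FakeDriver([["Tony's Pizza", "Golden Boy"], ["Capo's"], ["Il Casaro"]])
collected = []
while True:
    collected.extend(driver.scrape_names())  # scrape the current page first...
    try:
        driver.click_next()                  # ...then try to move on
    except NoSuchElementException:
        break                                # last page reached: stop cleanly

print(collected)  # names from all three pages, first page included
```

Because scraping happens before the click, page 1 is no longer skipped, and the `except` turns the missing button on the last page into a clean exit instead of a crash.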
