
How to scrape some links from a website using selenium

I've been trying to parse the links ending with 20012019.csv from a webpage using the script below, but I keep getting a timeout exception. It seemed to me that I was doing things the right way.

However, any insight as to where I'm going wrong will be highly appreciated.

My attempt so far:

from selenium import webdriver

url = 'https://promo.betfair.com/betfairsp/prices'

def get_info(driver,link):
    driver.get(link)
    for item in driver.find_elements_by_css_selector("a[href$='20012019.csv']"):
        print(item.get_attribute("href"))

if __name__ == '__main__':
    driver = webdriver.Chrome()
    try:
        get_info(driver,url)
    finally:
        driver.quit()

Your code is fine (I tried it and it works); the reason you get a timeout is that the default page-load timeout is 60s, according to this answer, and the page is huge.

Add this to your code before making the get request (to allow up to 180s before timing out):

driver.set_page_load_timeout(180)

You were close. You have to induce WebDriverWait for the visibility of all elements located, and you need to change the line:

for item in driver.find_elements_by_css_selector("a[href$='20012019.csv']"):

to:

for item in WebDriverWait(driver, 30).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "a[href$='20012019.csv']"))):

Note: You have to add the following imports:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
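
Both answers rely on the CSS attribute selector a[href$='20012019.csv'], which matches anchor tags whose href ends with that suffix. As a sanity check outside the browser, the same filtering can be sketched with the standard library's html.parser (a minimal illustration of what the selector expresses, not a Selenium replacement; the sample filenames below are made up for the demo):

```python
from html.parser import HTMLParser

class CsvLinkCollector(HTMLParser):
    """Collect hrefs ending with a given suffix, mimicking a[href$='...']."""

    def __init__(self, suffix):
        super().__init__()
        self.suffix = suffix
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.endswith(self.suffix):
                self.links.append(href)

html = """
<a href="/prices/ukwin20012019.csv">UK win</a>
<a href="/prices/ukwin19012019.csv">UK win (19th)</a>
<a href="/prices/irewin20012019.csv">IRE win</a>
"""

collector = CsvLinkCollector("20012019.csv")
collector.feed(html)
print(collector.links)  # only the two links ending with 20012019.csv
```

This is the same "ends-with" test the browser applies when Selenium evaluates the CSS selector, which is why WebDriverWait plus visibility_of_all_elements_located returns exactly the CSV links you want once the page has rendered.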
