
Python - Selenium gets stuck

I'm trying to write a Python script that downloads all the files I request from a website. I'm new to Selenium, so I'm not sure whether something is wrong here:

current_ind = 0
for link in thredds_links:
    current_ind += 1
    print("LINK: ", current_ind, len(thredds_links))
    driver.get(link)

    # collect the hrefs of every ".nc" entry on the catalog page
    data = driver.find_elements_by_partial_link_text(".nc")
    data_link = [l.get_attribute('href') for l in data]

    current_ind_2 = 0
    for d in data_link:
        current_ind_2 += 1
        print("LINK_2: ", current_ind_2, len(data_link))

        # d is already an href string, so it can be opened directly
        driver.get(d)

        # follow the HTTPServer link to reach the actual download URL
        download_link = driver.find_element_by_link_text("HTTPServer").get_attribute('href')
        driver.get(download_link)

        # log in so the download can start
        driver.find_element_by_class_name("custom-combobox-input").send_keys("USER_NAME")
        driver.find_element_by_id("SubmitButton").click()
        driver.find_element_by_id("password").send_keys("SOME_PASSWORD")
        driver.find_element_by_class_name("button").click()

In the first for-loop I have 10 links, and each link contains another 10 to 14 files to be downloaded in the second for-loop. But for some reason Firefox gets stuck at the second link of the second for-loop, and after a while it crashes with a timeout, even though all the links in that list are correct.
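A common cause of this kind of hang is a page that never finishes loading, which leaves `driver.get` blocked until Selenium's page-load timeout fires. One way to make the script more resilient is to cap the load time with `driver.set_page_load_timeout(...)` and retry failed loads instead of letting one bad page kill the run. A minimal sketch of such a retry helper (the `load_page` callable, retry counts, and delays here are assumptions, not part of the original script):

```python
import time


def get_with_retry(load_page, url, retries=3, delay=5):
    """Call load_page(url), retrying up to `retries` times on failure.

    load_page is any callable that may raise on a timeout, e.g. a thin
    wrapper around driver.get after driver.set_page_load_timeout(30).
    """
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return load_page(url)
        except Exception as err:  # e.g. selenium TimeoutException
            print(f"attempt {attempt} failed for {url}: {err}")
            last_err = err
            time.sleep(delay)
    raise last_err
```

With Selenium this would be called as `get_with_retry(driver.get, d)`, so a page that hangs costs a bounded amount of time per attempt instead of stalling the whole loop.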

I have a very similar script that downloads multiple PDFs from a link and moves on to the next one, and I had the same problem. I fixed it on my end by looping through range(len(iterable)), so in your case I think it would be:

for d in range(len(data_link)):

Give it a try. Kuda
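One caveat with that change: inside `for d in range(len(data_link)):`, the loop variable `d` is an integer index rather than an href string, so the loop body must look the URL up as `data_link[d]` (e.g. `driver.get(data_link[d])`). A quick sketch with placeholder URLs standing in for the scraped `.nc` hrefs:

```python
# placeholder hrefs standing in for the .nc links scraped by Selenium
data_link = ["https://example.com/file1.nc", "https://example.com/file2.nc"]

for d in range(len(data_link)):
    # d is an index here, so the href must be fetched from the list
    href = data_link[d]
    print("LINK_2: ", d + 1, len(data_link), href)
```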
