
How to open and scrape multiple links with Selenium

I am new to scraping with Python and have encountered a weird issue.

I am attempting to scrape OCR'd newspaper articles from a list of URLs using Selenium -- the proxy settings on the data source make this easier than other options.

However, I receive tracebacks for the text data every time I run my code. Here is the code that I am using:

# Collect the href of every article link on the results page
article_links = []
for link in driver.find_elements_by_xpath('/html/body/div[1]/main/section[1]/ul[2]/li[*]/div[2]/div[1]/h3/a'):
    links = link.get_attribute("href")
    article_links.append(links)

articles = []
for article in article_links:
    # Work in the most recently opened window
    driver.switch_to.window(driver.window_handles[-1])
    driver.get(article)
    # Expand the hidden panel that holds the OCR text
    driver.find_element_by_css_selector("#js-doc-explorer-show-additional-views").click()
    time.sleep(1)
    for article_text in driver.find_elements_by_css_selector("#ocr-container > div.fulltext-ocr.js-page-ocr"):
        articles.append(article_text)

I come closest to solving the issue by using .click(), which opens a hidden panel for my data. However, with this code, the only data that fills in comes from the last row of the dataset. Without the .click(), all rows come back empty. Changing the sleep settings does not help either.

The Xpath for the text data is:

/html/body/div[2]/main/section/div[2]/div[2]/section[2]/div/div[4]/text()

Alternatively, is there a way to get each link's source code and parse it with beautifulsoup after the fact?
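Something like this minimal sketch is what I have in mind (assuming BeautifulSoup 4 is installed, and reusing the same OCR container class as above; whether the OCR text appears in the page source without the click is something I have not verified):

from bs4 import BeautifulSoup

pages = []
for article in article_links:
    driver.get(article)
    # Parse the rendered page source instead of querying live elements
    soup = BeautifulSoup(driver.page_source, "html.parser")
    for div in soup.select("div.fulltext-ocr.js-page-ocr"):
        pages.append(div.get_text())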

UPDATE: There has to be something wrong with the loops -- I can get either the first or last values, but nothing in between.

In more recent versions of Selenium, the method find_elements_by_xpath() is deprecated. Is that the issue you are facing? If it is, import By via from selenium.webdriver.common.by import By and change the call to find_elements(By.XPATH, ...). Similarly, find_elements_by_css_selector() is replaced by find_elements(By.CSS_SELECTOR, ...).

You don't specify whether this is actually the issue, but if it is, I hope this helps :-)
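For example, the first loop from your question would look like this under the newer API (a sketch, reusing your original XPath):

from selenium.webdriver.common.by import By

article_links = []
for link in driver.find_elements(By.XPATH, '/html/body/div[1]/main/section[1]/ul[2]/li[*]/div[2]/div[1]/h3/a'):
    article_links.append(link.get_attribute("href"))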

The solution is to locate the element by its (unique) class and to specify that it must contain text:

from selenium.webdriver.common.by import By

news = []
for article in article_links:
    driver2.get(article)
    # Expand the hidden panel before reading the OCR text
    driver2.find_element(By.CSS_SELECTOR, "#js-doc-explorer-show-additional-views").click()
    # Match the OCR div by class and require that it contains text
    article_text = driver2.find_element(By.XPATH, '//div[@class="fulltext-ocr js-page-ocr"][contains(text()," ")]')
    news.append([article_text.text])
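Note that this appends article_text.text, a plain string, rather than the WebElement itself. A WebElement reference goes stale once the driver navigates to the next URL, which is likely why the original loop ended up with usable data only for the last page visited.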
