
Javascript generated content detection using BeautifulSoup and Selenium

I am trying to get all the computer science books from Pearson's website (starting from this url: https://www.pearson.com/us/higher-education/professional---career/computer-science/computer-science.html ), but the list of books in each category is generated via javascript.

I tried opening the page with Selenium and then parsing it with BeautifulSoup. After opening a category page, I cannot find the tag that contains all the information about the books.

from selenium.webdriver.support import expected_conditions as ec
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from bs4 import BeautifulSoup

driver = webdriver.Safari()
driver.get('https://www.pearson.com/us/higher-education/professional---career/computer-science/computer-science.html')
wait = WebDriverWait(driver, 2)
content = driver.page_source
soup = BeautifulSoup(content, 'html.parser')

#first I loop through categories
categories = list(driver.find_elements_by_xpath('//ul[@class="category-child-list-level-2"]//a'))
for i in range(len(categories)):
    print('CATEGORY : {}/170'.format(i+1))
    categories[i].click()
    while next_page_link != None:
        WebDriverWait(driver, 10).until(ec.visibility_of_element_located((By.CLASS_NAME, "content-tile-book-box")))
        soup = BeautifulSoup(driver.page_source, 'html.parser')
        print(soup.findAll('li', attrs={'class':'content-tile-book-box visible'})) #it results always empty
        for a in soup.findAll('li', attrs={'class':'content-tile-book-box visible'}):
            #I would like to have access to the books' links
            book_title_link = a.find_element_by_xpath('/div[@class="wrap-list-block"]//a')
        #loop through all the book pages of the current category
        next_page_link = driver.find_element_by_xpath('//a[@aria-label="Next"]')
        next_page_link.click()

I hope you can help me, thanks!
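
One likely reason the `findAll` above comes back empty: when BeautifulSoup is given a multi-word string such as 'content-tile-book-box visible', it matches it against the exact, complete value of the class attribute, so any extra class or a different ordering on the `<li>` makes it miss; also, find_element_by_xpath is a Selenium method and does not exist on a BeautifulSoup tag. Below is a minimal sketch of a more robust parse, assuming the tile class names from the question and waiting for the javascript to render the list before reading page_source:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
from bs4 import BeautifulSoup

driver = webdriver.Safari()
driver.get('https://www.pearson.com/us/higher-education/professional---career/computer-science/computer-science.html')

#wait until the javascript has rendered at least one book tile,
#otherwise page_source is a snapshot taken before the list exists
WebDriverWait(driver, 10).until(ec.visibility_of_element_located((By.CLASS_NAME, 'content-tile-book-box')))

soup = BeautifulSoup(driver.page_source, 'html.parser')
#select on a single class instead of the exact two-class string
for li in soup.select('li.content-tile-book-box'):
    a = li.select_one('div.wrap-list-block a')  #BeautifulSoup navigation, not Selenium's
    if a is not None:
        print(a.get('href'))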

Since you need to navigate back and forth between pages, I have provided a Selenium-only solution here without using BS. I also used chromedriver.

from selenium.webdriver.support import expected_conditions as ec
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome(executable_path='C:\\Selenium\\chromedriver.exe')
url = 'https://www.pearson.com/us/higher-education/professional---career/computer-science/computer-science.html'
driver.get(url)

#first I loop through categories
categories = list(driver.find_elements_by_xpath('//ul[@class="category-child-list-level-2"]//a'))
Total_Category = len(categories)
for i in range(Total_Category):
    #re-locate the category links on every iteration: navigating away and
    #back makes the elements found earlier stale
    WebDriverWait(driver, 10).until(ec.visibility_of_all_elements_located((By.XPATH, '//ul[@class="category-child-list-level-2"]//a')))
    categories = list(driver.find_elements_by_xpath('//ul[@class="category-child-list-level-2"]//a'))
    print('CATEGORY : {}/170'.format(i+1))
    print("Category: " + categories[i].text) #read the text before clicking; the element goes stale after navigation
    categories[i].click()
    try:
        #loop through all the book pages of the current category
        WebDriverWait(driver, 10).until(ec.visibility_of_element_located((By.XPATH, "//a[@aria-label='Next']")))
        next_page_link = driver.find_element_by_xpath('//a[@aria-label="Next"]')
        while next_page_link != None:
            WebDriverWait(driver, 10).until(ec.visibility_of_element_located((By.CLASS_NAME, "content-tile-book-box")))
            #wait for the rendered result list, then collect the books' links
            WebDriverWait(driver, 10).until(ec.visibility_of_any_elements_located((By.XPATH, "//div[@class='product-search-results-list section']//li")))
            links = driver.find_elements_by_xpath('//div[@class="wrap-list-block"]//a')
            print(len(links))
            book_links = [link.get_attribute('href') for link in links]
            print(book_links)
            try:
                next_page_link = driver.find_element_by_xpath('//a[@aria-label="Next"]')
            except NoSuchElementException as exception:
                print("Reached end of all books in this category")
                driver.get(url) #go back to the main listing
                break
            next_page_link.click()
    except TimeoutException as exception:
        #no Next button: the category fits on a single page
        print("Next button is not available")
        WebDriverWait(driver, 10).until(ec.visibility_of_any_elements_located((By.XPATH, "//div[@class='product-search-results-list section']//li")))
        links = driver.find_elements_by_xpath('//div[@class="wrap-list-block"]//a')
        print(len(links))
        book_links = [link.get_attribute('href') for link in links]
        print(book_links)
        driver.get(url) #go back to the main listing
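
The wait-and-collect block appears twice above (once in the pagination loop and once in the single-page branch). If you want to avoid the duplication, one option is to pull it into a small helper; this is just a sketch reusing the same locators and the same Selenium API as the code above:

def collect_book_links(driver):
    #wait for the rendered result list, then return the books' detail-page URLs
    WebDriverWait(driver, 10).until(ec.visibility_of_any_elements_located((By.XPATH, "//div[@class='product-search-results-list section']//li")))
    links = driver.find_elements_by_xpath('//div[@class="wrap-list-block"]//a')
    return [link.get_attribute('href') for link in links]

Note also why the category links are re-located at the top of every iteration of the outer loop: after driver.get(url) reloads the listing page, the elements found before the navigation are stale, and re-finding them is what avoids a StaleElementReferenceException.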
