
BeautifulSoup find_all function doesn't work inside of main

I am trying to scrape the conforama website, and to do so I am using BeautifulSoup. I am trying to retrieve the price, description, rating, URL and number of reviews for each item, and to do this iteratively over 3 pages.

First, I import the required libraries:

import csv
from bs4 import BeautifulSoup
import pandas as pd
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

I define the first function, get_url, which builds a properly formatted URL from a given search_term and returns a URL template waiting to be formatted with the right page number:

def get_url(search_term):
    template = 'https://www.conforama.fr/recherche-conforama/{}'
    
    search_term = search_term.replace(' ','+')
    
    url = template.format(search_term)
    
    url+= '?P1-PRODUCTS%5Bpage%5D={}'
    
    return url
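For illustration, here is what that builder produces, with the same logic rebuilt step by step (the search term 'lit coffre' is just a made-up example):

```python
# Rebuilding the URL logic of get_url, step by step:
template = 'https://www.conforama.fr/recherche-conforama/{}'
search_term = 'lit coffre'.replace(' ', '+')   # spaces become '+'
url = template.format(search_term) + '?P1-PRODUCTS%5Bpage%5D={}'

# The page placeholder is filled in later, once per page:
print(url.format(2))
# https://www.conforama.fr/recherche-conforama/lit+coffre?P1-PRODUCTS%5Bpage%5D=2
```

Keeping the `{}` placeholder for the page number unformatted is what allows main to call `url.format(page)` inside its loop.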

I define a second function to strip out characters that make the data unreadable:

def format_number(number):
    new_number = ''
    for n in number:
        if n not in '0123456789€,.' : return new_number
        new_number+=n
    return new_number  # reached when every character was allowed
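A quick sanity check of the behavior (the input strings below are made-up examples): the function keeps characters until it hits the first one outside the allowed set. It is restated here with an explicit final return, which covers the case where every character is allowed; without it the function would return None for such inputs.

```python
def format_number(number):
    # same logic as above: stop at the first disallowed character
    new_number = ''
    for n in number:
        if n not in '0123456789€,.':
            return new_number
        new_number += n
    return new_number  # reached only if every character was allowed

print(format_number('249,99€ Ajouter au panier'))  # '249,99€'
print(format_number('4.5/5'))                      # '4.5'
print(format_number('123'))                        # '123'
```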

I define a third function, which takes one item's record and extracts all the information I need from it: its price, description, URL, rating and number of reviews.

def extract_record(item):
    print(item)
    descriptions = item.find_all("a", {"class" : "bindEvent"})

    description = descriptions[1].text.strip() + ' ' + descriptions[2].text.strip()

    #get url of product
    url = descriptions[2]['href']
    print(url)

    #number of reviews
    nor = descriptions[3].text.strip()
    nor = format_number(nor)

    #rating
    try:
        ratings = item.find_all("span", {"class" : "stars"})
        rating = ratings[0]['data']
    except (AttributeError, IndexError):
        # IndexError covers items with no rating element at all
        return

    #price
    try:
        prices = item.find_all("div", {"class" : "price-product"})
        price = prices[0].text.strip()
    except (AttributeError, IndexError):
        return
    price = format_number(price)
    
    return (description, price, rating, nor, url)

Finally, I gather all the functions in a main function, which lets me iterate over all the pages I need to extract from:

def main(search_term):
    #product_name = search_term
    
    driver = webdriver.Chrome(ChromeDriverManager().install())
    records = []
    url = get_url(search_term)
    somme = 0
    for page in range(1, 4):
        driver.get(url.format(page))
        soup = BeautifulSoup(driver.page_source, 'html.parser')
        print('longueur soup', len(soup))
        print(soup)
        results = soup.find_all('li', {'class' : 'ais-Hits-item box-product fragItem'})
        print(len(results))
        somme += len(results)
        for result in results:
            record = extract_record(result)
            if record:
                print(record)
                records.append(record)
    driver.close()
    print('somme',somme)

Now, the problem is that when I run all the commands one by one:

driver = webdriver.Chrome(ChromeDriverManager().install())
url = get_url('couch').format(1)
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
results = soup.find_all('li', {'class' : 'ais-Hits-item box-product fragItem'})
item = results[0]
extracted = extract_record(item)

everything works fine, and the extract_record function returns exactly what I need. However, when I run the main function, this line:

results = soup.find_all('li', {'class' : 'ais-Hits-item box-product fragItem'})

does not return any results, even though I know it does return results when I execute it outside of the main function.

Has anyone had the same problem? Do you know what I am doing wrong and how to fix it? Thank you very much for reading and trying to answer.

What is happening?

The main problem is that the elements take some time to be generated/displayed, and they are not yet available at the moment you grab driver.page_source.

How to fix it?

Use Selenium's explicit waits to wait until a specific element is present:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'li.ais-Hits-item.box-product.fragItem div.price-product')))
soup = BeautifulSoup(driver.page_source, 'html.parser')
results = soup.find_all('li', {'class' : 'ais-Hits-item box-product fragItem'})

Example

...
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

...

def main(search_term):
    #product_name = search_term
    
    driver = webdriver.Chrome(ChromeDriverManager().install())
    records = []
    url = get_url(search_term)
    somme = 0
    for page in range (1,4):
        driver.get(url.format(page))
        print(url.format(page))
        wait = WebDriverWait(driver, 10)
        wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'li.ais-Hits-item.box-product.fragItem div.price-product')))
        soup = BeautifulSoup(driver.page_source, 'html.parser')
        results = soup.find_all('li', {'class' : 'ais-Hits-item box-product fragItem'})
        somme+=len(results)
        for result in results:
            record = extract_record(result)
            if record:
                print(record)
                records.append(record)
    driver.close()
    print('somme',somme)

main('matelas')

