
web scraping gives only first 4 elements on a page

I am trying to scrape the search result elements on this page: https://shop.bodybuilding.com/search?q=protein+bar&selected_tab=Products with Selenium, but it only gives me the first 4 elements as results. I am not sure why. Is this a JavaScript page? And how can I scrape all the elements on this search page? This is the code I created:

import requests
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='C:/chromedriver')
url = 'https://shop.bodybuilding.com/search?q=protein+bar&selected_tab=Products'
driver.get(url)

# Parse whatever HTML the browser has right after the page loads
soup = BeautifulSoup(driver.page_source, 'html.parser')
all_items = soup.find_all('div', {'class': 'ProductTile ProductTile--flat Animate AnimateOnHover Animate--fade-in Animate--animated'})

for i in range(len(all_items)):
    prices = all_items[i].find('div', {'class': 'Price ProductTile__price'}).text
    names = all_items[i].find('p', {'class': 'ProductTile__title'}).text
    images = all_items[i].find('img')['src']
    url = all_items[i].find('a', {'class': 'Anchor ProductTile__image'})['href']

    print(images)

    
    

These are the results for the names on this page, and as you can see, it scraped only the first 4 elements!

BSN Protein Crisp Bars
Optimum Nutrition Protein Wafers
Herbaland Vegan Protein Gummies
Battle Bars Full Battle Rattle (FBR) Protein Bar

The same thing happens for the prices, images, and URLs.

How can I fix it?
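
To see whether the page really is injecting product tiles with JavaScript, one quick diagnostic is to count the tiles before and after a single scroll. This is only a sketch, reusing the same URL and tile class as the code above; the tile_class and count_tiles names are illustrative, not from the original post.

import time

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome(executable_path='C:/chromedriver')  # same setup as in the question
driver.get('https://shop.bodybuilding.com/search?q=protein+bar&selected_tab=Products')

tile_class = 'ProductTile ProductTile--flat Animate AnimateOnHover Animate--fade-in Animate--animated'

def count_tiles():
    # Parse the browser's current HTML and count the product tiles in it
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    return len(soup.find_all('div', {'class': tile_class}))

before = count_tiles()
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(2)  # give the page a moment to inject the next batch of tiles
after = count_tiles()

print(before, after)  # e.g. 4 before vs. more after => the tiles are lazy-loaded

If the second count is larger, the missing products simply were not in the DOM yet when page_source was first read.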

You have to scroll so that all the items get loaded:

import time  # time.sleep is used below

# Keep scrolling to the bottom until the page height stops growing,
# i.e. no more items are being loaded.
last_height = driver.execute_script("return document.body.scrollHeight")

while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

    time.sleep(1)  # give the page time to load the next batch of items

    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

soup = BeautifulSoup(driver.page_source, 'html.parser')
all_items = soup.find_all('div', {'class': 'ProductTile ProductTile--flat Animate AnimateOnHover Animate--fade-in Animate--animated'})

for i in all_items:
    price_div = i.find('div', {'class': 'Price ProductTile__price'})
    prices = price_div.text if price_div else None
    names = i.find('p', {'class': 'ProductTile__title'}).text
    images = i.find('img')['src']
    url = i.find('a', {'class': 'Anchor ProductTile__image'})['href']

    print(images)
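
If you would rather end up with a table than a series of prints, a minimal follow-up sketch (assuming the all_items list built above; the column names are illustrative) collects every tile's fields into a pandas DataFrame, which the question already imports:

import pandas as pd

# Illustrative only: gather each tile's fields into a list of dicts, then build a DataFrame
rows = []
for item in all_items:
    price_tag = item.find('div', {'class': 'Price ProductTile__price'})
    name_tag = item.find('p', {'class': 'ProductTile__title'})
    img_tag = item.find('img')
    link_tag = item.find('a', {'class': 'Anchor ProductTile__image'})
    rows.append({
        'name': name_tag.text if name_tag else None,
        'price': price_tag.text if price_tag else None,
        'image': img_tag['src'] if img_tag else None,
        'url': link_tag['href'] if link_tag else None,
    })

df = pd.DataFrame(rows)
print(df.head())

Each lookup is guarded with a None check so a tile that is missing one of the fields does not stop the loop.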

