
"我如何从网站的 Javascript 内容中抓取数据?"

[英]How can I, scrape data from a Javascript Content of a website?

I am actually trying to get the content from the Product Description section of the Nykaa website.

URL: https://www.nykaa.com/nykaa-skinshield-matte-foundation/p/460512?productId=460512&pps=1&skuId=460502

This is the URL. In the Product Description section, after clicking the "Read More" button, there is some text at the end.

The text I want to extract is:

Explore the entire range of Foundation available on Nykaa. Shop more Nykaa Cosmetics products here. You can browse through the complete world of Nykaa Cosmetics Foundation. Alternatively, you can also find more products from the Nykaa SkinShield Anti-Pollution Matte Foundation range.

Expiry Date: 15 February 2024

Country of Origin: India

Name of Manufacturer / Importer / Brand: FSN E-commerce Ventures Pvt Ltd

Address of Manufacturer / Importer / Brand: 104 Vasan Udyog Bhavan Sun Mill Compound Senapati Bapat Marg, Lower Parel, Mumbai City Maharashtra - 400013

After inspecting the page, I found that when I disable JavaScript, everything in the Product Description disappears. This means the content is loaded dynamically with the help of JavaScript.
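That observation can be reproduced offline with the standard library alone: the HTML a plain HTTP client receives has an empty description container, while the browser-rendered DOM has the paragraphs. The markup strings below are illustrative stand-ins, not Nykaa's real HTML:

```python
from html.parser import HTMLParser

class ContentDetails(HTMLParser):
    """Collect the text of every node inside <div id="content-details">."""
    def __init__(self):
        super().__init__()
        self.depth = 0   # >0 while we are inside the target div
        self.texts = []
    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif tag == "div" and ("id", "content-details") in attrs:
            self.depth = 1
    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth and data.strip():
            self.texts.append(data.strip())

# Server-sent markup: the container is empty until JavaScript runs.
server_html = '<div id="content-details"></div>'
# Browser-rendered markup after the page scripts have executed.
rendered_html = '<div id="content-details"><p>Country Of Origin: India</p></div>'

for html in (server_html, rendered_html):
    parser = ContentDetails()
    parser.feed(html)
    print(parser.texts)
```

If the first print is empty and only the second contains the text, the data is injected client-side, which is why a browser-automation tool is needed here.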

For this I used Selenium. This is what I tried:

This is the output being displayed.

Can anyone please help me fix this, or point out any specific code I am missing to get the text content from the Product Description? That would be a big help.

Thank you 🙏🏻.

You can do something like this:

from selenium import webdriver

browser = webdriver.Chrome(
    r'C:\Users\paart\.wdm\drivers\chromedriver\win32\97.0.4692.71\chromedriver.exe')


browser.maximize_window()  # For maximizing window
browser.implicitly_wait(20)  # gives an implicit wait for 20 seconds

browser.get(
    "https://www.nykaa.com/nykaa-skinshield-matte-foundation/p/460512?productId=460512&pps=1&skuId=460502")


# Creates "load more" button object.
browser.implicitly_wait(20)
loadMore = browser.find_element_by_xpath("/html/body/div[1]/div/div[3]/div[1]/div[2]/div/div/div[2]")

loadMore.click()
browser.implicitly_wait(20)

desc_data = browser.find_elements_by_class_name('content-details')

for desc in desc_data:
    para_details = browser.find_element_by_xpath('//*[@id="content-details"]/p[1]').text
    expiry = browser.find_element_by_xpath('//*[@id="content-details"]/p[2]').text
    country = browser.find_element_by_xpath('//*[@id="content-details"]/p[3]').text
    importer = browser.find_element_by_xpath('//*[@id="content-details"]/p[4]').text
    address = browser.find_element_by_xpath('//*[@id="content-details"]/p[5]').text
    print(para_details, country, importer, address)

For desc_data you are looking for a class name with that string, but there is no such class name on the page; it is actually an id attribute that carries that string.

In the for loop you inserted a bunch of XPaths into find_elements_by_xpath(), which takes only a single XPath expression pointing to one element.

Try:

from selenium import webdriver

browser = webdriver.Chrome(
    r'C:\Users\paart\.wdm\drivers\chromedriver\win32\97.0.4692.71\chromedriver.exe')

browser.maximize_window()  # For maximizing window
browser.implicitly_wait(20)  # gives an implicit wait for 20 seconds

browser.get(
    "https://www.nykaa.com/nykaa-skinshield-matte-foundation/p/460512?productId=460512&pps=1&skuId=460502")

# Creates "load more" button object.
browser.implicitly_wait(20)
loadMore = browser.find_element_by_xpath('//div[@class="css-mqbsar"]')
loadMore.click()

browser.implicitly_wait(20)
desc_data = browser.find_elements_by_xpath('//div[@id="content-details"]/p')

# desc_data = browser.find_elements_by_class_name('content-details')
# In your previous code, 'content-details' located by class name is a single
# element, so it is not iterable the way you expected.
# I used XPath to locate every <p> element under the id="content-details" attribute.

for desc in desc_data:
    para_detail = desc.text
    print(para_detail)

# if you want individual fields, try:
#   para_detail = desc_data[0].text
#   expiry_date = desc_data[1].text


Also, don't just copy XPaths from the Chrome dev tools; they are unreliable for dynamic content.
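The point can be demonstrated without a browser. The sketch below uses the standard library's ElementTree (whose limited XPath subset is enough here) on made-up markup: an index-based path like the copied DevTools one breaks as soon as the page gains an extra div, while an attribute-based path keeps matching.

```python
import xml.etree.ElementTree as ET

# Two snapshots of the same (illustrative) page: in the second one the site
# has inserted a banner <div>, which shifts every positional index.
before = ET.fromstring(
    "<html><body>"
    "<div><div class='css-mqbsar'>Read More</div></div>"
    "</body></html>")
after = ET.fromstring(
    "<html><body>"
    "<div class='banner'>Sale!</div>"
    "<div><div class='css-mqbsar'>Read More</div></div>"
    "</body></html>")

absolute = "./body/div[1]/div"            # positional, like a copied DevTools XPath
relative = ".//div[@class='css-mqbsar']"  # attribute-based, position-independent

for page in (before, after):
    hit = page.find(absolute)
    print("absolute:", hit.text if hit is not None else "no match")
    print("relative:", page.find(relative).text)
```

The absolute path matches only the first snapshot; the attribute-based path matches both.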

You are getting this error because the element has not finished loading when the click function executes. I use these two functions to locate elements:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# `driver` is the WebDriver instance created elsewhere.
def find_until_located(eltype, name):
    element = WebDriverWait(driver, 60).until(
        EC.presence_of_element_located((eltype, name)))
    return element

def find_until_clickable(eltype, name):
    element = WebDriverWait(driver, 60).until(
        EC.element_to_be_clickable((eltype, name)))
    return element

Final answer: this solved the problem.

from selenium import webdriver
from selenium.webdriver.common.by import By

browser = webdriver.Chrome(
    r'C:\Users\paart\.wdm\drivers\chromedriver\win32\97.0.4692.71\chromedriver.exe')


browser.maximize_window()  # For maximizing window
browser.implicitly_wait(20)  # gives an implicit wait for 20 seconds

# browser.get(
#     "https://www.nykaa.com/nykaa-skinshield-matte-foundation/p/460512?productId=460512&pps=1&skuId=460502")

browser.get(
    "https://www.nykaa.com/kay-beauty-hydrating-foundation/p/1229442?productId=1229442&pps=3&skuId=772975")

# Zooming out and back in appears to force lazily loaded sections to render.
browser.execute_script("document.body.style.zoom='50%'")
browser.execute_script("document.body.style.zoom='100%'")


# Creates "load more" button object.
browser.implicitly_wait(20)
loadMore = browser.find_element(By.XPATH,
                                "/html/body/div[1]/div/div[3]/div[1]/div[2]/div/div/div[2]")

loadMore.click()
browser.implicitly_wait(20)

desc_data = browser.find_elements(By.ID, 'content-details')

for desc in desc_data:
    para_details = browser.find_element(By.XPATH,
                                        '//*[@id="content-details"]/p[1]').text
    expiry = browser.find_element(By.XPATH,
                                  '//*[@id="content-details"]/p[2]').text
    country = browser.find_element(By.XPATH,
                                   '//*[@id="content-details"]/p[3]').text
    importer = browser.find_element(By.XPATH,
                                    '//*[@id="content-details"]/p[4]').text
    address = browser.find_element(By.XPATH,
                                   '//*[@id="content-details"]/p[5]').text
    # print(para_details, country, importer, address)
    print(f"{para_details} \n")
    print(f"{expiry} \n")
    print(f"{country} \n")
    print(f"{importer} \n")
    print(f"{address} \n")
