
How to retrieve information from a website using Beautiful Soup?

I have a task where I need to use a web scraper to retrieve information from a website (URL: https://www.onepa.gov.sg/cat/adventure).

The site lists a number of products, and each product has a link that leads to that individual product's page. I want to collect all of those links.

[Screenshot of the webpage]

[Screenshot of the HTML code]

For example, one of the products is named KNOTTY STUFF, and I want to get its href, /class/details/c026829364.

import requests
from bs4 import BeautifulSoup


def get_soup(url):
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, features="html.parser")
    return soup

url = "https://www.onepa.gov.sg/cat/adventure"
soup = get_soup(url)
for i in soup.findAll("a", {"target": "_blank"}):
    print(i.get("href"))

The output is https://tech.gov.sg/report_vulnerability and https://www.pa.gov.sg/feedback, which does not include what I was looking for: /class/details/c026829364.

Any help or guidance is appreciated, thank you!

This is because the page uses dynamic JavaScript to build the links inside the span elements, so you won't be able to get them with plain requests.
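
As a quick sanity check (my own sketch, not part of the original answer), you can confirm this by fetching the page with requests and searching the raw HTML for the product link from the question; it will not be found, because the markup is injected by JavaScript after the initial load:

import requests

# The product link mentioned in the question; if the page were static,
# this string would appear in the HTML returned by requests.
html = requests.get("https://www.onepa.gov.sg/cat/adventure").text
print("/class/details/c026829364" in html)  # expected: False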

Instead, you should use selenium with a webdriver to load all the links before scraping them.

You can download the ChromeDriver executable here. If you place it in the same folder as your script, you can run:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import WebDriverException
import os

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--window-size=1920x1080")
chrome_options.add_argument("--headless")
chrome_driver = os.getcwd() + "\\chromedriver.exe"  # CHANGE THIS PATH IF NOT SAME FOLDER
driver = webdriver.Chrome(options=chrome_options, executable_path=chrome_driver)

url = "https://www.onepa.gov.sg/cat/adventure"
driver.get(url)

try:
    # Wait for the links to be ready
    WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, ".gridTitle > span > a"))
    )
except WebDriverException:
    print("Page offline")  # Added this because page is really unstable :(

elements = driver.find_elements_by_css_selector(".gridTitle > span > a")
links = [elem.get_attribute('href') for elem in elements]
print(links)
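
A small follow-up, not in the original answer: once the links have been collected, it is worth shutting the headless browser down so the ChromeDriver process does not keep running in the background:

driver.quit()  # closes the browser and ends the ChromeDriver session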

The website is loaded dynamically, so requests alone won't support it. However, the links can be obtained by sending a POST request to:

https://www.onepa.gov.sg/sitecore/shell/WebService/Card.asmx/GetCategoryCard

Try searching for the links with the built-in re (regex) module:

import re
import requests


URL = "https://www.onepa.gov.sg/sitecore/shell/WebService/Card.asmx/GetCategoryCard"

headers = {
    "authority": "www.onepa.gov.sg",
    "accept": "application/json, text/javascript, */*; q=0.01",
    "x-requested-with": "XMLHttpRequest",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
    "content-type": "application/json; charset=UTF-8",
    "origin": "https://www.onepa.gov.sg",
    "sec-fetch-site": "same-origin",
    "sec-fetch-mode": "cors",
    "sec-fetch-dest": "empty",
    "referer": "https://www.onepa.gov.sg/cat/adventure",
    "cookie": "visid_incap_2318972=EttdbbMDQMeRolY+XzbkN8tR5l8AAAAAQUIPAAAAAAAjkedvsgJ6Zxxk2+19JR8Z; SC_ANALYTICS_GLOBAL_COOKIE=d6377e975a10472b868e47de9a8a0baf; _sp_ses.075f=*; ASP.NET_SessionId=vn435hvgty45y0fcfrold2hx; sc_pview_shuser=; __AntiXsrfToken=30b776672938487e90fc0d2600e3c6f8; BIGipServerpool_PAG21PAPRPX00_443=3138016266.47873.0000; incap_ses_7221_2318972=5BC1VKygmjGGtCXbUiU2ZNRS5l8AAAAARKX8luC4fGkLlxnme8Ydow==; font_multiplier=0; AMCVS_DF38E5285913269B0A495E5A%40AdobeOrg=1; _sp_ses.603a=*; SC_ANALYTICS_SESSION_COOKIE=A675B7DEE34A47F9803ED6D4EC4A8355|0|vn435hvgty45y0fcfrold2hx; _sp_id.603a=d539f6d1-732d-4fca-8568-e8494f8e584c.1608930022.1.1608930659.1608930022.bfeb4483-a418-42bb-ac29-42b6db232aec; _sp_id.075f=5e6c62fd-b91d-408e-a9e3-1ca31ee06501.1608929756.1.1608930947.1608929756.73caa28b-624c-4c21-9ad0-92fd2af81562; AMCV_DF38E5285913269B0A495E5A%40AdobeOrg=1075005958%7CMCIDTS%7C18622%7CMCMID%7C88630464609134511097093602739558212170%7CMCOPTOUT-1608938146s%7CNONE%7CvVersion%7C4.4.1",
}

data = '{"cat":"adventure", "subcat":"", "sort":"", "filter":"[filter]", "cp":"[cp]"}'

response = requests.post(URL, data=data,  headers=headers)
print(re.findall(r"<Link>(.*)<", response.content.decode("unicode_escape")))

Output:

['/class/details/c026829364', '/interest/details/i000027991', '/interest/details/i000009714']
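
If you need absolute URLs rather than the relative paths returned above (a small extra step of my own, not part of the original answer), you can join them with the site's base URL using urllib.parse.urljoin:

from urllib.parse import urljoin

BASE_URL = "https://www.onepa.gov.sg"
relative_links = ['/class/details/c026829364', '/interest/details/i000027991', '/interest/details/i000009714']

# urljoin resolves each relative path against the base URL.
full_links = [urljoin(BASE_URL, link) for link in relative_links]
print(full_links)
# ['https://www.onepa.gov.sg/class/details/c026829364', ...]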

