
How to iterate through hidden divs and scrape text?

I am trying to scrape a website with expandable divs, where the text I want is hidden until the div is expanded. I can only scrape the text from the first expandable div, although I am able to click all of the divs. How do I scrape the text from all of the divs?

HTML when closed:

<li class="views-row views-row-1 pub1 default-on clk" tabindex="150">  
          <div class="teaser Speeches col-xs-12 col-sm-12 col-md-12 col-lg-12 crop2" data-nid="50849" data-tid="6971" aria-hidden="false">
  <div class="thumb" style="padding-top: 0px; padding-bottom: 0px;">
  <img class="img-responsive" src="/sites/pm/files/styles/news_listing_square/public/default_news/20180501_default_news2.jpg?itok=a1pfZTOA" alt="" title=""></div>
  <div class="news-teaser">
    <div class="title">TITLE</div>
    <div class="body">TEASER TEXT</div>
    <div class="category">Speeches<br>PLACE <span class="date-display-single" property="dc:date" datatype="xsd:dateTime" content="2019-06-10T18:15:00-04:00">June 10, 2019</span></div>
  </div>
</div>
<div class="sticky0"></div>
<div class="full-article" aria-hidden="true"></div>  
</li>
<li class="views-row views-row-2 pub1 default-on clk" tabindex="150"> </li>
<li class="views-row views-row-3 pub1 default-on clk" tabindex="150"> </li>

HTML when an item is clicked and the full speech is visible:

<li class="views-row views-row-1 pub1 default-on clk active" tabindex="150">     
          <div class="news-article-body-fields">    
          <h1 class="field-content">TITLE</h1>    
              
          <div class="image col-xs-12 col-sm-12 col-md-12 col-lg-12 news-image-caption">
<span class="caption"></span>
</div>    
          <span class="field-content Speeches-news-article-date"><div class="inline-date">
  PLACE <span class="date-display-single" property="dc:date" datatype="xsd:dateTime" content="2019-06-10T18:15:00-04:00">June 10, 2019</span>
</div></span>    
  <div class="views-field views-field-body">        <p><span lang="EN-CA" xml:lang="EN-CA">CHECK AGAINST DELIVERY</span></p><p><span lang="EN-CA" xml:lang="EN-CA">Good morning, everyone. </span></p><p><span lang="EN-CA" xml:lang="EN-CA">Before we get into things, I want to take a second to thank ____ – for his introduction, yes, but more importantly, for his leadership. </p> SPEECHES CONTINUE IN <P> TAGS. 

Here is my Python script:

# Libraries
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import requests
import time

# Opening up connection and grabbing HTML file via Chrome
url = 'https://pm.gc.ca/eng/news/speeches'
browser = webdriver.Chrome()
browser.get(url)

# Delaying the scraper to prevent it from closing too soon
browser.implicitly_wait(2)

# Creating loop to open up all divs with same class name
article_list = browser.find_elements_by_css_selector(".views-row.pub1.default-on.clk")

# All titles for expanded divs printed. Works!
for article in article_list:
    print(article.text)


# Only works for first article in list
for article in article_list:
    article.click()
    
    time.sleep(3)
    
    # Getting title
    title = browser.find_element_by_xpath("//h1[@class = 'field-content']")
    print(title.text)   

    # Getting date
    date = browser.find_element_by_class_name("date-display-single")
    print(date.text)

    # Getting place
    place = browser.find_element_by_xpath("//div[@class = 'inline-date']")
    print(place.text)

    # Getting speech
    speech_div = browser.find_elements_by_xpath("//span[@lang = 'EN-CA']")
    
    for p in speech_div:
        print(p.text)

Right now I can scrape the entire speech for the first article. The driver then clicks the second expandable div, but it prints out a whole bunch of blanks, and the next few speeches come out the same way as the second one (all blanks).

Any help would be appreciated!

You need to scope the search to the current div rather than the whole document. Call find* on the current element (article instead of browser), and use a relative XPath (starting with .//) so the lookup stays inside that element:

title = article.find_element_by_xpath(".//h1[@class = 'field-content']")
speech_div = article.find_elements_by_xpath(".//span[@lang = 'EN-CA']")
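
Applied to the loop from the question, a minimal sketch might look like the following. It keeps the question's old find_element_by_* Selenium API and the time.sleep delay, and is untested, so treat it as a starting point rather than a finished solution:

# Sketch: every lookup is scoped to the article that was just clicked,
# using relative XPaths (".//") so the search stays inside that <li>
for article in article_list:
    article.click()
    time.sleep(3)

    title = article.find_element_by_xpath(".//h1[@class = 'field-content']")
    print(title.text)

    date = article.find_element_by_class_name("date-display-single")
    print(date.text)

    place = article.find_element_by_xpath(".//div[@class = 'inline-date']")
    print(place.text)

    for p in article.find_elements_by_xpath(".//span[@lang = 'EN-CA']"):
        print(p.text)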

The speech details are loaded with AJAX requests. That means you don't even need Selenium for this; requests alone is enough, which speeds things up considerably:

import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent':  'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0'
}


def make_soup(url: str) -> BeautifulSoup:
    res = requests.get(url, headers=headers)
    res.raise_for_status()
    return BeautifulSoup(res.text, 'html.parser')


def fetch_speech_details(speech_id: str) -> str:
    url = f'https://pm.gc.ca/eng/views/ajax?view_name=news_article&view_display_id=block&view_args={speech_id}'
    res = requests.get(url, headers=headers)
    res.raise_for_status()
    data = res.json()
    html = data[1]['data']
    soup = BeautifulSoup(html, 'html.parser')
    body = soup.select_one('.views-field-body')
    return str(body)


def scrape_speeches(soup: BeautifulSoup) -> list:
    speeches = []
    for teaser in soup.select('.teaser'):
        title = teaser.select_one('.title').text.strip()
        speech_id = teaser['data-nid']
        speech_html = fetch_speech_details(speech_id)
        s = {
            'title': title,
            'details': speech_html
        }
        speeches.append(s)
    return speeches


if __name__ == "__main__":
    url = 'https://pm.gc.ca/eng/news/speeches'
    soup = make_soup(url)
    speeches = scrape_speeches(soup)
    from pprint import pprint
    pprint(speeches)

Output:

[
    {'title': 'PM remarks for Lunar Gateway', 'details': '<div class="views-field views-field-body"> <p>CHECK AGAINST DELIVERY</p><p>Hello everyone!</p><p>I’m delighted to be here at the Canadian Space Agency to share some great news with Canadians.</p><p>I’d like to start by thanking the President of the Agency, Sylvain Laporte ... },
    {...},
    ....
]
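
If you want plain text rather than the HTML fragment stored under 'details', you can run it through BeautifulSoup once more. A small optional sketch, reusing the speeches list built above:

# Optional: strip the HTML markup from each scraped speech
for speech in speeches:
    body_text = BeautifulSoup(speech['details'], 'html.parser').get_text(separator='\n', strip=True)
    print(speech['title'])
    print(body_text)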
