
How to extract title inside span h5 a href link using BeautifulSoup

I am trying to extract the title of a link using BeautifulSoup. The code I am using is as follows:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

hdr={'User-Agent':'Chrome/84.0.4147.135'}

frame=[]

for page_number in range(19):
    http= "https://www.epa.wa.gov.au/media-statements?page={}".format(page_number+1)

    print('Downloading page %s...' % http)

    url= requests.get(http,headers=hdr)
    soup = BeautifulSoup(url.content, 'html.parser')

    for row in soup.select('.view-content .views-row'):

        content = row.select_one('.views-field-body').get_text(strip=True)
        title = row.text.strip(':')
        link = 'https://www.epa.wa.gov.au' + row.a['href']
        date = row.select_one('.date-display-single').get_text(strip=True)

        frame.append({
            'title': title,
            'link': link,
            'date': date,
            'content': content
        })

dfs = pd.DataFrame(frame)
dfs.to_csv('epa_scrapper.csv',index=False,encoding='utf-8-sig')

However, after running the above code, nothing is displayed. How can I extract the value stored in the title attribute of the anchor tag that the link points to?

Also, I just want to know how to append "title", "link", "dt", "content" to a csv file.

Thank you very much.

To get the link text, you can use the selector "h5 a". For example:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

hdr={'User-Agent':'Chrome/84.0.4147.135'}

frame=[]
for page_number in range(1, 20):
    http= "https://www.epa.wa.gov.au/media-statements?page={}".format(page_number)

    print('Downloading page %s...' % http)

    url= requests.get(http,headers=hdr)
    soup = BeautifulSoup(url.content, 'html.parser')

    for row in soup.select('.view-content .views-row'):

        content = row.select_one('.views-field-body').get_text(strip=True, separator='\n')
        title = row.select_one('h5 a').get_text(strip=True)
        link = 'https://www.epa.wa.gov.au' + row.a['href']
        date = row.select_one('.date-display-single').get_text(strip=True)

        frame.append({
            'title': title,
            'link': link,
            'date': date,
            'content': content
        })

dfs = pd.DataFrame(frame)
dfs.to_csv('epa_scrapper.csv',index=False,encoding='utf-8-sig')

This creates epa_scrapper.csv (screenshot from LibreOffice):

[screenshot of the resulting CSV omitted]
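The question also asks specifically about the title *attribute* of the anchor, which is separate from its visible text. A minimal sketch of the difference, using an inline HTML snippet as a simplified stand-in for the real page markup (the attribute values here are made up for illustration):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for one result row on the page.
html = '''
<div class="views-row">
  <span><h5><a href="/news/example" title="Full statement title">Short link text</a></h5></span>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
a = soup.select_one('h5 a')

link_text = a.get_text(strip=True)   # visible text of the anchor
title_attr = a.get('title')          # title attribute; None if absent
print(link_text, '|', title_attr)
```

Using `a.get('title')` rather than `a['title']` avoids a `KeyError` on rows where the anchor has no title attribute.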

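On the follow-up about appending to a csv file: the code above rewrites the whole file each run. A hedged sketch of appending new rows instead, using pandas' `mode='a'` and writing the header only when the file does not already exist (the file name follows the post; the sample row is made up):

```python
import os
import pandas as pd

rows = [{'title': 't1', 'link': 'l1', 'dt': 'd1', 'content': 'c1'}]
df = pd.DataFrame(rows)

path = 'epa_scrapper.csv'
# mode='a' appends; write the header only on the first write.
df.to_csv(path, mode='a', header=not os.path.exists(path),
          index=False, encoding='utf-8-sig')
```

Each run adds the new rows to the end of the existing file rather than replacing it.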