
How to loop over links and scrape the content of news articles with BeautifulSoup

I'm new to Python. I would like to get the content and the titles of all the news articles from this page: https://www.nytimes.com/search?query=china+COVID-19

However, my current code stores all the paragraphs from the 10 articles in one flat list. How could I instead store each article's paragraphs in a dict of its own, so I know which article each paragraph belongs to, and collect all those dicts in a single list?

Any help would be greatly appreciated!

import requests
from bs4 import BeautifulSoup
import json

# Fetch the search-results page
response = requests.get('https://www.nytimes.com/search?query=china+COVID-19')
response.encoding = 'utf-8'
soupe = BeautifulSoup(response.text, 'html.parser')

# Each search result sits in a div with this class
links = soupe.find_all('div', class_='css-1i8vfl5')

# Build the absolute URL of every article
pagelinks = []
for link in links:
    url = link.contents[0].find_all('a')[0]
    pagelinks.append('https://www.nytimes.com' + url.get('href'))

articles = []

# Visit each article and collect all of its paragraphs into one flat list
for i in pagelinks:
    response = requests.get(i)
    response.encoding = 'utf-8'
    soupe = BeautifulSoup(response.text, 'html.parser')
    for p in soupe.select('section.meteredContent.css-1r7ky0e div.css-53u6y8'):
        articles.append(p.text.strip())

print('\n'.join(articles))
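
A minimal change to the second loop gives the structure the question asks for: one dict per article, holding that article's URL and its paragraphs, with all the dicts collected in a single list. This is only a sketch that reuses the question's own CSS selectors, which nytimes.com may have changed since:

articles = []

for i in pagelinks:
    response = requests.get(i)
    response.encoding = 'utf-8'
    soupe = BeautifulSoup(response.text, 'html.parser')
    # Keep this article's paragraphs together instead of extending a flat list
    paragraphs = [p.text.strip()
                  for p in soupe.select('section.meteredContent.css-1r7ky0e div.css-53u6y8')]
    articles.append({'url': i, 'paragraphs': paragraphs})

print(articles[0]['url'])              # first article's URL
print(len(articles[0]['paragraphs']))  # how many paragraphs it has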
You can also fetch the search page just once with urllib3 and build one dict per result card, using the title and description shown on the search page itself:

import urllib3
from bs4 import BeautifulSoup as bs

def scrape(url):
    http = urllib3.PoolManager()
    response = http.request("GET", url)
    soup_page = bs(response.data, 'lxml')  # needs the lxml parser: pip install lxml
    articles = []

    # One container div per search result
    containers = soup_page.find_all("div", attrs={'class': "css-1i8vfl5"})

    for container in containers:
        title = container.find('h4', {'class': 'css-2fgx4k'}).text.strip()
        # The description may be missing, so guard before extracting its text
        description_tag = container.find('p', {'class': 'css-16nhkrn'})
        description = description_tag.text.strip() if description_tag else None

        article = {
            'title': title,
            'description': description
        }

        articles.append(article)
    return articles

print(scrape("https://www.nytimes.com/search?query=china+COVID-19")[0])  # display the first article dict
