
Scraping with BS4


The code produces an empty file. I may have the wrong div/tag entry (?). I'm trying to scrape multiple pages on one site.

import requests
from bs4 import BeautifulSoup
import pandas as pd

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36 Edg/91.0.864.71'}

questionlist = []

def getQuestions(tag, page):
    url = f'https://www.tradepractitioner.com/tag/{tag}'
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'html.parser')
    questions = soup.find_all('div', {'class': 'main grid '})
    for item in questions:
        question = {
        'title': item.find('a', {'class': 'post-title'}).text,
        'status': item.find('a', {'class': 'post-content'}).text,
         }
        questionlist.append(question)
    return

for x in range(1,5):
    getQuestions('cfius', x)
 

df = pd.DataFrame(questionlist)
df.to_excel('stackquestions.xlsx', index=False)
print('End.')

You have a trailing space:

Instead of:

questions = soup.find_all('div', {'class': 'main grid '})  # <- HERE: trailing ' '

use:

questions = soup.find_all('div', {'class': 'main grid'})
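
As an aside (a sketch with made-up sample markup, not part of the original answer): soup.select with a CSS selector matches on individual class tokens, so class order or a stray space does not matter:

from bs4 import BeautifulSoup

# Sample markup for illustration only
html = '<div class="grid main"><a class="post-title">Example</a></div>'
soup = BeautifulSoup(html, 'html.parser')

# Matches any <div> carrying both classes, regardless of order or extra spaces
questions = soup.select('div.main.grid')
print(len(questions))  # -> 1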

Now you have another problem:

AttributeError: 'NoneType' object has no attribute 'text'
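
The error appears because find() returns None when nothing matches, and None has no .text attribute. A small self-contained illustration of the failure mode and a guard (sample markup made up for the example):

from bs4 import BeautifulSoup

# Sample markup with no <a class="post-title"> inside
item = BeautifulSoup('<div class="main grid"></div>', 'html.parser').div

title_tag = item.find('a', {'class': 'post-title'})   # -> None
# title_tag.text would raise AttributeError here, so guard first:
title = title_tag.text if title_tag is not None else ''
print(repr(title))  # -> ''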

Solution:

questions = soup.find_all('article', {'class': 'post'})
for item in questions:
    question = {'title': item.find('h1', {'class': 'post-title'}).find('a').text,
                'status': item.find('section', {'class': 'post-content'}).find(text=True)}
    questionlist.append(question)
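
For completeness, here is a sketch of how the fix might slot into the original script (untested against the live site; as in the question, the page argument is still not used when building the URL):

import requests
from bs4 import BeautifulSoup
import pandas as pd

# Trimmed User-Agent; reuse the full string from the question if needed.
headers = {'User-Agent': 'Mozilla/5.0'}

questionlist = []

def getQuestions(tag, page):
    # As in the original code, `page` is not yet used when building the URL.
    url = f'https://www.tradepractitioner.com/tag/{tag}'
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'html.parser')
    for item in soup.find_all('article', {'class': 'post'}):
        title_tag = item.find('h1', {'class': 'post-title'})
        status_tag = item.find('section', {'class': 'post-content'})
        if title_tag is None or status_tag is None:
            continue  # skip posts that lack the expected structure
        link = title_tag.find('a')
        if link is None:
            continue
        questionlist.append({
            'title': link.text,
            'status': status_tag.find(text=True),
        })

for x in range(1, 5):
    getQuestions('cfius', x)

df = pd.DataFrame(questionlist)
df.to_excel('stackquestions.xlsx', index=False)
print('End.')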

