
How to write the web scraped data to csv?

I wrote the following code to extract the table data using BeautifulSoup:

import requests
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text

soup = BeautifulSoup(website, 'lxml')

table = soup.find('table')
table_rows = table.find_all('tr')

for tr in table_rows:
    td = tr.find_all('td')
    rows = [i.text for i in td]
    print(rows)

This is my output:

['Number', '@name', 'Name', 'Followers', 'Influence Rank']
[]
['1', '@mashable', 'Pete Cashmore', '2037840', '59']
[]
['2', '@cnnbrk', 'CNN Breaking News', '3224475', '71']
[]
['3', '@big_picture', 'The Big Picture', '23666', '92']
[]
['4', '@theonion', 'The Onion', '2289939', '116']
[]
['5', '@time', 'TIME.com', '2111832', '143']
[]
['6', '@breakingnews', 'Breaking News', '1795976', '147']
[]
['7', '@bbcbreaking', 'BBC Breaking News', '509756', '168']
[]
['8', '@espn', 'ESPN', '572577', '187']
[]

Please help me write this data to a .csv file (I am new to this kind of task).

Use the csv writer and write each row to the csv file:

import requests
import csv
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text

soup = BeautifulSoup(website, 'lxml')

table = soup.find('table')
table_rows = table.find_all('tr')

csvfile = 'twitterusers2.csv'

# Python 2: open(csvfile, 'wb')
# Python 3: newline='' omits the extra blank line between rows on Windows
with open(csvfile, 'w', newline='', encoding='utf-8') as outfile:
    wr = csv.writer(outfile)

    for tr in table_rows:
        td = tr.find_all('td')
        # In Python 2, .encode("utf8") is sometimes mandatory when playing with
        # Twitter data; in Python 3 the text can be written as-is, since the
        # file is already opened with a utf-8 encoding
        rows = [i.text for i in td]
        # ignore the empty elements and any row whose td count is not 5
        if len(rows) == 5:
            print(rows)
            wr.writerow(rows)
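To sanity-check the result, the file can be read back with csv.reader. A minimal sketch, assuming the twitterusers2.csv filename from the snippet above:

import csv

# read the file back; each row comes back as a list of strings
with open('twitterusers2.csv', newline='', encoding='utf-8') as infile:
    for row in csv.reader(infile):
        print(row)   # e.g. ['1', '@mashable', 'Pete Cashmore', '2037840', '59']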

A better solution is to use pandas, since it is faster than the other libraries. Here is the whole code:

import requests
import pandas as pd
from bs4 import BeautifulSoup

website = requests.get('https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/').text

soup = BeautifulSoup(website, 'lxml')

table = soup.find('table')
table_rows = table.find_all('tr')

first = True        # the first row of the table holds the column headers
details_dict = {}   # maps each header to its list of column values
count = 0           # index of the current column within a row

for tr in table_rows:
    td = tr.find_all('td')
    rows = [i.text for i in td]

    for i in rows:
        if first:
            # header row: create an empty list for each column
            details_dict[i] = []
        else:
            # data row: append the cell to its matching column
            key = list(details_dict.keys())[count]
            details_dict[key].append(i)
            count += 1
    count = 0
    first = False

df = pd.DataFrame(details_dict)
df.to_csv('D:\\Output.csv', index=False)
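For reference, the same result can usually be reached with less bookkeeping by collecting the rows into a list and handing them to pandas directly. A minimal sketch, reusing the table_rows variable from the code above and assuming the 5-column layout shown in the question:

# gather the cell texts per row and drop the empty separator rows
rows = [[i.text for i in tr.find_all('td')] for tr in table_rows]
rows = [r for r in rows if len(r) == 5]

# the first remaining row is the header, the rest are data
df = pd.DataFrame(rows[1:], columns=rows[0])
df.to_csv('Output.csv', index=False)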

Output screenshot: (image omitted)

Hope this helps!

The easiest way is to use pandas:

# pip install pandas lxml beautifulsoup4

import pandas as pd

URI = 'https://memeburn.com/2010/09/the-100-most-influential-news-media-twitter-accounts/'

# read the first table on the page, using the first row as the header,
# and drop the empty rows
data = pd.read_html(URI, flavor='lxml', skiprows=0, header=0)[0].dropna()

# save to a csv file named data.csv
data.to_csv('data.csv', index=False, encoding='utf-8')
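As a quick check, the parsed frame can be inspected before saving; head() and shape are standard pandas attributes:

# preview the parsed table: Number, @name, Name, Followers, Influence Rank
print(data.head())
print(data.shape)   # (rows, columns) actually parsed from the table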
