
Python Web-scraping multiple page table to csv and DF for analysis

When I try to scrape the web pages, only the table from the last page is written to the csv file, whereas I want the results from every page in that file. I know I'm probably making a very simple mistake here. Can anyone point me in the right direction? Thanks, I appreciate your input.

import pandas as pd
import requests
from bs4 import BeautifulSoup
from tabulate import tabulate

#transactions over the last 17hrs 
#Looping through page numbers using url manipulation
#for i in range(1,100,1):

dfs = []

url = "https://etherscan.io/txs?p="
for index in range(1, 10, 1):
    res = requests.get(url+str(index))
    soup = BeautifulSoup(res.content,'lxml')
    table = soup.find_all('table')[0] 
    df = pd.read_html(str(table))

    dfs.append(df)
    #df[0].to_csv('Desktop/scrape.csv')

final_df[0] = pd.concat(dfs)
final_df[0].to_csv('Desktop/scrape.csv')
print( tabulate(df[0], headers='keys', tablefmt='psql'))

I get the following TypeError.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-c6a3a8b0cd1d> in <module>()
     20     #df[0].to_csv('Desktop/scrape.csv')
     21 
---> 22 final_df[0] = pd.concat(dfs)
     23 final_df[0].to_csv('Desktop/scrape.csv')
     24 print( tabulate(df[0], headers='keys', tablefmt='psql'))

~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/concat.py in concat(objs, axis, join, join_axes, ignore_index, keys, levels, names, verify_integrity, copy)
    204                        keys=keys, levels=levels, names=names,
    205                        verify_integrity=verify_integrity,
--> 206                        copy=copy)
    207     return op.get_result()
    208 

~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/concat.py in __init__(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy)
    261         for obj in objs:
    262             if not isinstance(obj, NDFrame):
--> 263                 raise TypeError("cannot concatenate a non-NDFrame object")
    264 
    265             # consolidate

TypeError: cannot concatenate a non-NDFrame object

You're just missing one line in your code. pd.read_html returns a list of DataFrames, so concat that list first, before appending to dfs.

dfs = []

url = "https://etherscan.io/txs?p="
for index in range(1, 10):
    res = requests.get(url+str(index))
    soup = BeautifulSoup(res.content, 'lxml')
    table = soup.find_all('table')[0]
    df_list = pd.read_html(str(table))
    df = pd.concat(df_list)  # this line is what you're missing
    dfs.append(df)

final_df = pd.concat(dfs)
final_df.to_csv('Desktop/scrape.csv')
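
As a side note, if you'd rather keep appending the raw lists returned by pd.read_html (as in the original code) and flatten everything at the end instead, a minimal sketch under that assumption could look like:

import pandas as pd

# dfs is a list of lists of DataFrames, one list per page (the output of pd.read_html)
# flatten it and concatenate everything into a single DataFrame with a fresh index
final_df = pd.concat([table for page in dfs for table in page], ignore_index=True)
final_df.to_csv('Desktop/scrape.csv', index=False)

Passing ignore_index=True avoids repeating each page's 0-based row index in the combined frame, and index=False keeps that index out of the csv.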
