
Python : Summary of statistics from multiple statistics files

I have a dataset of about 140,000,000 records stored in a database. I need to compute basic statistics such as mean, max, min, and standard deviation on this data using Python.

But when I read it in chunks with a query like "SELECT * FROM Mytable ORDER BY ID LIMIT %d OFFSET %d" % (chunksize, offset), the execution takes more than an hour and is still running. I was referring to How to create a large pandas dataframe from an sql query without running out of memory?

Since that takes too long, I have now decided to read only a few records at a time and save the statistics obtained from pandas.describe() into a CSV. Repeating this over the entire data, I will end up with multiple CSVs, each containing only the statistics of one chunk.

Is there a way to merge these CSVs to get the overall statistics for the entire 140,000,000 records?
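Such per-chunk summaries can be combined exactly for count, mean, min, and max, and the standard deviation can be recovered with a pooled-variance formula. A minimal sketch, assuming each chunk's describe() row has been loaded into a dict with count, mean, std (sample std, ddof=1, as pandas reports), min, and max (the chunk values here are made-up stand-ins):

```python
import math

# Hypothetical per-chunk summaries, e.g. rows read back from the saved CSVs.
# Chunk 1 covers values [1, 2, 3, 4]; chunk 2 covers [5, 6, 7, 8].
chunks = [
    {"count": 4, "mean": 2.5, "std": 1.2909944487358056, "min": 1, "max": 4},
    {"count": 4, "mean": 6.5, "std": 1.2909944487358056, "min": 5, "max": 8},
]

def merge_stats(chunks):
    """Combine per-chunk describe() summaries into overall statistics."""
    n = sum(c["count"] for c in chunks)
    mean = sum(c["count"] * c["mean"] for c in chunks) / n
    # Total sum of squared deviations about the combined mean:
    # within-chunk part (n_i - 1) * s_i^2 plus the between-chunk shift.
    ss = sum((c["count"] - 1) * c["std"] ** 2
             + c["count"] * (c["mean"] - mean) ** 2
             for c in chunks)
    std = math.sqrt(ss / (n - 1))  # sample std, matching describe()
    return {"count": n, "mean": mean, "std": std,
            "min": min(c["min"] for c in chunks),
            "max": max(c["max"] for c in chunks)}

overall = merge_stats(chunks)
```

This reproduces exactly what describe() would report on the concatenated data; note that quantiles (25%, 50%, 75%) cannot be merged this way and would need an approximate-quantile method.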

In this situation (computing the mean, max, min, and standard deviation of a huge dataset split across different files), you can compute what you need (mean, max, etc.) on the first file, keep only the results, then open the second file and compute the same quantities taking the first file's results into account, and so on.
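The file-by-file idea above can be sketched as a single-pass accumulator: only a handful of running totals are kept between files, never the raw records. This is an illustrative sketch with made-up in-memory chunks standing in for the per-file reads, and it reports the population standard deviation:

```python
import math

# Running aggregates; this small dict is all that survives between files.
state = {"n": 0, "sum": 0.0, "sumsq": 0.0, "min": math.inf, "max": -math.inf}

def update(state, values):
    """Fold one file's (or chunk's) values into the running aggregates."""
    for x in values:
        state["n"] += 1
        state["sum"] += x
        state["sumsq"] += x * x
        state["min"] = min(state["min"], x)
        state["max"] = max(state["max"], x)
    return state

def finalize(state):
    """Turn the accumulated totals into the final statistics."""
    mean = state["sum"] / state["n"]
    # Population variance via E[x^2] - mean^2; max() guards against
    # tiny negative values caused by floating-point rounding.
    var = max(state["sumsq"] / state["n"] - mean * mean, 0.0)
    return {"mean": mean, "std": math.sqrt(var),
            "min": state["min"], "max": state["max"]}

for chunk in ([1, 2, 3, 4], [5, 6, 7, 8]):  # stand-ins for per-file reads
    update(state, chunk)
result = finalize(state)
```

For very large sums, a numerically safer variant (Welford's online algorithm) avoids the sum-of-squares cancellation, but the structure is the same.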

Have you tried using pickle? Save and load in pickle format, and use a pandas DataFrame to calculate the summary statistics.

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_pickle.html

And if that doesn't work, then perhaps revisit the objective: ask why you need to capture such a large dataset, and break it down by category, time period, or something more meaningful.
