
Split large DataFrame into DataFrames containing records of unique values in a column

A csv file has 90 million rows. One of the columns is named "State". It currently has 12 unique values. (The count of unique values in the "State" column is dynamic and can change with each csv file.)

I want to split the DataFrame into smaller chunks and then save state-wise files. The code below is not working.

source_path = "DataJune.txt"
for i,chunk in enumerate(pd.read_csv(source_path, sep = '|',chunksize=1000000)):
    dfs = dict(tuple(chunk.groupby('State')))
    for i, df in dfs.items():
        df = df.append(df)
        df.to_csv("tempcsv/" + i +".csv",sep=",", index = False)

IIUC, try:

source_path = "DataJune.txt"

from collections import defaultdict

def def_value():
    return pd.DataFrame()

# dict mapping each state to its accumulated rows
d = defaultdict(def_value)

for i, chunk in enumerate(pd.read_csv(source_path, sep='|', chunksize=2)):
    chunk_states = chunk['State'].unique()
    for state in chunk_states:
        # DataFrame.append was removed in pandas 2.0; pd.concat is the replacement
        d[state] = pd.concat([d[state], chunk[chunk['State'] == state]])
for state, df in d.items():
    df.to_csv("tempcsv/" + str(state) + ".csv", sep=",", index=False)

Another version, based on the comment from @Corralien:

source_path = "DataJune.txt"

for i, chunk in enumerate(pd.read_csv(source_path, sep='|', chunksize=2)):
    chunk_states = chunk['State'].unique()

    for state in chunk_states:
        # open in append mode so successive chunks add to the same state file
        # (note: this version writes no header row)
        with open("tempcsv/" + str(state) + ".csv", mode='a+') as file:
            for i, row in chunk[chunk['State'] == state].iterrows():
                file.write(','.join([str(x) for x in row]))
                file.write('\n')

Another version:

source_path = "DataJune.txt"
from os.path import exists
import csv

for i, chunk in enumerate(pd.read_csv(source_path, sep='|', chunksize=2)):
    chunk_states = chunk['State'].unique()

    for state in chunk_states:
        path = "tempcsv/" + str(state) + ".csv"
        # write the header row only when the file is first created
        if not exists(path):
            with open(path, newline='', mode='a+') as file:
                writer = csv.writer(file)
                writer.writerow(chunk.columns)
        with open(path, newline='', mode='a+') as file:
            writer = csv.writer(file)
            writer.writerows(chunk[chunk['State'] == state].values)

You can use:

import pandas as pd
import os

source_path = 'DataJune.txt'
fps = {}

for chunk in pd.read_csv(source_path, sep='|', chunksize=1000000, dtype=object):
    for state, df in chunk.groupby('State'):
        # New state, create a new file and write headers
        if state not in fps:
            fps[state] = open(f'tempcsv/{state}.csv', 'w')
            fps[state].write(f"{','.join(df.columns)}{os.linesep}")

        # Write data without headers
        df.to_csv(fps[state], index=False, header=False)

# Close files properly
for fp in fps.values():
    fp.close()
del fps

Update

Try replacing:

# Write data without headers
df.to_csv(fps[state], index=False, header=False)

with:

# Write data without headers
g = (row.strip() for row in df.to_csv(index=False, header=None, sep=',').split(os.linesep) if row)
print(*g, sep=os.linesep, file=fps[state])
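An alternative to post-processing the CSV string is to open each output file with `newline=''`, which pandas recommends when passing an open file object to `to_csv`; it stops the text layer from translating the line terminators a second time. A minimal sketch (the DataFrame and file name are illustrative, not from the question):

```python
import pandas as pd

# hypothetical sample standing in for one state's rows
df = pd.DataFrame({"State": ["NY", "NY"], "Value": [1, 3]})

# newline='' disables newline translation on the file object, so the
# terminators emitted by to_csv pass through unchanged
with open("NY_demo.csv", "w", newline="") as fp:
    fp.write(",".join(df.columns) + "\n")
    df.to_csv(fp, index=False, header=False)
```

This avoids the doubled line endings (blank lines) that can appear on Windows when writing through a file handle opened in default text mode.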
