
How to read multiple CSV files and store them in different dataframes?

Say I have 200 CSV files. I want to read all of them in one go and store each file in its own dataframe, like df1 for the first file and so on up to df200. Doing it manually, writing df1 = pd.read_csv(...) 200 times, takes a lot of time. How do I do this using pandas?

I have tried using a for loop, but I could not work out the approach and got stuck.

Rather than creating 200 separate variables, store each dataframe in a dictionary under a generated name:

import pandas as pd
import glob

# Collect every CSV in the folder (replace "file_path" with the actual directory)
all_files = glob.glob("file_path" + "/*.csv")

dfs_dict = {}

# Key each dataframe as "df1", "df2", ... in the order glob returns the files
for idx, filename in enumerate(all_files, start=1):
    df = pd.read_csv(filename, index_col=None, header=0)
    dfs_dict["df" + str(idx)] = df

Try using this:

import pandas as pd
import glob

path = r'path of the folder where all the csv files exist'
all_files = glob.glob(path + "/*.csv")

li = []  # list that will hold one dataframe per file

for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)

li will now hold all of the CSVs as dataframes; you can preprocess them further or split them back into separate files.

If all the CSV files have the same columns and you want to combine them into a single dataframe, you can apply pandas' concat function to li, which returns one dataframe, as sketched below.
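
A minimal sketch of that concatenation, assuming the li list built in the loop above:

import pandas as pd

combined = pd.concat(li, ignore_index=True)  # stack the frames; renumber rows 0..N-1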

Another option, which also picks up CSV files in subdirectories because os.walk recurses through the whole tree:

import pandas as pd
import os

dfs = []  # empty list of dataframes

dirname = "where your files are"  # replace with the directory to search

for root, folders, files in os.walk(dirname):
    for file in files:
        if file.endswith(".csv"):  # skip any non-CSV files encountered
            fp = os.path.join(root, file)
            df = pd.read_csv(fp)
            dfs.append(df)

df = pd.concat(dfs)
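
If you also need to record which file each row came from, one variation (a sketch using the same walk; the source_file column name is only an illustration) is to tag each dataframe before appending:

import pandas as pd
import os

dfs = []
dirname = "where your files are"  # placeholder, as above

for root, folders, files in os.walk(dirname):
    for file in files:
        if file.endswith(".csv"):
            df = pd.read_csv(os.path.join(root, file))
            df["source_file"] = file  # illustrative: remember the origin of each row
            dfs.append(df)

df = pd.concat(dfs, ignore_index=True)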
