Merge csv files based on file names and suffix in Python

First time poster and fairly new to Python here. I have a collection of 1,700+ csv files with 2 columns each. The number and labels of the rows are the same in every file. The files are named in a specific format. For example:

  • Species_1_OrderA_1.csv
  • Species_1_OrderA_2.csv
  • Species_1_OrderA_3.csv
  • Species_10_OrderB_1.csv
  • Species_10_OrderB_2.csv

Each imported dataframe is formatted like so:

               TreeID  Species_1_OrderA_2
0       Bu2_1201_1992                       0
1       Bu3_1201_1998                       0
2       Bu4_1201_2000                       0
3       Bu5_1201_2002                       0
4       Bu6_1201_2004                       0
..                ...                     ...
307  Fi141_16101_2004                       0
308  Fi142_16101_2006                       0
309  Fi143_16101_2008                       0
310  Fi144_16101_2010                       0
311  Fi147_16101_2015                       0

I would like to join the files that correspond to the same species, based on the first column. So, in the end, I would get the files Species_1_OrderA.csv and Species_10_OrderB.csv. Please note that not all species necessarily have the same number of files.

This is what I have tried so far.

import os
import glob
import pandas as pd

# Importing csv files from directory
path = '.'
extension = 'csv'
os.chdir(path)
files = glob.glob('*.{}'.format(extension))

# Create a dictionary to loop through each file to read its contents and create a dataframe
file_dict = {}

for file in files:
    key = file
    df = pd.read_csv(file)

    file_dict[key] = df

# Extract the name of each dataframe, convert to a list and extract the relevant
# information (before the 3rd underscore). Compare each of these values to the next and
# if they are the same, append them to a list. This list (in my head, at least) will help
# me merge them using pandas.concat

keys_list = list(file_dict.keys())
group = ''

for line in keys_list:
    type = "_".join(line.split("_")[:3])
    for i in range(len(type) - 1):
        if type[i] == type[i+1]:
            group.append(line[keys_list])
            print(group)

However, the last bit is not even working, and at this point I am not sure this is the best way to deal with my problem. Any pointers on how to solve this would be really appreciated.
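For reference, the grouping described in the comment above (compare prefixes and collect matching names) can be sketched with itertools.groupby over the sorted keys. This is only an illustration of the intended logic, untested against the real files; it reuses file_dict from the snippet above:

from itertools import groupby

# Illustrative sketch: sort the file names, then group consecutive
# entries that share the part before the 3rd underscore
keys_list = sorted(file_dict.keys())

def species_prefix(name):
    return "_".join(name.split("_")[:3])

groups = {k: list(v) for k, v in groupby(keys_list, key=species_prefix)}
# e.g. groups['Species_1_OrderA'] -> ['Species_1_OrderA_1.csv', 'Species_1_OrderA_2.csv', ...]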

--- EDIT: This is the expected output for the files per species. Ideally, I would remove the rows that have zeros in them, but that can easily be done with awk.

TreeID,Species_1_OrderA_0,Species_1_OrderA_1,Species_1_OrderA_2
Bu2_1201_1992,0,0,0
Bu3_1201_1998,0,0,0
Bu4_1201_2000,0,0,0
Bu5_1201_2002,0,0,0
Bu6_1201_2004,0,0,0
Bu7_1201_2006,0,0,0
Bu8_1201_2008,0,0,0
Bu9_1201_2010,0,0,0
Bu10_1201_2012,0,0,0
Bu11_1201_2014,0,0,0
Bu14_1201_2016,0,0,0
Bu16_1201_2018,0,0,0
Bu18_3103_1989,0,0,0
Bu22_3103_1999,0,0,0
Bu23_3103_2001,0,0,0
Bu24_3103_2003,0,0,0
...
Fi141_16101_2004,0,0,10
Fi142_16101_2006,0,4,0
Fi143_16101_2008,0,0,0
Fi144_16101_2010,2,0,0
Fi147_16101_2015,0,7,0

If your goal is to concatenate all the csvs for each species-order into a consolidated csv, this is one approach. I haven't tested it, so there might be a few errors. The idea is to first use glob, as you're doing, to build a dict of file paths so that all the file paths of the same species-order are grouped together. Then, for each species-order, read all the data into a single table in memory and write it out to a consolidated file.

import pandas as pd
import glob
   
#Create a dictionary keyed by species_order, valued by a list of files
#i.e. file_paths_by_species_order['Species_10_OrderB'] = ['Species_10_OrderB_1.csv', 'Species_10_OrderB_2.csv']
   
file_paths_by_species_order = {}
   
for file_path in glob.glob('*.csv'):
    # join the parts back into a single string key -- the list returned
    # by split() is unhashable and cannot be used as a dict key
    species_order = "_".join(file_path.split("_")[:3])
    if species_order not in file_paths_by_species_order:
        file_paths_by_species_order[species_order] = [file_path]
    else:   
        file_paths_by_species_order[species_order].append(file_path)
   
#For each species_order, concat all files and save the info into a new csv 
for species_order, file_paths in file_paths_by_species_order.items():
    df = pd.concat((pd.read_csv(file_path) for file_path in file_paths), ignore_index=True)
    df.to_csv('consolidated_{}.csv'.format(species_order), index=False)

There are definitely improvements that can be made, such as using collections.defaultdict, or writing one file at a time out to the consolidated file instead of reading them all into memory.
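A minimal sketch of both improvements, assuming the same file-naming scheme as above (untested):

import glob
from collections import defaultdict

import pandas as pd

# defaultdict removes the need for the explicit membership check
file_paths_by_species_order = defaultdict(list)
for file_path in glob.glob('*.csv'):
    species_order = "_".join(file_path.split("_")[:3])
    file_paths_by_species_order[species_order].append(file_path)

# Stream each file straight into the consolidated csv so that only one
# dataframe is held in memory at a time
for species_order, file_paths in file_paths_by_species_order.items():
    out_path = 'consolidated_{}.csv'.format(species_order)
    for i, file_path in enumerate(file_paths):
        df = pd.read_csv(file_path)
        # write (and truncate) on the first file, append afterwards;
        # the header row is only written once
        df.to_csv(out_path, mode='w' if i == 0 else 'a',
                  header=(i == 0), index=False)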

Try it like this:

import os
import pandas as pd

path = "C:/Users/username"
files = [file for file in os.listdir(path) if file.endswith(".csv")]

dfs = dict()
for file in files:
    #everything before the final _ identifies the species and order
    species = file.rsplit("_", maxsplit=1)[0]

    #read the csv to a dataframe
    df = pd.read_csv(os.path.join(path, file))
    
    #if you don't have a df for a species, create a new key
    if species not in dfs:
        dfs[species] = df
    #else, merge current df to existing df on the TreeID
    else:
        dfs[species] = pd.merge(dfs[species], df, on="TreeID", how="outer")

#write all dfs to their own csv files
for key in dfs:
    dfs[key].to_csv(f"{key}.csv", index=False)
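Since the question states that every file has the same rows and labels, the outer merge simply places each file's count column next to TreeID; how="outer" only matters if a file were missing some trees, in which case the gaps would be filled with NaN.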
