import csv

def StatsUnion(filename1, filename2, filename3):
    with open(filename1) as inputfile, open(filename3, 'w', newline='') as outputfile:
        writer = csv.writer(outputfile)
        for row in csv.reader(inputfile):
            if any(field.strip() for field in row):  # skip blank rows
                writer.writerow(row)
    with open(filename2) as inputfile, open(filename3, 'a', newline='') as outputfile:
        writer = csv.writer(outputfile)
        for row in csv.reader(inputfile):
            if any(field.strip() for field in row):
                writer.writerow(row)
Here is my function, which merges two CSV files into a new one. Is there an easy way to extend it to any number of CSV files? The columns are always the same.
You can take advantage of var-args (*args) and run that code in a loop over any number of input files:
def stats_union(out_file, *args):
    with open(out_file, 'w', newline='') as outputfile:
        writer = csv.writer(outputfile)
        for in_file in args:
            with open(in_file) as inputfile:
                for row in csv.reader(inputfile):
                    if any(field.strip() for field in row):
                        writer.writerow(row)
Now you can call it with any number of input files; the only difference is that the output file always comes first. So your example becomes:
stats_union(filename3, filename1, filename2)
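As a quick sanity check, here is the same function in a self-contained demo; the file names and data below are made up for illustration:

```python
import csv
import os
import tempfile

def stats_union(out_file, *args):
    with open(out_file, 'w', newline='') as outputfile:
        writer = csv.writer(outputfile)
        for in_file in args:
            with open(in_file) as inputfile:
                for row in csv.reader(inputfile):
                    if any(field.strip() for field in row):  # skip blank rows
                        writer.writerow(row)

# Build two small demo CSV files in a temporary directory
tmp = tempfile.mkdtemp()
a = os.path.join(tmp, 'a.csv')
b = os.path.join(tmp, 'b.csv')
with open(a, 'w', newline='') as f:
    f.write('alice,1\n\n')  # note the trailing blank line, which gets skipped
with open(b, 'w', newline='') as f:
    f.write('bob,2\n')

out = os.path.join(tmp, 'merged.csv')
stats_union(out, a, b)
```

The merged file now contains the non-blank rows of both inputs, in order. One caveat: this copies rows verbatim, so if each input file has its own header row, the header will appear once per file in the output.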
You can do this simply using pandas; here's an example:
import pandas as pd

def StatsUnion(out_file, *args):
    ip = []
    for i in args:
        ip.append(pd.read_csv(i))  # read the CSV at path i and store the DataFrame in a list
    out_df = pd.concat(ip, axis=0)  # concatenate all DataFrames in the list along the rows (axis=0)
    out_df.to_csv(out_file, index=False)
Here, I am reading the CSV files from the paths provided in args (args holds the paths of the individual files) and then concatenating them.
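To see what pd.concat does here, a minimal sketch with in-memory DataFrames instead of files (the data is illustrative):

```python
import pandas as pd

# Two frames standing in for two parsed CSV files with identical columns
df1 = pd.DataFrame({'name': ['alice'], 'score': [1]})
df2 = pd.DataFrame({'name': ['bob'], 'score': [2]})

# axis=0 stacks the frames row-wise; ignore_index renumbers rows 0..n-1
out_df = pd.concat([df1, df2], axis=0, ignore_index=True)
```

Note that pd.read_csv treats the first row of each file as the header by default, so unlike the csv-module approach, this version keeps a single header row in the output even when every input file has one.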