I have two CSV files, each containing two columns.
file1.csv
C2-C1 1.5183
C3-C2 1.49
C3-C1 1.4991
O4-C3 1.4104
C1-C2-C3 59.78
file2.csv
C2-C1 1.5052
C3-C2 1.505
C3-C1 1.5037
S4-C3 1.7976
C1-C2-C3 59.95
I want to print three columns in the output file. Column 1 holds the first-column labels: first the lines common to both files, then the differing lines.
Columns 2 and 3 hold the corresponding second-column values from file1.csv and file2.csv, respectively.
desired output.csv
C2-C1 1.5183 1.5052
C3-C2 1.49 1.505
C3-C1 1.4991 1.5037
C1-C2-C3 59.78 59.95
O4-C3 1.4104 -
S4-C3 - 1.7976
I tried with "itertools", but I could not find a suitable format for the differing lines.
import itertools

files = ['1.csv', '2.csv']
d = {}
for fi, f in enumerate(files):
    fh = open(f)
    for line in fh:
        sl = line.split()
        name = sl[0]
        val = float(sl[1])
        if name not in d:
            d[name] = {}
        if fi not in d[name]:
            d[name][fi] = []
        d[name][fi].append(val)
    fh.close()

for name, vals in d.items():
    if len(vals) == len(files):
        for var in itertools.product(*vals.values()):
            if max(var) - min(var) <= 20:
                out1 = '{}\t{}'.format(name, "\t".join(map(str, var)))
                print(out1)
                break

for name, vals in d.items():
    if len(vals) != len(files):
        for var in itertools.product(*vals.values()):
            if max(var) - min(var) <= 20:
                out2 = '{}\t{}'.format(name, "\t".join(map(str, var)))
                print(out2)
                break
my output:
C2-C1 1.5183 1.5052
C3-C2 1.49 1.505
C3-C1 1.4991 1.5037
C1-C2-C3 59.78 59.95
O4-C3 1.4104
S4-C3 1.7976
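For completeness, the missing "-" placeholders can be produced with a plain dict instead of itertools: give every key a list of len(files) placeholders up front and overwrite the slot for each file that has the key. A minimal sketch (the two inputs are inlined as strings here for illustration; swap in open(f) for real files):

```python
import io

# inline copies of the two files for illustration; use open(f) in practice
files = [
    "C2-C1 1.5183\nC3-C2 1.49\nC3-C1 1.4991\nO4-C3 1.4104\nC1-C2-C3 59.78\n",
    "C2-C1 1.5052\nC3-C2 1.505\nC3-C1 1.5037\nS4-C3 1.7976\nC1-C2-C3 59.95\n",
]

d = {}
for fi, text in enumerate(files):
    for line in io.StringIO(text):
        name, val = line.split()
        # one "-" slot per file; fill in the slot for this file
        d.setdefault(name, ["-"] * len(files))[fi] = val

# shared lines first, then the ones missing from some file
rows = sorted(d.items(), key=lambda kv: kv[1].count("-"))
for name, vals in rows:
    print(name, *vals, sep="\t")
```

Because `sorted` is stable, the shared lines keep their original order and the partial lines follow, which matches the desired output.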
The following awk may help here; it also takes care of duplicate items in the input file(s).
awk '
FNR==NR{
  a[$1]=$2;
  next
}
NF{
  printf("%s %s %s\n",$1,$1 in a?a[$1]:"-",$2);
  b[$1]=$1 in a?$1:""
}
END{
  for(i in a){
    if(!b[i] || b[i]==""){ print i,a[i],"-" }
  }
}' file1.csv file2.csv | column -t
A pure Python solution that works with as many files as needed (it adds a new column for each file and sorts rows by the number of files sharing the same first-column value). As a bonus, it uses proper CSV parsing, so it can handle other CSV formats with little to no alteration:
import csv

files = ["1.csv", "2.csv"]  # as many files as you want
results = []   # a store for our final result
line_map = {}  # a map for quick update lookup
for i, f in enumerate(files):  # enumerate the file list and iterate over it
    with open(f, newline="") as f_in:  # open(f, "rb") on Python 2.x
        reader = csv.reader(f_in, delimiter=" ")  # proper CSV reader, assumed space delimiter
        for row in reader:  # iterate over the current CSV line by line
            row_id = row[0]  # extract the first column for easier access
            if row_id not in line_map:  # a column value encountered for the first time...
                line_map[row_id] = [row_id] + ["-"] * len(files)  # create a placeholder list
                results.append(line_map[row_id])  # and add it to the results
            line_map[row_id][i + 1] = row[1]  # save the value in its place in the results list
# now we need to bracket the results in order of number of values before writing;
# the easiest way is to just sort based on the amount of "-" placeholders
results = sorted(results, key=lambda x: x.count("-"))
Now, if you just want to print it:
for r in results:
    print("\t".join(r))
# C2-C1 1.5183 1.5052
# C3-C2 1.49 1.505
# C3-C1 1.4991 1.5037
# C1-C2-C3 59.78 59.95
# O4-C3 1.4104 -
# S4-C3 - 1.7976
Or if you want to actually save it to a properly formatted CSV file:
with open("output.csv", "w", newline="") as f:  # open("output.csv", "wb") on Python 2.x
    writer = csv.writer(f, delimiter="\t")  # a proper CSV writer, tab used as a delimiter
    writer.writerows(results)
A GNU awk solution using 2D arrays, ARGIND, and column -t for pretty printing. It supports more than two files:
$ awk '
{ a[$1][ARGIND]=$2 }                                    # hash to 2d array
END {
    for(i in a) {                                       # iterate all keys in a
        printf "%s", i                                  # output key
        for(j=1;j<=ARGIND;j++)                          # iterate all files
            printf "%s%s", OFS, (a[i][j]==""?"-":a[i][j])  # output value or "-"
        print ""                                        # finish with a newline
    }
}' file1 file2 file1 file2 | column -t                  # pretty print
C1-C2-C3 59.78 59.95 59.78 59.95
O4-C3 1.4104 - 1.4104 -
S4-C3 - 1.7976 - 1.7976
C3-C1 1.4991 1.5037 1.4991 1.5037
C3-C2 1.49 1.505 1.49 1.505
C2-C1 1.5183 1.5052 1.5183 1.5052
$ cat tst.awk
NR==FNR {
    file2[$1] = $2
    next
}
{
    print $0, ($1 in file2 ? file2[$1] : "-")
    delete file2[$1]
}
END {
    for (key in file2) {
        print key, "-", file2[key]
    }
}
$ awk -f tst.awk file2.csv file1.csv | column -t
C2-C1 1.5183 1.5052
C3-C2 1.49 1.505
C3-C1 1.4991 1.5037
O4-C3 1.4104 -
C1-C2-C3 59.78 59.95
S4-C3 - 1.7976
Awk solution:
awk 'NR == FNR{ a[$1] = $2; next }
{
    if ($1 in a) { print $1, $2, a[$1]; delete a[$1] }
    else a[$1] = $2 OFS "-"
}
END{
    for (i in a) print i, (a[i] ~ /-$/ ? a[i] : "-" OFS a[i])
}' file2.csv file1.csv | column -t
The output:
C2-C1 1.5183 1.5052
C3-C2 1.49 1.505
C3-C1 1.4991 1.5037
C1-C2-C3 59.78 59.95
O4-C3 1.4104 -
S4-C3 - 1.7976
If you don't mind using pandas, it'll make life much easier:
import pandas as pd

df1 = pd.DataFrame({'num01': [1.5183, 1.49, 1.4991, 1.4104, 59.78]},
                   index=['C2-C1', 'C3-C2', 'C3-C1', 'O4-C3', 'C1-C2-C3'])
df2 = pd.DataFrame({'num02': [1.5052, 1.505, 1.5037, 1.7976, 59.95]},
                   index=['C2-C1', 'C3-C2', 'C3-C1', 'S4-C3', 'C1-C2-C3'])
df = pd.concat([df1, df2], axis=1).fillna('-')  # replace('nan', '-') would miss real NaN floats
You can read your CSVs into pandas easily and don't have to deal with awk code.
index num01 num02
C1-C2-C3 59.78 59.95
C2-C1 1.5183 1.5052
C3-C1 1.4991 1.5037
C3-C2 1.49 1.505
O4-C3 1.4104 -
S4-C3 - 1.7976
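To build the frames from the files themselves rather than hard-coded lists, pd.read_csv should work. A sketch, assuming space-delimited files; StringIO objects stand in for the real file paths here so the snippet is self-contained:

```python
import io
import pandas as pd

# StringIO stand-ins for "file1.csv" / "file2.csv"; pass the paths in practice
file1 = io.StringIO("C2-C1 1.5183\nC3-C2 1.49\nC3-C1 1.4991\nO4-C3 1.4104\nC1-C2-C3 59.78\n")
file2 = io.StringIO("C2-C1 1.5052\nC3-C2 1.505\nC3-C1 1.5037\nS4-C3 1.7976\nC1-C2-C3 59.95\n")

df1 = pd.read_csv(file1, sep=" ", header=None, names=["key", "num01"], index_col="key")
df2 = pd.read_csv(file2, sep=" ", header=None, names=["key", "num02"], index_col="key")
df = pd.concat([df1, df2], axis=1).fillna("-")  # outer join on the index, NaN -> "-"
print(df)
```

concat with axis=1 performs an outer join on the index, so keys present in only one file get NaN in the other column, which fillna then turns into "-".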
A Python defaultdict could do the trick, provided the default value is a list of n placeholders (one per file):
import collections

files = ['1.csv', '2.csv']
d = collections.defaultdict(lambda: ['-'] * len(files))  # the factory takes no arguments
for fi, f in enumerate(files):
    with open(f) as fd:
        for line in fd:
            sl = line.split()
            name = sl[0]
            val = float(sl[1])
            d[name][fi] = val

fmt = "{:<12}" + "{:<12}" * len(files)
for k, val in d.items():
    print(fmt.format(k, *val))