Creating a large database from many files with pandas
I have many files (~2,000,000) generated by another program that I need to extract data from. These files share common indices but carry a different value for each method. I am not sure how to phrase this well, so here is a three-dimensional example:
[x1,y1,z1,method1]
[x1,y1,z1,method2]
[x2,y2,z2,method1]
[x2,y2,z2,method2]
Ultimately, what I would like is a pandas DataFrame that looks something like this:
x y z method1 method2 ... methodn
0 x1 y1 z1 data data data
1 x2 y2 z2 data data data
2 x3 y3 z3 NaN data data
3 x4 y4 z4 data NaN data
...
n xn yn zn data NaN NaN
There will be some holes in the methods, and the data is not aligned across files.
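Those holes fall out naturally when per-method frames are aligned on a shared index; a minimal sketch, with hypothetical values standing in for the real file contents:

```python
import pandas as pd

# Two toy per-method frames sharing an (x, y, z) index; all names and
# values here are illustrative, not taken from the original files.
m1 = pd.DataFrame(
    {"x": ["x1", "x2"], "y": ["y1", "y2"], "z": ["z1", "z2"], "method1": [1.0, 2.0]}
).set_index(["x", "y", "z"])
m2 = pd.DataFrame(
    {"x": ["x1", "x3"], "y": ["y1", "y3"], "z": ["z1", "z3"], "method2": [10.0, 30.0]}
).set_index(["x", "y", "z"])

# combine_first aligns on the union of the two MultiIndexes and leaves
# NaN wherever one method has no entry for a given (x, y, z).
out = m1.combine_first(m2)
print(out)
```

The rows present in only one frame come out with NaN in the other method's column, exactly the "holes" described above.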
The following shows the pseudocode:
file_list = glob.glob('/scratch/project/*')
method1_list = []
method2_list = []
...
methodn_list = []
# Obtain data in the correct list
for outfile in file_list:
    indices = ...  # function that obtains indices
    data = ...     # function that obtains primary data
    if method1:
        method1_list.append([indices, data])
    elif method2:
        method2_list.append([indices, data])
    ...
    else:  # methodn
        methodn_list.append([indices, data])
# Convert each list to a dataframe
method1_pd = pd.DataFrame(method1_list, columns=[indices, 'method1'])
method2_pd = pd.DataFrame(method2_list, columns=[indices, 'method2'])
...
methodn_pd = pd.DataFrame(methodn_list, columns=[indices, 'methodn'])
# Apply multi index
method1 = method1_pd.set_index(indices)
method2 = method2_pd.set_index(indices)
...
methodn = methodn_pd.set_index(indices)
# Combine data
out = method1.combine_first(method2)
out = out.combine_first(method3)
...
out = out.combine_first(methodn)
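The repetitive combine_first chain above can be collapsed with functools.reduce once the per-method frames live in a single collection; a sketch with hypothetical toy frames (the names and values are made up for illustration):

```python
from functools import reduce

import pandas as pd

# One indexed frame per method, collected in a dict instead of n variables.
frames = {
    "method1": pd.DataFrame({"idx": ["a", "b"], "method1": [1, 2]}).set_index("idx"),
    "method2": pd.DataFrame({"idx": ["b", "c"], "method2": [20, 30]}).set_index("idx"),
    "method3": pd.DataFrame({"idx": ["a", "c"], "method3": [100, 300]}).set_index("idx"),
}

# reduce applies combine_first pairwise across all frames, so adding a
# methodn+1 means adding a dict entry rather than another line of code.
out = reduce(lambda left, right: left.combine_first(right), frames.values())
print(out)
```

This keeps the same combine_first semantics as the chain above while scaling to any number of methods.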
This works really well; however, as the number of methods grows, this is becoming fairly tedious to write and seems rather unpythonic. So I have the following questions:
Something like this might work, though it depends on how your data is actually constructed; if you can provide a sample, that might help. It assumes that your indices are known (or computed as you go):
import glob
from collections import defaultdict

import pandas as pd

file_list = glob.glob('/scratch/project/*')
methods = defaultdict(list)  # note: defaultdict([]) raises TypeError; the default factory must be callable
for outfile in file_list:
    #indices = (#function that obtains indices)
    #data = (#function that obtains primary data)
    methods[method].append([indices, data])
frames = [pd.DataFrame(method_list, columns=[indices, method])
          for method, method_list in methods.items()]
# concat side by side
combine_frame = pd.concat(frames, axis=1)
# set your combined index
result = combine_frame.set_index(indices)
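A runnable toy version of this grouping approach, with hypothetical (indices, data, method) records standing in for the parsed output files:

```python
from collections import defaultdict

import pandas as pd

# Stand-in for the per-file parsing loop; these sample records are
# invented for illustration, not real program output.
records = [
    (("x1", "y1", "z1"), 1.0, "method1"),
    (("x2", "y2", "z2"), 2.0, "method1"),
    (("x1", "y1", "z1"), 10.0, "method2"),
]

methods = defaultdict(list)
for indices, data, method in records:
    methods[method].append([*indices, data])

# One frame per method, indexed by (x, y, z), then aligned side by side;
# concat with axis=1 outer-joins on the index, leaving NaN in the gaps.
frames = [
    pd.DataFrame(rows, columns=["x", "y", "z", method]).set_index(["x", "y", "z"])
    for method, rows in methods.items()
]
result = pd.concat(frames, axis=1)
print(result)
```

Setting the index on each frame before the concat is what lets pandas align the methods on (x, y, z) rather than on row position.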
Perhaps concatenate every file/frame and create a pivot table from the final DataFrame?
from io import StringIO

import pandas as pd

df1 = pd.read_csv(StringIO("""\
x,y,z,data
x1,y1,z1,1
x2,y2,z2,1
"""), sep=',')
df2 = pd.read_csv(StringIO("""\
x,y,z,data
x1,y1,z1,2
x2,y2,z2,2
"""), sep=',')
df3 = pd.read_csv(StringIO("""\
x,y,z,data
x3,y2,z2,3
"""), sep=',')
df1['method'] = 'method1'
df2['method'] = 'method2'
df3['method'] = 'method3'
df = pd.concat([df1, df2, df3])
In [17]: df.pivot_table(rows=['x', 'y', 'z'], cols='method', values='data',
... aggfunc='first')
Out[17]:
method method1 method2 method3
x y z
x1 y1 z1 1 2 NaN
x2 y2 z2 1 2 NaN
x3 y2 z2 NaN NaN 3
In [18]: df
Out[18]:
x y z data method
0 x1 y1 z1 1 method1
1 x2 y2 z2 1 method1
0 x1 y1 z1 2 method2
1 x2 y2 z2 2 method2
0 x3 y2 z2 3 method3
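The rows=/cols= keywords in the transcript above are from an older pandas; current versions spell them index= and columns=. A self-contained modern equivalent of the same pivot (the CSV data is the sample from this answer):

```python
from io import StringIO

import pandas as pd

df = pd.read_csv(StringIO("""\
x,y,z,data,method
x1,y1,z1,1,method1
x2,y2,z2,1,method1
x1,y1,z1,2,method2
x3,y2,z2,3,method3
"""))

# One column per method, indexed by (x, y, z); aggfunc='first' keeps the
# first value seen for each cell, and missing combinations become NaN.
table = df.pivot_table(index=["x", "y", "z"], columns="method",
                       values="data", aggfunc="first")
print(table)
```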