
Is there a faster way of doing full row comparisons on a small pandas dataframe than using loops and iloc?

I have a large number of small pandas dataframes on which I have to do full row comparisons and write the results into new dataframes which will get concatenated later.

For the row comparisons I'm doing a double loop over the length of the dataframe using iloc. I don't know if there is a faster way; the way I'm doing it seems really slow:

# -*- coding: utf-8 -*-
import pandas as pd
import time

def processFrames1(DF):
  LL = []
  for i in range(len(DF)):
    for j in range(len(DF)):
      if DF.iloc[i][0] != DF.iloc[j][0]:
        T = {u'T1':DF.iloc[i][0]}
        T[u'T2'] = DF.iloc[j][0]
        T[u'T3'] = 1
        if DF.iloc[i][2] > DF.iloc[j][2]:
          T[u'T4'] = 1
        elif DF.iloc[i][2] < DF.iloc[j][2]:
          T[u'T4'] = -1
        else:
          T[u'T4'] = 0
        if DF.iloc[i][1] < DF.iloc[j][1]:
          T[u'T5'] = 1
        else:
          T[u'T5'] = -1
        LL.append(T)
  return pd.DataFrame.from_dict(LL)

D = [{'A':'XA','B':1,'C':1.4}\
    ,{'A':'RT','B':2,'C':10}\
    ,{'A':'HO','B':3,'C':34}\
    ,{'A':'NJ','B':4,'C':0.41}\
    ,{'A':'WF','B':5,'C':114}\
    ,{'A':'DV','B':6,'C':74}\
    ,{'A':'KP','B':7,'C':2.4}]

P = pd.DataFrame.from_dict(D)
time0 = time.time()
for i in range(10):
  X = processFrames1(P)
print(time.time()-time0)
print(X)

Yielding the result:

0.836999893188
    T1  T2  T3  T4  T5
0   XA  RT   1  -1   1
1   XA  HO   1  -1   1
2   XA  NJ   1   1   1
3   XA  WF   1  -1   1
4   XA  DV   1  -1   1
5   XA  KP   1  -1   1
6   RT  XA   1   1  -1
7   RT  HO   1  -1   1
8   RT  NJ   1   1   1
9   RT  WF   1  -1   1
10  RT  DV   1  -1   1
11  RT  KP   1   1   1
12  HO  XA   1   1  -1
13  HO  RT   1   1  -1
14  HO  NJ   1   1   1
15  HO  WF   1  -1   1
16  HO  DV   1  -1   1
17  HO  KP   1   1   1
18  NJ  XA   1  -1  -1
19  NJ  RT   1  -1  -1
20  NJ  HO   1  -1  -1
21  NJ  WF   1  -1   1
22  NJ  DV   1  -1   1
23  NJ  KP   1  -1   1
24  WF  XA   1   1  -1
25  WF  RT   1   1  -1
26  WF  HO   1   1  -1
27  WF  NJ   1   1  -1
28  WF  DV   1   1   1
29  WF  KP   1   1   1
30  DV  XA   1   1  -1
31  DV  RT   1   1  -1
32  DV  HO   1   1  -1
33  DV  NJ   1   1  -1
34  DV  WF   1  -1  -1
35  DV  KP   1   1   1
36  KP  XA   1   1  -1
37  KP  RT   1  -1  -1
38  KP  HO   1  -1  -1
39  KP  NJ   1   1  -1
40  KP  WF   1  -1  -1
41  KP  DV   1  -1  -1

Processing this representative dataframe just 10 times takes almost a full second, and I will have to process over a million of them.

Is there a faster way to do those full row comparisons?

EDIT1: After some modifications I could make Javier's code create the correct output:

def compare_values1(x,y):
  if x>y: return 1
  elif x<y: return -1
  else: return 0

def compare_values2(x,y):
  if x<y: return 1
  elif x>y: return -1
  else: return 0

def processFrames(P):
  D = P.to_dict(orient='records')
  d_A2B = {d["A"]:d["B"] for d in D}
  d_A2C = {d["A"]:d["C"] for d in D}
  keys = list(d_A2B.keys())
  LL = []
  for i in range(len(keys)):
    k_i = keys[i]
    for j in range(len(keys)):
      if i != j:
        k_j = keys[j]
        LL.append([k_i,k_j,1,compare_values1(\
         d_A2C[k_i],d_A2C[k_j]),compare_values2(d_A2B[k_i],d_A2B[k_j])])
  return pd.DataFrame(LL,columns=['T1','T2','T3','T4','T5'])

This function works about 60 times faster.

EDIT2: Final verdict of the four possibilities:

=============== With the small dataframe: ===============

My original function:

%timeit processFrames1(P)
10 loops, best of 3: 85.3 ms per loop

jezrael's solution:

%timeit processFrames2(P)
1 loop, best of 3: 286 ms per loop

Javier's modified code:

%timeit processFrames3(P)
1000 loops, best of 3: 1.24 ms per loop

Divakar's method:

%timeit processFrames4(P)
1000 loops, best of 3: 1.98 ms per loop

=============== For the large dataframe: ===============

My original function:

%timeit processFrames1(P)
1 loop, best of 3: 2.22 s per loop

jezrael's solution:

%timeit processFrames2(P)
1 loop, best of 3: 295 ms per loop

Javier's modified code:

%timeit processFrames3(P)
100 loops, best of 3: 3.13 ms per loop

Divakar's method:

%timeit processFrames4(P)
100 loops, best of 3: 2.19 ms per loop

So it's pretty much a tie between the last two. Thanks to everyone for helping, that speedup was much needed.

EDIT 3:

Divakar has edited their code and this is the new result:

Small dataframe:

%timeit processFrames(P)
1000 loops, best of 3: 492 µs per loop

Large dataframe:

%timeit processFrames(P)
1000 loops, best of 3: 844 µs per loop

Very impressive and the absolute winner.

EDIT 4:

Divakar's method, slightly modified, as I am now using it in my program:

import numpy as np

def processFrames(P):
  N = len(P)
  N_range = np.arange(N)
  valid_mask = (N_range[:,None] != N_range).ravel()
  colB = P.B.values
  colC = P.C.values
  T2_arr = np.ones(N*N,dtype=int)
  T4_arr = np.zeros((N,N),dtype=int)
  T4_arr[colC[:,None] > colC] = 1
  T4_arr[colC[:,None] < colC] = -1
  T5_arr = np.zeros((N,N),dtype=int)
  T5_arr[colB[:,None] > colB] = -1
  T5_arr[colB[:,None] < colB] = 1
  strings = P.A.values
  c0,c1 = np.meshgrid(strings,strings)
  arr = np.column_stack((c1.ravel(), c0.ravel(), T2_arr,T4_arr.ravel(),\
                         T5_arr.ravel()))[valid_mask]
  return arr[:,0],arr[:,1],arr[:,2],arr[:,3],arr[:,4]

I'm creating a dictionary with five keys, each holding a list that represents one of the five resulting columns; I extend those lists with the results, and once I'm done I build a pandas dataframe from the dictionary. That's much faster than concatenating to an existing dataframe.
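A minimal sketch of that accumulation pattern (the `collect` helper and sample rows are illustrative, not from the original program):

```python
import pandas as pd

# Accumulate each result column in a plain Python list, then build the
# DataFrame once at the end instead of concatenating repeatedly.
results = {'T1': [], 'T2': [], 'T3': [], 'T4': [], 'T5': []}

def collect(t1, t2, t3, t4, t5):
    # appending to plain lists is amortized O(1)
    results['T1'].append(t1)
    results['T2'].append(t2)
    results['T3'].append(t3)
    results['T4'].append(t4)
    results['T5'].append(t5)

collect('XA', 'RT', 1, -1, 1)
collect('RT', 'XA', 1, 1, -1)

final = pd.DataFrame(results)  # single DataFrame construction at the end
```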

PS: The one thing I learned from this: Never use iloc if you can avoid it in any way.
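The reason: every per-row `.iloc` lookup constructs an intermediate pandas object, while indexing into the underlying NumPy array is a plain memory read. A rough, machine-dependent illustration (the toy dataframe here is made up for the comparison):

```python
import timeit
import pandas as pd

df = pd.DataFrame({'A': list('abcdefg'), 'B': range(7)})

# per-element access through iloc: builds a Series for every row
t_iloc = timeit.timeit(lambda: [df.iloc[i]['A'] for i in range(len(df))],
                       number=200)

# per-element access through the raw NumPy array
arr = df['A'].values
t_arr = timeit.timeit(lambda: [arr[i] for i in range(len(arr))],
                      number=200)

print(t_iloc > t_arr)  # iloc is typically orders of magnitude slower
```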

Here's an approach using NumPy broadcasting -

import numpy as np

def processFrames1_broadcasting(P):
    N = len(P)
    N_range = np.arange(N)
    valid_mask = (N_range[:,None] != N_range).ravel()

    colB = P.B.values
    colC = P.C.values

    T2_arr = np.ones(N*N,dtype=int)

    T4_arr = np.zeros((N,N),dtype=int)
    T4_arr[colC[:,None] > colC] = 1
    T4_arr[colC[:,None] < colC] = -1

    T5_arr = np.where(colB[:,None] < colB,1,-1)

    strings = P.A.values
    c0,c1 = np.meshgrid(strings,strings)


    arr = np.column_stack((c1.ravel(), c0.ravel(), T2_arr,T4_arr.ravel(),\
                            T5_arr.ravel()))[valid_mask]

    df = pd.DataFrame(arr, columns=['T1','T2','T3','T4','T5'])

    return df
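The core of this approach is that comparing a column against itself with `[:, None]` broadcasts into an N×N matrix of all pairwise comparisons in one vectorized step. For example, with the first three `C` values from the question:

```python
import numpy as np

colC = np.array([1.4, 10.0, 34.0])

# shape (3, 1) compared against shape (3,) broadcasts to shape (3, 3);
# entry (i, j) answers "is colC[i] > colC[j]?"
pairwise = (colC[:, None] > colC).astype(int)
print(pairwise.tolist())  # [[0, 0, 0], [1, 0, 0], [1, 1, 0]]
```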

Runtime test -

For the sample posted in the question, the runtimes I got at my end are -

In [337]: %timeit processFrames1(P)
10 loops, best of 3: 93.1 ms per loop

In [338]: %timeit processFrames1_jezrael(P) #@jezrael's soln
10 loops, best of 3: 74.8 ms per loop

In [339]: %timeit processFrames1_broadcasting(P)
1000 loops, best of 3: 561 µs per loop

Don't use pandas for this. Use plain dictionaries:

def compare_values(x,y):
  if x>y: return 1
  elif x<y: return -1
  else: return 0

def processFrames(P):
  d_A2B = dict(zip(P["A"],P["B"]))
  d_A2C = dict(zip(P["A"],P["C"]))

  keys = list(d_A2B.keys())
  LL = []
  for i in range(len(keys)):
    k_i = keys[i]
    for j in range(i+1,len(keys)):
        k_j = keys[j]
        c1 = compare_values(d_A2C[k_i],d_A2C[k_j])
        c2 = -compare_values(d_A2B[k_i],d_A2B[k_j])
        LL.append([k_i,k_j,1,c1,c2])
        LL.append([k_j,k_i,1,-c1,-c2])
  return pd.DataFrame(LL,columns=['T1','T2','T3','T4','T5'])
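The key saving in this version is antisymmetry: each unordered pair is compared once, and the reversed row is emitted by flipping signs, halving the number of comparisons. A sketch of the pair counting (with a made-up subset of the keys):

```python
from itertools import combinations

keys = ['XA', 'RT', 'HO', 'NJ']

# a full double loop visits n*(n-1) ordered pairs...
ordered = [(a, b) for a in keys for b in keys if a != b]

# ...but only n*(n-1)//2 unordered pairs need actual comparisons;
# each reversed row is derived by negating the comparison results
unordered = list(combinations(keys, 2))

print(len(ordered), len(unordered))  # 12 6
```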

You can use:

#cross join
P['one'] = 1
df = pd.merge(P,P, on='one')
df = df.rename(columns={'A_x':'T1','A_y':'T2'})

#remove duplicates
df = df[df.T1 != df.T2]
df.reset_index(drop=True, inplace=True)

#creates new columns
df['T3'] = 1
df['T4'] = (df.C_x > df.C_y).astype(int).replace({0:-1})
df['T5'] = (df.B_x < df.B_y).astype(int).replace({0:-1})
#remove other columns by subset
df = df[['T1','T2','T3','T4','T5']]
print (df)
    T1  T2  T3  T4  T5
0   XA  RT   1  -1   1
1   XA  HO   1  -1   1
2   XA  NJ   1   1   1
3   XA  WF   1  -1   1
4   XA  DV   1  -1   1
5   XA  KP   1  -1   1
6   RT  XA   1   1  -1
7   RT  HO   1  -1   1
8   RT  NJ   1   1   1
9   RT  WF   1  -1   1
10  RT  DV   1  -1   1
11  RT  KP   1   1   1
12  HO  XA   1   1  -1
13  HO  RT   1   1  -1
14  HO  NJ   1   1   1
15  HO  WF   1  -1   1
16  HO  DV   1  -1   1
17  HO  KP   1   1   1
18  NJ  XA   1  -1  -1
19  NJ  RT   1  -1  -1
20  NJ  HO   1  -1  -1
21  NJ  WF   1  -1   1
22  NJ  DV   1  -1   1
23  NJ  KP   1  -1   1
24  WF  XA   1   1  -1
25  WF  RT   1   1  -1
26  WF  HO   1   1  -1
27  WF  NJ   1   1  -1
28  WF  DV   1   1   1
29  WF  KP   1   1   1
30  DV  XA   1   1  -1
31  DV  RT   1   1  -1
32  DV  HO   1   1  -1
33  DV  NJ   1   1  -1
34  DV  WF   1  -1  -1
35  DV  KP   1   1   1
36  KP  XA   1   1  -1
37  KP  RT   1  -1  -1
38  KP  HO   1  -1  -1
39  KP  NJ   1   1  -1
40  KP  WF   1  -1  -1
41  KP  DV   1  -1  -1
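As a side note (not part of the original answer): pandas 1.2+ supports a cross join directly via `how='cross'`, so the helper `one` column isn't needed. A minimal sketch on a two-row subset of the sample data:

```python
import pandas as pd

P = pd.DataFrame([{'A': 'XA', 'B': 1, 'C': 1.4},
                  {'A': 'RT', 'B': 2, 'C': 10}])

# requires pandas >= 1.2
df = pd.merge(P, P, how='cross', suffixes=('_x', '_y'))

# drop the self-pairs, as in the answer above
df = df[df.A_x != df.A_y].reset_index(drop=True)
print(list(zip(df.A_x, df.A_y)))  # [('XA', 'RT'), ('RT', 'XA')]
```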

TIMINGS:

In [339]: %timeit processFrames1(P)
10 loops, best of 3: 44.2 ms per loop

In [340]: %timeit jez(P1)
10 loops, best of 3: 43.3 ms per loop

If I use your timing harness:

time0 = time.time()
for i in range(10):
  X = processFrames1(P)
print (time.time()-time0)
0.4760475158691406

time0 = time.time()
for i in range(10):
  X = jez(P1)
print (time.time()-time0)
0.4400441646575928

Code for testing:

P1 = P.copy()

def jez(P):
    P['one'] = 1
    df = pd.merge(P,P, on='one')
    df = df.rename(columns={'A_x':'T1','A_y':'T2'})

    df = df[df.T1 != df.T2]
    df.reset_index(drop=True, inplace=True)
    df['T3'] = 1
    df['T4'] = (df.C_x > df.C_y).astype(int).replace({0:-1})
    df['T5'] = (df.B_x < df.B_y).astype(int).replace({0:-1})
    df = df[['T1','T2','T3','T4','T5']]
    return (df)

def processFrames1(DF):
  LL = []
  for i in range(len(DF)):
    for j in range(len(DF)):
      if DF.iloc[i][0] != DF.iloc[j][0]:
        T = {u'T1':DF.iloc[i][0]}
        T[u'T2'] = DF.iloc[j][0]
        T[u'T3'] = 1
        if DF.iloc[i][2] > DF.iloc[j][2]:
          T[u'T4'] = 1
        elif DF.iloc[i][2] < DF.iloc[j][2]:
          T[u'T4'] = -1
        else:
          T[u'T4'] = 0
        if DF.iloc[i][1] < DF.iloc[j][1]:
          T[u'T5'] = 1
        else:
          T[u'T5'] = -1
        LL.append(T)
  return pd.DataFrame.from_dict(LL)

EDIT1:

I tried a test with a 5 times bigger dataframe:

D = [{'A':'XA','B':1,'C':1.4}\
    ,{'A':'RB','B':2,'C':10}\
    ,{'A':'HC','B':3,'C':34}\
    ,{'A':'ND','B':4,'C':0.41}\
    ,{'A':'WE','B':5,'C':114}\
    ,{'A':'DF','B':6,'C':74}\
    ,{'A':'KG','B':7,'C':2.4}\
    ,{'A':'XH','B':1,'C':1.4}\
    ,{'A':'RI','B':2,'C':10}\
    ,{'A':'HJ','B':3,'C':34}\
    ,{'A':'NK','B':4,'C':0.41}\
    ,{'A':'WL','B':5,'C':114}\
    ,{'A':'DM','B':6,'C':74}\
    ,{'A':'KN','B':7,'C':2.4}\
    ,{'A':'XO','B':1,'C':1.4}\
    ,{'A':'RP','B':2,'C':10}\
    ,{'A':'HQ','B':3,'C':34}\
    ,{'A':'NR','B':4,'C':0.41}\
    ,{'A':'WS','B':5,'C':114}\
    ,{'A':'DT','B':6,'C':74}\
    ,{'A':'KU','B':7,'C':2.4}\
    ,{'A':'XV','B':1,'C':1.4}\
    ,{'A':'RW','B':2,'C':10}\
    ,{'A':'HX','B':3,'C':34}\
    ,{'A':'NY','B':4,'C':0.41}\
    ,{'A':'WZ','B':5,'C':114}\
    ,{'A':'D1','B':6,'C':74}\
    ,{'A':'K2','B':7,'C':2.4}\
    ,{'A':'X3','B':1,'C':1.4}\
    ,{'A':'R4','B':2,'C':10}\
    ,{'A':'H5','B':3,'C':34}\
    ,{'A':'N6','B':4,'C':0.41}\
    ,{'A':'W7','B':5,'C':114}\
    ,{'A':'D8','B':6,'C':74}\
    ,{'A':'K9','B':7,'C':2.4}    ]

P = pd.DataFrame.from_dict(D)
P1 = P.copy()

time0 = time.time()
for i in range(10):
  X = processFrames1(P)
print (time.time()-time0)
12.230222940444946

time0 = time.time()
for i in range(10):
  X = jez(P1)
print (time.time()-time0)
0.4440445899963379
In [351]: %timeit processFrames1(P)
1 loop, best of 3: 1.21 s per loop

In [352]: %timeit jez(P1)
10 loops, best of 3: 43.7 ms per loop
