
Pandas: Count time interval intersections over a group by

I have a dataframe of the following form:

import pandas as pd

df = pd.DataFrame({'id':[1,2,3,4,5],
          'group':['A','A','A','B','B'],
          'start':['2012-08-19','2012-08-22','2013-08-19','2012-08-19','2013-08-19'],
          'end':['2012-08-28','2013-09-13','2013-08-21','2012-12-19','2014-08-19']})

Out[1]:
   id group       start         end
0   1     A  2012-08-19  2012-08-28
1   2     A  2012-08-22  2013-09-13
2   3     A  2013-08-19  2013-08-21
3   4     B  2012-08-19  2012-12-19
4   5     B  2013-08-19  2014-08-19

For a given row in my dataframe I'd like to count the number of items in the same group that have an overlapping time interval.

For example, in group A, id 2 ranges from 22 August 2012 to 13 September 2013 and hence overlaps with both id 1 (19 August 2012 to 28 August 2012) and id 3 (19 August 2013 to 21 August 2013), for a count of 2.
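In other words, two closed intervals overlap exactly when max(start_i, start_j) <= min(end_i, end_j). A quick check of the group A pairs (using pd.Timestamp so the comparisons work on dates):

import pandas as pd

s1, e1 = pd.Timestamp('2012-08-19'), pd.Timestamp('2012-08-28')  # id 1
s2, e2 = pd.Timestamp('2012-08-22'), pd.Timestamp('2013-09-13')  # id 2
s3, e3 = pd.Timestamp('2013-08-19'), pd.Timestamp('2013-08-21')  # id 3

print(max(s2, s1) <= min(e2, e1))  # True  -> id 2 overlaps id 1
print(max(s2, s3) <= min(e2, e3))  # True  -> id 2 overlaps id 3
print(max(s1, s3) <= min(e1, e3))  # False -> id 1 and id 3 do not overlap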

Conversely, there is no overlap between the items in group B.

So for my example dataframe above I'd like to produce something like:

Out[2]:
   id group       start         end  count
0   1     A  2012-08-19  2012-08-28      1
1   2     A  2012-08-22  2013-09-13      2
2   3     A  2013-08-19  2013-08-21      1
3   4     B  2012-08-19  2012-12-19      0
4   5     B  2013-08-19  2014-08-19      0

I could "brute-force" this, but I'd like to know if there is a more efficient Pandas way of getting this done.

Thanks in advance for your help.

So, I would see how brute force fares... if it's slow I'd cythonize this logic. It's not so bad: whilst it's O(M^2) in group size, if there are lots of small groups it might not be too slow.

In [11]: import numpy as np
    ...: def interval_overlaps(a, b):
    ...:     # closed intervals overlap when min(end) - max(start) is non-negative (start/end must be datetimes)
    ...:     return min(a["end"], b["end"]) - max(a["start"], b["start"]) > np.timedelta64(-1)


In [12]: def count_overlaps(df1):
    ...:     # count the overlapping pairs within one group (each unordered pair is checked once)
    ...:     return sum(interval_overlaps(df1.iloc[i], df1.iloc[j]) for i in range(len(df1) - 1) for j in range(i, len(df1)) if i < j)

In [13]: df.groupby("group").apply(count_overlaps)
Out[13]:
group
A    2
B    0
dtype: int64

The former is a tweak of this interval overlap function.


Edit: Upon re-reading, it looks like count_overlaps should be per-row rather than per-group, so the agg function should be more like:

In [21]: def count_overlaps(df1):
    ...:     # for each row, count the rows in the group that overlap it (the -1 removes the row itself)
    ...:     return pd.Series([df1.apply(lambda x: interval_overlaps(x, df1.iloc[i]), axis=1).sum() - 1 for i in range(len(df1))], df1.index)

In [22]: df.groupby("group").apply(count_overlaps)
Out[22]:
group
A      0    1
       1    2
       2    1
B      3    0
       4    0
dtype: int64

In [22]: df["count"] = df.groupby("group").apply(count_overlaps).values

In [23]: df
Out[23]:
         end group  id      start  count
0 2012-08-28     A   1 2012-08-19      1
1 2013-09-13     A   2 2012-08-22      2
2 2013-08-21     A   3 2013-08-19      1
3 2012-12-19     B   4 2012-08-19      0
4 2014-08-19     B   5 2013-08-19      0
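
Assigning with .values relies on the grouped result coming back in the original row order; as a sketch, the same assignment can instead align on the index by dropping the group level from the result:

counts = df.groupby("group").apply(count_overlaps)
df["count"] = counts.reset_index(level=0, drop=True)  # drop the 'group' level so it aligns on the row index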

"brute-force"ish but gets the job done: “蛮力”是的,但完成工作:

First convert the date strings to dates, then compare each row against the df with an apply.

df.start = pd.to_datetime(df.start)
df.end = pd.to_datetime(df.end)

df['count'] = df.apply(lambda row: len(df[ ( ( (row.start <= df.start) & (df.start <= row.end) ) \
                                            | ( (df.start <= row.start) & (row.start <= df.end) ) )
                           & (row.id != df.id) & (row.group == df.group) ]),axis=1)

import datetime

def ol(a, b):
    # count how many of the (start, end) tuples in b overlap the closed interval a,
    # including a itself (hence the -1 below); start/end must already be datetimes
    l=[]
    for x in b:
        l.append(max(0, int(min(a[1], x[1]) - max(a[0], x[0])>=datetime.timedelta(minutes=0))))
    return sum(l)


# pair up each row's start/end, then map the group's full list of intervals onto every row
df['New']=list(zip(df.start,df.end))
df['New2']=df.group.map(df.groupby('group').New.apply(list))
df.apply(lambda x : ol(x.New,x.New2),axis=1)-1

Out[495]: 
0    1
1    2
2    1
3    0
4    0
dtype: int64

Timings

#My method 
df.apply(lambda x : ol(x.New,x.New2),axis=1)-1

100 loops, best of 3: 5.39 ms per loop

#@Andy's Method 
df.groupby("group").apply(count_overlaps)    
10 loops, best of 3: 23.5 ms per loop

#@Nathan's Method

df.apply(lambda row: len(df[ ( ( (row.start <= df.start) & (df.start <= row.end) ) \
                       | ( (df.start <= row.start) & (row.start <= df.end) ) )
                       & (row.id != df.id) & (row.group == df.group) ]),axis=1)

10 loops, best of 3: 25.8 ms per loop
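
If the O(M^2) pairwise scan per group ever becomes the bottleneck, a sort-based sketch (not benchmarked above, and assuming start/end have already been converted with pd.to_datetime) avoids it: interval j fails to overlap interval i exactly when start_j > end_i or end_j < start_i, so each row's count is the group size minus those two tallies minus 1 for the row itself.

import numpy as np

df['count'] = 0
for _, g in df.groupby('group'):
    starts = np.sort(g['start'].values)
    ends = np.sort(g['end'].values)
    n = len(g)
    # intervals that start after this row ends, and intervals that end before this row starts
    later_starts = n - np.searchsorted(starts, g['end'].values, side='right')
    earlier_ends = np.searchsorted(ends, g['start'].values, side='left')
    df.loc[g.index, 'count'] = n - later_starts - earlier_ends - 1

On a frame this small the difference is noise, but the per-group work drops from quadratic pairwise comparisons to a couple of sorts and binary searches.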
