
Filtering Pandas dataframe on thousands of conditions

I currently have a list of tuples that look like this:

time_constraints = [
    ('001', '01/01/2020 10:00 AM', '01/01/2020 11:00 AM'),
    ('001', '01/03/2020 05:00 AM', '01/03/2020 06:00 AM'),
    ...
    ('999', '01/07/2020 07:00 AM', '01/07/2020 08:00 AM')
]

where:

  • each tuple contains an id , lower_bound , and upper_bound
  • none of the time frames overlap for a given id
  • len(time_constraints) can be on the order of 10^4 to 10^5.

My goal is to quickly and efficiently filter a relatively large (millions of rows) Pandas dataframe ( df ) to include only the rows that match on the id column and fall between the specified lower_bound and upper_bound times (inclusive).

My current plan is to do this:

import pandas as pd

output = []
for i, lower, upper in time_constraints:
    indices = df.loc[(df['id'] == i)
                     & (df['timestamp'] >= lower)
                     & (df['timestamp'] <= upper)].index
    output.extend(indices)

output_df = df.loc[df.index.isin(output)].copy()

However, using a for-loop isn't ideal. I was wondering if there was a better (ideally vectorized) solution using Pandas or NumPy arrays that would be faster.
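One vectorized alternative (a sketch, not taken from the answers below) is to merge the constraints onto the dataframe on id, then keep the rows whose timestamp falls inside a window. The tiny frames here are invented to mirror the question's schema:

```python
import pandas as pd

# Hypothetical data mirroring the question's schema.
df = pd.DataFrame({
    'id': ['001', '001', '002'],
    'timestamp': pd.to_datetime(['01/01/2020 10:30 AM',
                                 '01/01/2020 11:30 AM',
                                 '01/01/2020 10:15 AM']),
})
tc = pd.DataFrame(
    [('001', '01/01/2020 10:00 AM', '01/01/2020 11:00 AM'),
     ('002', '01/01/2020 10:00 AM', '01/01/2020 11:00 AM')],
    columns=['id', 'lower_bound', 'upper_bound'],
)
tc[['lower_bound', 'upper_bound']] = tc[['lower_bound', 'upper_bound']].apply(pd.to_datetime)

# One merged row per (row, window) pair sharing an id; then a single
# vectorized between() filter, inclusive on both ends by default.
merged = df.reset_index().merge(tc, on='id')
mask = merged['timestamp'].between(merged['lower_bound'], merged['upper_bound'])
output_df = df.loc[merged.loc[mask, 'index'].unique()]
```

Be aware that the merge materializes every (row, window) combination per id, so memory use grows with the number of windows each id has.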

Edited:

Here's some sample rows of df :

id  timestamp
1   01/01/2020 9:56 AM
1   01/01/2020 10:32 AM
1   01/01/2020 10:36 AM
2   01/01/2020 9:42 AM
2   01/01/2020 9:57 AM
2   01/01/2020 10:02 AM

I already answered a similar case.

To test, I used 100,000 constraints ( tc ) and 5,000,000 records ( df ). Is it what you expect?

>>> df
          id           timestamp
0        565 2020-08-16 05:40:55
1        477 2020-04-05 22:21:40
2        299 2020-02-22 04:54:34
3        108 2020-08-17 23:54:02
4        041 2020-09-10 10:01:31
...      ...                 ...
4999995  892 2020-12-27 16:16:35
4999996  373 2020-08-29 05:44:34
4999997  659 2020-05-23 20:48:15
4999998  858 2020-09-08 22:58:20
4999999  710 2020-04-10 08:03:14

[5000000 rows x 2 columns]


>>> tc
        id         lower_bound         upper_bound
0      000 2020-01-01 00:00:00 2020-01-04 14:00:00
1      000 2020-01-04 15:00:00 2020-01-08 05:00:00
2      000 2020-01-08 06:00:00 2020-01-11 20:00:00
3      000 2020-01-11 21:00:00 2020-01-15 11:00:00
4      000 2020-01-15 12:00:00 2020-01-19 02:00:00
...    ...                 ...                 ...
99995  999 2020-12-10 09:00:00 2020-12-13 23:00:00
99996  999 2020-12-14 00:00:00 2020-12-17 14:00:00
99997  999 2020-12-17 15:00:00 2020-12-21 05:00:00
99998  999 2020-12-21 06:00:00 2020-12-24 20:00:00
99999  999 2020-12-24 21:00:00 2020-12-28 11:00:00

[100000 rows x 3 columns]

# from tqdm import tqdm
from itertools import chain

# df = pd.DataFrame(data, columns=['id', 'timestamp'])
tc = pd.DataFrame(time_constraints, columns=['id', 'lower_bound', 'upper_bound'])
g1 = df.groupby('id')
g2 = tc.groupby('id')

indexes = []
# for id_ in tqdm(tc['id'].unique()):
for id_ in tc['id'].unique():
    df1 = g1.get_group(id_)
    df2 = g2.get_group(id_)

    ii = pd.IntervalIndex.from_tuples(list(zip(df2['lower_bound'], 
                                               df2['upper_bound'])),
                                      closed='both')
    indexes.append(pd.cut(df1['timestamp'], bins=ii).dropna().index)

out = df.loc[chain.from_iterable(indexes)]

Performance:

100%|█████████████████████████████████████████████████| 1000/1000 [00:17<00:00, 58.40it/s]

Output result:

>>> out
          id           timestamp
1326     000 2020-11-10 05:51:00
1685     000 2020-10-07 03:12:48
2151     000 2020-05-08 11:11:18
2246     000 2020-07-06 07:36:57
3995     000 2020-02-02 04:39:11
...      ...                 ...
4996406  999 2020-02-19 15:27:06
4996684  999 2020-02-05 11:13:56
4997408  999 2020-07-09 09:31:31
4997896  999 2020-04-10 03:26:13
4999674  999 2020-04-21 22:57:04

[4942976 rows x 2 columns]  # 57024 records filtered
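The pd.IntervalIndex / pd.cut trick in the answer above can be seen on a toy example (data invented here purely for illustration): pd.cut bins each timestamp into an interval, timestamps outside every interval become NaN, and dropna() leaves only the rows inside some window.

```python
import pandas as pd

# Toy data: one [10:00, 11:00] window, three timestamps.
ts = pd.Series(pd.to_datetime(['2020-01-01 09:56',
                               '2020-01-01 10:32',
                               '2020-01-01 12:00']))
ii = pd.IntervalIndex.from_tuples(
    [(pd.Timestamp('2020-01-01 10:00'), pd.Timestamp('2020-01-01 11:00'))],
    closed='both')

# Timestamps outside every interval map to NaN and are dropped;
# what survives is the index of the in-window rows.
kept = pd.cut(ts, bins=ii).dropna().index
```

Note that pd.cut requires the intervals to be non-overlapping, which matches the question's guarantee that windows for a given id never overlap.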

You can use boolean indexing, like so:

output_df = df[pd.Series(list(zip(df['id'],
                                  df['lower_bound'],
                                  df['upper_bound']))).isin(time_constraints)]

The zip function creates a tuple from each row's columns, which is then compared against your list of tuples. The pd.Series wrapper is used to build a Boolean mask.
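Note that this pattern assumes df itself carries lower_bound and upper_bound columns (the question's df has only id and timestamp, so it would need those columns merged in first). A minimal sketch under that assumption, with made-up data:

```python
import pandas as pd

# Hypothetical frame that already carries the bound columns.
df = pd.DataFrame({
    'id': ['001', '002'],
    'lower_bound': ['01/01/2020 10:00 AM', '01/02/2020 10:00 AM'],
    'upper_bound': ['01/01/2020 11:00 AM', '01/02/2020 11:00 AM'],
})
time_constraints = [('001', '01/01/2020 10:00 AM', '01/01/2020 11:00 AM')]

# Each row becomes a (id, lower, upper) tuple; isin keeps exact matches.
mask = pd.Series(
    list(zip(df['id'], df['lower_bound'], df['upper_bound']))
).isin(time_constraints)
output_df = df[mask]
```

Also note that isin tests exact tuple equality, so it selects whole matching windows rather than testing whether a timestamp falls between the bounds.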
