
Counting cumulative occurrences of values based on date window in Pandas

I have a DataFrame (df) that looks like the following:

+----------+----+
| dd_mm_yy | id |
+----------+----+
| 01-03-17 | A  |
| 01-03-17 | B  |
| 01-03-17 | C  |
| 01-05-17 | B  |
| 01-05-17 | D  |
| 01-07-17 | A  |
| 01-07-17 | D  |
| 01-08-17 | C  |
| 01-09-17 | B  |
| 01-09-17 | B  |
+----------+----+

This is the end result I would like to compute:

+----------+----+-----------+
| dd_mm_yy | id | cum_count |
+----------+----+-----------+
| 01-03-17 | A  |         1 |
| 01-03-17 | B  |         1 |
| 01-03-17 | C  |         1 |
| 01-05-17 | B  |         2 |
| 01-05-17 | D  |         1 |
| 01-07-17 | A  |         2 |
| 01-07-17 | D  |         2 |
| 01-08-17 | C  |         1 |
| 01-09-17 | B  |         2 |
| 01-09-17 | B  |         3 |
+----------+----+-----------+

Logic

To calculate the cumulative occurrences of values in id, but within a specified time window of, for example, 4 months; i.e. every 5th month the counter resets to one.

To get the cumulative occurrences we can use df.groupby('id').cumcount() + 1, but that counts over the entire history rather than within a window.
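For reference, here is a minimal sketch of what that gives on the sample data (the column name cum_count_all_time is just for illustration). The counter never resets, so the 01-08-17 C row gets 2 instead of the desired 1, and the last B row gets 4 instead of 3:

import pandas as pd

df = pd.DataFrame({
    'dd_mm_yy': ['01-03-17', '01-03-17', '01-03-17', '01-05-17', '01-05-17',
                 '01-07-17', '01-07-17', '01-08-17', '01-09-17', '01-09-17'],
    'id': ['A', 'B', 'C', 'B', 'D', 'A', 'D', 'C', 'B', 'B']})

# Unwindowed running count per id:
# A->1, B->1, C->1, B->2, D->1, A->2, D->2, C->2, B->3, B->4
df['cum_count_all_time'] = df.groupby('id').cumcount() + 1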

Focusing on id = B, we see that the 2nd occurrence of B is after 2 months, so cum_count = 2. The next occurrence of B is at 01-09-17; looking back 4 months we only find one other occurrence, so cum_count = 2, etc.

My approach is to call a helper function from df.groupby('id').transform. I feel this is more complicated and slower than it could be, but it seems to work.

# test data

    date    id  cum_count_desired
2017-03-01  A   1
2017-03-01  B   1
2017-03-01  C   1
2017-05-01  B   2
2017-05-01  D   1
2017-07-01  A   2
2017-07-01  D   2
2017-08-01  C   1
2017-09-01  B   2
2017-09-01  B   3

# preprocessing

df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
# Encode the ID strings to numbers to have a column
# to work with after grouping by ID
df['id_num'] = pd.factorize(df['id'])[0]

# solution

def cumcounter(x):
    # Count the rows in the 4-month window ending at each date d;
    # .loc label slicing on a DatetimeIndex includes both endpoints.
    y = [x.loc[d - pd.DateOffset(months=4):d].count() for d in x.index]
    # Correct for same-day duplicates (see "The need for adjust" below).
    gr = x.groupby('date')
    adjust = gr.rank(method='first') - gr.size()
    # Note: list += Series adds elementwise (via Series.__radd__) and
    # rebinds y to a Series; it does not extend the list.
    y += adjust
    return y

df['cum_count'] = df.groupby('id')['id_num'].transform(cumcounter)

# output

df[['id', 'id_num', 'cum_count_desired', 'cum_count']]

           id  id_num  cum_count_desired  cum_count
date                                               
2017-03-01  A       0                  1          1
2017-03-01  B       1                  1          1
2017-03-01  C       2                  1          1
2017-05-01  B       1                  2          2
2017-05-01  D       3                  1          1
2017-07-01  A       0                  2          2
2017-07-01  D       3                  2          2
2017-08-01  C       2                  1          1
2017-09-01  B       1                  2          2
2017-09-01  B       1                  3          3

The need for adjust

If the same ID occurs multiple times on the same day, the slicing approach overcounts: when the list comprehension reaches a date on which an ID appears more than once, the date-based slice immediately grabs all of that day's rows, so each same-day row receives the full same-day count. Fix:

  1. Group the current DataFrame by date.
  2. Rank each row in each date group.
  3. Subtract from these ranks the total number of rows in each date group. This produces a date-indexed Series of ascending negative integers, ending at 0.
  4. Add these non-positive integer adjustments to y.

This only affects one row in the given test data -- the second-last row, because B appears twice on the same day.
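A minimal standalone illustration of steps 1-4, using just the four B rows (the values are the factorized id codes; this mini-example is not part of the original code):

import pandas as pd

x = pd.Series([1, 1, 1, 1],
              index=pd.to_datetime(['2017-03-01', '2017-05-01',
                                    '2017-09-01', '2017-09-01']))
x.index.name = 'date'

y = [x.loc[d - pd.DateOffset(months=4):d].count() for d in x.index]
# y == [1, 2, 3, 3]: both 2017-09-01 rows see the full same-day count.

gr = x.groupby('date')
adjust = gr.rank(method='first') - gr.size()
# adjust == [0, 0, -1, 0], so y + adjust == [1, 2, 2, 3] as desired for B.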

Including or excluding the left endpoint of the time interval

To count rows as old as or newer than 4 calendar months ago, i.e. to include the left endpoint of the 4-month time interval, leave this line unchanged:

y = [x.loc[d - pd.DateOffset(months=4):d].count() for d in x.index]

To count rows strictly newer than 4 calendar months ago, i.e. to exclude the left endpoint of the 4-month time interval, use this instead:

y = [x.loc[d - pd.DateOffset(months=4, days=-1):d].count() for d in x.index]
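The nudge is needed because label-based .loc slicing on a DatetimeIndex is inclusive of both endpoints, as this minimal check (not part of the original answer) shows:

import pandas as pd

s = pd.Series([1, 2, 3],
              index=pd.to_datetime(['2017-05-01', '2017-07-01', '2017-09-01']))

print(s.loc['2017-05-01':'2017-09-01'].count())  # 3: both endpoints included
print(s.loc['2017-05-02':'2017-09-01'].count())  # 2: left endpoint excluded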

You can extend the groupby with a grouper:

df['cum_count'] = df.groupby(['id', pd.Grouper(freq='4M', key='date')]).cumcount()

Out[48]: 
        date id  cum_count
0 2017-03-01  A          0
1 2017-03-01  B          0
2 2017-03-01  C          0
3 2017-05-01  B          0
4 2017-05-01  D          0
5 2017-07-01  A          0
6 2017-07-01  D          1
7 2017-08-01  C          0
8 2017-09-01  B          0
9 2017-09-01  B          1
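Note that pd.Grouper(freq='4M') bins rows into fixed, month-end-anchored 4-month buckets rather than looking back 4 months from each row, and cumcount is 0-based, which is why these counts differ from the desired output. A minimal sketch (assuming date is a column here, as in the output above):

# Buckets end on month ends every 4 months (2017-03-31, 2017-07-31,
# 2017-11-30 for this data); they are not measured from each row's date.
print(df.groupby(pd.Grouper(freq='4M', key='date')).size())

# For 1-based counts within each fixed bucket, add 1:
df['cum_count'] = df.groupby(['id', pd.Grouper(freq='4M', key='date')]).cumcount() + 1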

We can also make use of row-wise .apply, working on a sliced df; the slice is built using relativedelta from dateutil.

import pandas as pd
from dateutil.relativedelta import relativedelta

def get_cum_sum(window, row):
    # Count how often this row's id appears in the sliced window.
    if window.shape[0] == 0:
        return 1
    return window[window['id'] == row.id].shape[0]

d = {'dd_mm_yy': ['01-03-17', '01-03-17', '01-03-17', '01-05-17', '01-05-17',
                  '01-07-17', '01-07-17', '01-08-17', '01-09-17', '01-09-17'],
     'id': ['A', 'B', 'C', 'B', 'D', 'A', 'D', 'C', 'B', 'B']}
df = pd.DataFrame(data=d)
df['dd_mm_yy'] = pd.to_datetime(df['dd_mm_yy'], format='%d-%m-%y')

# For each row, count matching ids among earlier rows within the 4-month lookback.
df['cum_sum'] = df.apply(
    lambda current_row: get_cum_sum(
        df[(df.index <= current_row.name) &
           (df.dd_mm_yy >= current_row.dd_mm_yy - relativedelta(months=+4))],
        current_row),
    axis=1)

>>> df
    dd_mm_yy id  cum_sum
0 2017-03-01  A        1
1 2017-03-01  B        1
2 2017-03-01  C        1
3 2017-05-01  B        2
4 2017-05-01  D        1
5 2017-07-01  A        2
6 2017-07-01  D        2
7 2017-08-01  C        1
8 2017-09-01  B        2
9 2017-09-01  B        3

I am wondering if it is feasible to use .rolling, but months are not a fixed period, so it might not work.
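That said, a day-based offset window does work with .rolling; here is a minimal sketch (my own approximation, not from the answers above) that stands in 123 days for 4 calendar months, reusing the df with the datetime dd_mm_yy column:

# Offset-based rolling windows need a monotonic DatetimeIndex.
tmp = df.set_index('dd_mm_yy').sort_index()
tmp['ones'] = 1

# 123 days is only a rough proxy for 4 calendar months, since month
# lengths vary; the result is indexed by (id, dd_mm_yy).
approx = tmp.groupby('id')['ones'].rolling('123D').sum()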
