
Pandas cumsum + cumcount on multiple columns

Aloha,

I have the following DataFrame:

import numpy as np
import pandas as pd

stores = [1, 2, 3, 4, 5]
weeks = [1, 1, 1, 1, 1]
df = pd.DataFrame({'Stores': stores,
                   'Weeks': weeks})

# repeat each store 53 times, then turn the repeats into week numbers 1..53
df = pd.concat([df] * 53)
df['Weeks'] = df['Weeks'].add(df.groupby('Stores').cumcount())

df['Target'] = np.random.randint(400, 600, size=len(df))
df['Actual'] = np.random.randint(350, 800, size=len(df))
df['Variance %'] = (df['Target'] - df['Actual']) / df['Target']
df.loc[df['Variance %'] >= 0.01, 'Status'] = 'underTarget'
df.loc[df['Variance %'] <= 0.01, 'Status'] = 'overTarget'
df['Status'] = df['Status'].fillna('atTarget')

df.sort_values(['Stores', 'Weeks'], inplace=True)

This gives me the following:

print(df.head(7))

   Stores  Weeks  Target  Actual  Variance %       Status
0       1      1     430     605   -0.406977   overTarget
0       1      2     549     701   -0.276867   overTarget
0       1      3     471     509   -0.080679   overTarget
0       1      4     549     378    0.311475  underTarget
0       1      5     569     708   -0.244288   overTarget
0       1      6     574     650   -0.132404   overTarget
0       1      7     466     623   -0.336910   overTarget

Now what I'm trying to do is a cumulative count, per store, of consecutive weeks where the status was either over or under target, resetting whenever the status changes.

I thought something like this would be the best way to do it (and I have tried many variants of it), but it does not reset the counter.

s = df.groupby(['Stores','Weeks','Status'])['Status'].shift().ne(df['Status'])
df['Count'] = s.groupby(df['Stores']).cumsum()

My logic was to group by the relevant columns and compare against a shift (!=) so that the cumsum resets.
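As a point of reference, here is what the plain shift-compare + cumsum pattern produces on a toy Series (toy values, unrelated to the data above): the cumsum turns the change flags into a run id that keeps growing, so on its own it is not a counter that resets.

import pandas as pd

status = pd.Series(['over', 'over', 'under', 'over', 'over'])

# True wherever the value differs from the previous row, i.e. at the start of a new run
new_run = status.ne(status.shift())
print(new_run.tolist())            # [True, False, True, True, False]

# cumsum over those flags produces a run id, not a counter that restarts
print(new_run.cumsum().tolist())   # [1, 1, 2, 3, 3]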

Naturally I've scoured lots of different questions, but I can't seem to figure this out. Would anyone be so kind as to explain the best method to tackle this problem?

I hope everything here is clear and reproducible. Please let me know if you need any additional information.

Expected Output

   Stores  Weeks  Target  Actual  Variance %       Status  Count
0       1      1     430     605   -0.406977   overTarget      1
0       1      2     549     701   -0.276867   overTarget      2
0       1      3     471     509   -0.080679   overTarget      3
0       1      4     549     378    0.311475  underTarget      1  # reset here as the status changes
0       1      5     569     708   -0.244288   overTarget      1  # reset again
0       1      6     574     650   -0.132404   overTarget      2
0       1      7     466     623   -0.336910   overTarget      3

Try pd.Series.groupby() after creating the key with cumsum:

# key: per-store run id that increments every time Status changes
s = df.groupby('Stores')['Status'].apply(lambda x: x.ne(x.shift()).ne(0).cumsum())
# count within each (store, run id) block, starting at 1
df['Count'] = df.groupby([df.Stores, s]).cumcount() + 1
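To see the two steps in action, here is a minimal sketch on a toy frame (toy values, not the data above; transform is used instead of apply purely so the key stays aligned with the original index on recent pandas versions):

import pandas as pd

toy = pd.DataFrame({
    'Stores': [1, 1, 1, 1, 1],
    'Status': ['overTarget', 'overTarget', 'overTarget', 'underTarget', 'overTarget'],
})

# per-store run id: increments whenever Status changes within a store
key = toy.groupby('Stores')['Status'].transform(lambda x: x.ne(x.shift()).cumsum())

# counting within each (store, run id) block restarts at every status change
toy['Count'] = toy.groupby([toy.Stores, key]).cumcount() + 1
print(toy['Count'].tolist())   # [1, 2, 3, 1, 1]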
