I have a pretty complex (to me) situation where I need to process a dataframe that has multiple rows for each index that can be one of three scenarios depending on the value of a certain column.
The dataframe looks like this:
Index  Account  Postfix  ID  val1  val2
AA11   AA       11       aa  1     2
AA11   AA       11       aa  1     2
AA11   AA       11       aa  1     2
BB22   BB       22       bb  1     1
BB22   BB       22       NA  2     2
BB22   BB       22       NA  3     3
CC33   CC       33       NA  1     2
CC33   CC       33       NA  1     2
CC33   CC       33       NA  1     2
Each unique index can fall into one of three scenarios, depending on the ID column:
Scenario A: every row for that index has a non-null ID.
Scenario S: some rows for that index have an ID and some don't.
Scenario N: no row for that index has an ID (all null).
My first problem is that I cannot figure out how to check the value of a column across multiple rows for the same index.
I was thinking something like:
indices = df.index.unique()
for index in indices:
    df['ScenarioA'] = np.all(df.loc[index, 'ID'])
    df['ScenarioN'] = np.all(np.logical_not(df.loc[index, 'ID']))
    df['ScenarioS'] = np.logical_and(np.logical_not(df['ScenarioA']), np.logical_not(df['ScenarioN']))
But this results in every row being tagged as ScenarioN, when the result should actually look like this:
Index  Account  Postfix  ID  val1  val2  ScenarioA  ScenarioS  ScenarioN
AA11   AA       11       aa  1     2     True       False      False
AA11   AA       11       aa  1     2     True       False      False
AA11   AA       11       aa  1     2     True       False      False
BB22   BB       22       bb  1     1     False      True       False
BB22   BB       22       NA  2     2     False      True       False
BB22   BB       22       NA  3     3     False      True       False
CC33   CC       33       NA  1     2     False      False      True
CC33   CC       33       NA  1     2     False      False      True
CC33   CC       33       NA  1     2     False      False      True
Once that's done, I need to perform the sums and end up with something like the table below. I don't think this part will be too difficult, since I can go scenario by scenario and perform the calculations as needed:
Index  Account  Postfix  ID  val1  val2
AA11   AA       11       aa  1     2
BB22   BB       22       bb  1     5
CC33   CC       33       NA  3     6
What am I doing wrong in the part where I try to assign T/F to the Scenario columns?
I'm not sure if this is exactly what you're after, but hopefully it can guide you toward solving your specific challenge:
import pandas as pd

# Group the ID column by Index
grouping = df.groupby('Index').ID

# Classify each group by its null pattern
all_null = grouping.apply(lambda x: x.isna().all())       # every ID is null
any_null = grouping.apply(lambda x: x.isna().any())       # at least one ID is null
all_not_null = grouping.apply(lambda x: x.notna().all())  # no ID is null

# Get the Index values belonging to each group
full = all_not_null.index[all_not_null.array]
empty = all_null.index[all_null.array]
partially_empty = any_null.index[any_null.array].difference(empty)

# No nulls: the rows are duplicates, so keep the first row per Index
step1 = df.loc[df.Index.isin(full)].groupby('Index').first()

# Some nulls: keep the non-null row, replacing its val2 with the sum
# of val2 over the null rows
cond1 = df.Index.isin(partially_empty) & df.ID.notna()
cond2 = df.Index.isin(partially_empty) & df.ID.isna()
step2 = df.loc[cond1]
step2 = step2.assign(val2=df.loc[cond2, 'val2'].sum())

# All nulls: sum val1 and val2 across the rows for each Index
step3 = df.loc[df.Index.isin(empty)]
temp = step3.groupby(['Index']).agg({'val1': 'sum', 'val2': 'sum'})
step3 = step3.drop_duplicates('Index')
step3 = step3.assign(val1=temp['val1'].squeeze(), val2=temp['val2'].squeeze())

# Combine the three pieces
pd.concat([step1.reset_index(), step2, step3], ignore_index=True)
  Index Account Postfix   ID  val1  val2
0  AA11      AA      11   aa     1     2
1  BB22      BB      22   bb     1     5
2  CC33      CC      33  NaN     3     6
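To answer the question of what went wrong with the Scenario columns directly: each pass of the loop assigns a single scalar to the entire column, so every iteration overwrites the previous one, and `np.all` over a column of strings and NaN doesn't test for nulls. A vectorized sketch using `groupby(...).transform`, assuming (as above) that `Index` is a regular column rather than the DataFrame index:

```python
import pandas as pd

df = pd.DataFrame({
    'Index':   ['AA11'] * 3 + ['BB22'] * 3 + ['CC33'] * 3,
    'Account': ['AA'] * 3 + ['BB'] * 3 + ['CC'] * 3,
    'Postfix': [11] * 3 + [22] * 3 + [33] * 3,
    'ID':      ['aa', 'aa', 'aa', 'bb', None, None, None, None, None],
    'val1':    [1, 1, 1, 1, 2, 3, 1, 1, 1],
    'val2':    [2, 2, 2, 1, 2, 3, 2, 2, 2],
})

# Boolean per row: does this row have a non-null ID?
has_id = df['ID'].notna()

# transform broadcasts the per-group result back to every row in the group
df['ScenarioA'] = has_id.groupby(df['Index']).transform('all')      # all rows have an ID
df['ScenarioN'] = (~has_id).groupby(df['Index']).transform('all')   # no row has an ID
df['ScenarioS'] = ~(df['ScenarioA'] | df['ScenarioN'])              # mixed
```

Because `transform` returns a result aligned to the original rows (unlike `apply`, which returns one value per group), the three flags can be assigned as columns in one step, with no loop.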