Comparing rows of string inside groupby and assigning a value to a new column pandas
I have a dataset of employees (their IDs) and the names of their bosses over several years.
df:
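(The sample frame itself was not captured on this page; a hypothetical reconstruction, with values taken from the output printed in the answer below, would look like:)

```python
import pandas as pd

# Hypothetical reconstruction of the sample frame -- the original table was
# not captured here; values are taken from the output printed in the answer.
df = pd.DataFrame({
    'ID':   [1234, 567, 1234, 8976, 1234, 8976],
    'Year': [2018, 2019, 2020, 2019, 2019, 2020],
    'Boss': ['Anna', 'Sarah', 'Michael', 'John', 'Michael', 'John'],
})
```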
What I need to do is check whether an employee's boss changed. So, the desired output is:
For employees who appear in the df only once, I just assign 0 (no boss change). However, I cannot figure out how to do it for employees who appear in the df for several years.
I was thinking that first I need to assign 0 for the first year an employee appears in the df (since we do not know who the boss was before, there is no boss change to detect). Then I need to compare the boss's name with the name in the next row and decide whether to put 1 or 0 in the ManagerChange column.
So far I have split the df into two (unique IDs and duplicated IDs) and assigned 0 to ManagerChange for the unique IDs.
Then I group the duplicated IDs by ID and sort them by year. However, I am new to Python and cannot figure out how to compare strings and assign the result to a new column inside the groupby. Please help.
Code I have so far:
# splitting database in two
bool_series = df["ID"].duplicated(keep=False)
df_duplicated = df[bool_series].copy()
df_unique = df[~bool_series].copy()   # .copy() avoids SettingWithCopyWarning on assignment below
# assigning 0 for ManagerChange for the unique IDs
df_unique['ManagerChange'] = 0
# groupby by ID and sorting by year for the duplicated IDs
df_duplicated = df_duplicated.groupby('ID').apply(lambda x: x.sort_values('Year'))
You can groupby then shift() the group and compare on the Boss column.
# Sort value first
df.sort_values(['ID', 'Year'], inplace=True)
# Compare Boss column with shifted Boss column
df['ManagerChange'] = df.groupby('ID').apply(lambda group: group['Boss'] != group['Boss'].shift(1)).tolist()
# Change True to 1, False to 0
df['ManagerChange'] = df['ManagerChange'].map({True: 1, False: 0})
# Sort df to original df
df = df.sort_index()
# Change the first in each group to 0
df.loc[df.groupby('ID').head(1).index, 'ManagerChange'] = 0
# print(df)
ID Year Boss ManagerChange
0 1234 2018 Anna 0
1 567 2019 Sarah 0
2 1234 2020 Michael 0
3 8976 2019 John 0
4 1234 2019 Michael 1
5 8976 2020 John 0
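As a side note (this is not from the original answer): the same result can be computed without apply, by shifting within each group with SeriesGroupBy.shift and masking the first year of each employee, whose previous boss is NaN. A minimal sketch, using the same hypothetical data as the printed output:

```python
import pandas as pd

# Hypothetical data matching the printed output above
df = pd.DataFrame({
    'ID':   [1234, 567, 1234, 8976, 1234, 8976],
    'Year': [2018, 2019, 2020, 2019, 2019, 2020],
    'Boss': ['Anna', 'Sarah', 'Michael', 'John', 'Michael', 'John'],
})

# Sort so shift() sees each employee's years in chronological order
s = df.sort_values(['ID', 'Year'])
prev_boss = s.groupby('ID')['Boss'].shift()   # previous year's boss, NaN for the first year
# A change only counts when there *was* a previous boss
change = s['Boss'].ne(prev_boss) & prev_boss.notna()
# The boolean Series keeps the original index, so assignment aligns automatically
df['ManagerChange'] = change.astype(int)
```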
You could also make use of the fill_value argument; this will help you get rid of the last df.loc[] operation.
# Sort value first
df.sort_values(['ID', 'Year'], inplace=True)
df['ManagerChange'] = df.groupby('ID').apply(lambda group: group['Boss'] != group['Boss'].shift(1, fill_value=group['Boss'].iloc[0])).tolist()
# Change True to 1, False to 0
df['ManagerChange'] = df['ManagerChange'].map({True: 1, False: 0})
# Sort df to original df
df = df.sort_index()
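For completeness, a runnable sketch of this fill_value variant (the sample data is a hypothetical reconstruction from the output printed above):

```python
import pandas as pd

# Sample data reconstructed from the output shown above (not part of the answer)
df = pd.DataFrame({
    'ID':   [1234, 567, 1234, 8976, 1234, 8976],
    'Year': [2018, 2019, 2020, 2019, 2019, 2020],
    'Boss': ['Anna', 'Sarah', 'Michael', 'John', 'Michael', 'John'],
})

# Sort so shift() compares consecutive years of the same employee
df.sort_values(['ID', 'Year'], inplace=True)
# fill_value seeds the first row of each group with its own boss, so the
# first year compares equal and yields False (no change) -- no df.loc[] needed
df['ManagerChange'] = (
    df.groupby('ID')
      .apply(lambda g: g['Boss'] != g['Boss'].shift(1, fill_value=g['Boss'].iloc[0]))
      .tolist()
)
# Change True to 1, False to 0
df['ManagerChange'] = df['ManagerChange'].map({True: 1, False: 0})
# Restore the original row order
df = df.sort_index()
```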