I have data that looks like the following:
Device Time Condition
D1 01/11/2019 00:00 issue
D1 01/11/2019 00:15 issue
D1 01/11/2019 00:30 issue
D1 01/11/2019 00:45 issue
D1 01/11/2019 01:00 issue
D1 01/11/2019 01:15 Resolved
D1 01/11/2019 01:30 Resolved
D2 01/11/2019 01:45 issue
D2 01/11/2019 02:00 Resolved
D1 01/11/2019 01:45 issue
D1 01/11/2019 02:00 Resolved
I need to create a new column with the time between the first issue and the first Resolved of each episode. I need a groupby that keeps the first issue and the first Resolved for every episode, and then computes the difference. When I group by Device and Condition alone, it keeps only one issue per device.
The desired output is like the following:
Device Time Condition durationTofix
D1 01/11/2019 00:00 issue
D1 01/11/2019 00:15 issue
D1 01/11/2019 00:30 issue
D1 01/11/2019 00:45 issue
D1 01/11/2019 01:00 issue
D1 01/11/2019 01:15 Resolved 01:15
D1 01/11/2019 01:30 Resolved
D2 01/11/2019 01:45 issue
D2 01/11/2019 02:00 Resolved 00:15
D1 01/11/2019 01:45 issue
D1 01/11/2019 02:00 Resolved 00:15
Since grouping by Device and Condition alone is not enough, I thought of creating an index column:
data["index"] = data.groupby(['Device','condition']).??? #Something like cumcount() but it is not cumcount in this case
Then use a pivot table for the time calculation:
H = data.pivot_table(index=['index', 'Device'], columns=['condition'], values='Timestamp', aggfunc=lambda x: x)
H['durationTofix'] = H['Resolved'] - H['issue']
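The missing piece in the attempt above is the index column: it can be built by numbering consecutive runs of the same device (a new run starts whenever Device changes from the previous row), which is not `cumcount` but a shifted comparison plus `cumsum`. A minimal sketch, reusing the sample data above and `aggfunc='first'` instead of the identity lambda (column and variable names here are assumptions, not the asker's exact schema):

```python
import pandas as pd

# Sample data from the question (dates parsed month-first by default,
# i.e. 2019-01-11; pass dayfirst=True if these are November dates)
df = pd.DataFrame({
    "Device": ["D1"] * 7 + ["D2"] * 2 + ["D1"] * 2,
    "Time": pd.to_datetime(
        ["2019-01-11 00:00", "2019-01-11 00:15", "2019-01-11 00:30",
         "2019-01-11 00:45", "2019-01-11 01:00", "2019-01-11 01:15",
         "2019-01-11 01:30", "2019-01-11 01:45", "2019-01-11 02:00",
         "2019-01-11 01:45", "2019-01-11 02:00"]),
    "Condition": ["issue"] * 5 + ["Resolved"] * 2
                 + ["issue", "Resolved", "issue", "Resolved"],
})

# Number consecutive runs of the same device: a new run starts
# whenever Device differs from the previous row
df["index"] = df["Device"].ne(df["Device"].shift()).cumsum()

# First issue / first Resolved time per run, side by side
H = df.pivot_table(index=["index", "Device"], columns="Condition",
                   values="Time", aggfunc="first")
H["durationTofix"] = H["Resolved"] - H["issue"]
print(H["durationTofix"])
```

This yields one duration per episode (01:15, 00:15, 00:15 for the sample) rather than one per device.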
Solution, if there is always at least one issue before Resolved per consecutive group by Device:
import pandas as pd

# convert to datetimes
df['Time'] = pd.to_datetime(df['Time'])
# consecutive groups per Device
g = df['Device'].ne(df['Device'].shift()).cumsum()
# test for issue values
m = df['Condition'].eq('issue')
# replace non-issue times with missing values
i = df['Time'].where(m)
# mark the first row per consecutive group and Condition
mask = ~df.assign(g=g, i=i).duplicated(['g', 'Condition'])
# forward fill the first issue Time within each group
s = df['Time'].where(mask & m).groupby(g).ffill()
# subtract and keep only the first Resolved per group
df['durationTofix'] = df['Time'].sub(s).where(mask & df['Condition'].eq('Resolved'))
print(df)
Device Time Condition durationTofix
0 D1 2019-01-11 00:00:00 issue NaT
1 D1 2019-01-11 00:15:00 issue NaT
2 D1 2019-01-11 00:30:00 issue NaT
3 D1 2019-01-11 00:45:00 issue NaT
4 D1 2019-01-11 01:00:00 issue NaT
5 D1 2019-01-11 01:15:00 Resolved 01:15:00
6 D1 2019-01-11 01:30:00 Resolved NaT
7 D2 2019-01-11 01:45:00 issue NaT
8 D2 2019-01-11 02:00:00 Resolved 00:15:00
9 D1 2019-01-11 01:45:00 issue NaT
10 D1 2019-01-11 02:00:00 Resolved 00:15:00
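The `g` and `mask` steps can be seen in isolation on a made-up five-row toy (the names `g` and `mask` match the answer; the data here is hypothetical):

```python
import pandas as pd

# Toy data: one D1 episode with repeated issue/Resolved rows, then D2
device = pd.Series(["D1", "D1", "D1", "D1", "D2"])
cond = pd.Series(["issue", "issue", "Resolved", "Resolved", "issue"])

# g: new consecutive group each time Device changes from the previous row
g = device.ne(device.shift()).cumsum()
print(g.tolist())      # [1, 1, 1, 1, 2]

# mask: True only on the first row of each (group, Condition) pair,
# i.e. the first issue and the first Resolved of every episode
mask = ~pd.DataFrame({"g": g, "Condition": cond}).duplicated(["g", "Condition"])
print(mask.tolist())   # [True, False, True, False, True]
```

With `mask` in hand, the remaining lines only have to carry the first issue time forward and subtract it at the first Resolved row.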
The biggest problem is how to group your issue/Resolved episodes properly, which can be done with a reversed cumsum:
import pandas as pd

df["Time"] = pd.to_datetime(df["Time"])
# a group ends at a Resolved row that is immediately followed by a new issue
df["group"] = (df["Condition"].eq("Resolved") & df["Condition"].shift(-1).eq("issue"))[::-1].cumsum()[::-1]
# diff between the first issue and first Resolved of each group
df["diff"] = (df[~df.duplicated(["Condition", "group"])]
              .groupby("group")["Time"]
              .transform(lambda d: d.diff()))
print(df)
Device Time Condition group diff
0 D1 2019-01-11 00:00:00 issue 2 NaT
1 D1 2019-01-11 00:15:00 issue 2 NaT
2 D1 2019-01-11 00:30:00 issue 2 NaT
3 D1 2019-01-11 00:45:00 issue 2 NaT
4 D1 2019-01-11 01:00:00 issue 2 NaT
5 D1 2019-01-11 01:15:00 Resolved 2 01:15:00
6 D1 2019-01-11 01:30:00 Resolved 2 NaT
7 D2 2019-01-11 01:45:00 issue 1 NaT
8 D2 2019-01-11 02:00:00 Resolved 1 00:15:00
9 D1 2019-01-11 01:45:00 issue 0 NaT
10 D1 2019-01-11 02:00:00 Resolved 0 00:15:00
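The reversed-cumsum trick is easier to follow on a toy Condition series (a hypothetical five-row example, independent of the data above):

```python
import pandas as pd

cond = pd.Series(["issue", "issue", "Resolved", "issue", "Resolved"])

# boundary: True at a Resolved row whose next row starts a new issue run,
# i.e. the last row of an episode
boundary = cond.eq("Resolved") & cond.shift(-1).eq("issue")
print(boundary.tolist())   # [False, False, True, False, False]

# Reversing, cumsumming, then reversing back numbers episodes from the
# bottom up, so every row up to and including a boundary shares one id
group = boundary[::-1].cumsum()[::-1]
print(group.tolist())      # [1, 1, 1, 0, 0]
```

This is why the last episode in the output above has group 0 and earlier episodes have higher ids: each boundary seen from the bottom increments the counter.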