
Reshape a dataset with Start and End dates to create a time series counting open projects by day/month/quarter

I have a dataset exactly like this:

ProjectID   Start       End         Type
Project 1   01/01/2019  27/04/2019  HR
Project 2   15/01/2019  11/11/2019  Marketing
Project 3   25/02/2019  30/07/2019  Finance
Project 4   22/02/2019  15/04/2019  HR
Project 5   05/03/2019  29/09/2019  HR
Project 6   11/04/2019  01/12/2019  Marketing
Project 7   29/07/2019  23/08/2019  Finance
Project 8   25/08/2019  23/12/2019  Operations
Project 9   31/10/2019  29/11/2019  Operations
Project 10  10/12/2019  25/12/2019  Operations

I want to know, over time, how many projects are outstanding, by creating a daily/monthly/quarterly time series. I first want a count of all open projects overall, and then also a count of open projects by project type. From doing this manually in Excel, I believe I have to resample the data somehow, but I'm not sure how or on what dimensions. When I do this in Excel, the output should end up looking like this:

(screenshots of the expected Excel output omitted: a count of open projects over time overall, and the same broken down by project type)

How do I reshape the data with pandas to allow for this analysis?

One way to do this is to take a date range (for example, one year) and then join all projects to all days. I'm using hvplot to create a nice interactive plot of the end result.

Here's a working example with your sample data:

from io import StringIO
import pandas as pd
import hvplot.pandas

text = """
ProjectID   Start   End Type
Project1   01/01/2019  27/04/2019  HR
Project2   15/01/2019  11/11/2019  Marketing
Project3   25/02/2019  30/07/2019  Finance
Project4   22/02/2019  15/04/2019  HR
Project5   05/03/2019  29/09/2019  HR
Project6   11/04/2019  01/12/2019  Marketing
Project7   29/07/2019  23/08/2019  Finance
Project8   25/08/2019  23/12/2019  Operations
Project9   31/10/2019  29/11/2019  Operations
Project10  10/12/2019  25/12/2019  Operations
"""

df = pd.read_csv(StringIO(text), header=0, sep=r'\s+')
# the dates are in dd/mm/yyyy format, so parse them with dayfirst=True
df['Start'] = pd.to_datetime(df['Start'], dayfirst=True)
df['End'] = pd.to_datetime(df['End'], dayfirst=True)

# create a dummy key with which we can join all projects with all dates
df['key'] = 'key'

# create a daterange so that we can count all open projects for all days
df2 = pd.DataFrame(pd.date_range(start='2019-01-01', periods=365, freq='d'), columns=['date'])
# create a dummy key with which we can join all projects with all dates
df2['key'] = 'key'

# join all dates with all projects on dummy key = cartesian product
df3 = pd.merge(df, df2, on=['key'])
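# note: with pandas >= 1.2 the dummy key is not needed; an equivalent
# cross join would be df3 = pd.merge(df, df2, how='cross')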

# check if date is within project dates
df3['count_projects'] = df3['date'].ge(df3['Start']) & df3['date'].le(df3['End'])
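# booleans sum as 1/0, so summing this column per group below gives the
# number of projects open on each date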

# group per day: count all open projects
group_overall = df3.groupby(
    'date', as_index=False)['count_projects'].sum()

# group per day per department: count all projects 
group_per_department = df3.groupby(
    ['date', 'Type'], as_index=False)['count_projects'].sum()

# plot overall result
plot_overall = group_overall.hvplot.line(
    x='date', y='count_projects',
    title='Open projects Overall',
    width=1000,
)

# plot per department
plot_per_department = group_per_department.hvplot.line(
    x='date', y='count_projects', 
    by='Type',
    title='Open projects per Department',
    width=1000,
)

# show both plots using hvplot
(plot_overall + plot_per_department).cols(1)

Resulting plot:

(count of all open projects over time, plotted with hvplot)
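You also asked about a monthly/quarterly series. A minimal sketch, assuming the group_overall and group_per_department frames from above: set the date as index and resample the daily counts, for example taking the average (or max, for the peak) number of open projects per period.

# resample the daily totals to monthly / quarterly averages of open projects
monthly_overall = group_overall.set_index('date')['count_projects'].resample('M').mean()
quarterly_overall = group_overall.set_index('date')['count_projects'].resample('Q').mean()

# per department: pivot to one column per Type, then resample the same way
monthly_per_department = (
    group_per_department
    .pivot(index='date', columns='Type', values='count_projects')
    .resample('M').mean()
)

print(monthly_overall.head())
print(monthly_per_department.head())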
