Sum and groupby if date is between two dates in two other columns and create new groupby data frame - pandas
I have the following data frame:

   Post ID Published Date                          Title  \
0   824821     2022-05-10  Tom Brady's net worth in 2022
1   824821     2022-05-10  Tom Brady's net worth in 2022
2   824821     2022-05-10  Tom Brady's net worth in 2022

                                           Permalink  \
0  https://clutchpoints.com/tom-bradys-net-worth-...
1  https://clutchpoints.com/tom-bradys-net-worth-...
2  https://clutchpoints.com/tom-bradys-net-worth-...

                                Categories  Author Name         T+3        T+30  \
0  Editorials|Evergreen|NFL|NFL Editorials  Greg Patuto  2022-05-13  2022-06-09
1  Editorials|Evergreen|NFL|NFL Editorials  Greg Patuto  2022-05-13  2022-06-09
2  Editorials|Evergreen|NFL|NFL Editorials  Greg Patuto  2022-05-13  2022-06-09

     country  pageviews        date
0  Australia         24  2022-05-26
1      India         24  2022-05-24
2      India         12  2022-05-26

What I need is to sum up the pageviews values for each "Title" and create two new columns, PT+3 and PT+30 (the page views accumulated up to the T+3 and T+30 dates, respectively).

So my final table would look like this: Post ID - Published Date - Title - Permalink - Categories - Author Name - Total Page Views (the sum of pageviews without any filter) - country - PT+3 - PT+30

Thanks...
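For anyone who wants to experiment with the answers below, the visible rows can be rebuilt as a small DataFrame (a reconstruction for testing only; the permalink is kept truncated exactly as displayed above):

```python
import pandas as pd

# Rebuild the three sample rows shown in the question
df = pd.DataFrame({
    "Post ID": [824821] * 3,
    "Published Date": ["2022-05-10"] * 3,
    "Title": ["Tom Brady's net worth in 2022"] * 3,
    "Permalink": ["https://clutchpoints.com/tom-bradys-net-worth-..."] * 3,
    "Categories": ["Editorials|Evergreen|NFL|NFL Editorials"] * 3,
    "Author Name": ["Greg Patuto"] * 3,
    "T+3": ["2022-05-13"] * 3,
    "T+30": ["2022-06-09"] * 3,
    "country": ["Australia", "India", "India"],
    "pageviews": [24, 24, 12],
    "date": ["2022-05-26", "2022-05-24", "2022-05-26"],
})
```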
OK, so I doubt this is the best way to do it, but this is how I have solved similar problems.

Note: you have to convert the date columns to a datetime type for the comparisons to work. This may also fix the error other commenters ran into:
import datetime as dt
import pandas as pd

df['Published Date'] = pd.to_datetime(df['Published Date']).apply(lambda x: x.date())
df['date'] = pd.to_datetime(df['date']).apply(lambda x: x.date())
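As a quick illustration of why the conversion matters (assuming ISO-formatted date strings as in the sample): the resulting date objects support timedelta arithmetic, which the original strings do not.

```python
import datetime as dt
import pandas as pd

s = pd.Series(["2022-05-10", "2022-05-26"])
# Same conversion as above: parse to datetime, then keep only the date part
dates = pd.to_datetime(s).apply(lambda x: x.date())

# date objects can be shifted by a timedelta; plain strings would raise a TypeError
cutoff = dates.iloc[0] + dt.timedelta(days=3)
print(cutoff)                    # 2022-05-13
print(dates.iloc[1] <= cutoff)   # False
```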
First, I created a dictionary in the shape of the output dataframe:
aggregate_df = {
    'Post Id': [], 'Published Date': [], 'Title': [], 'Permalink': [],
    'Categories': [], 'Author Name': [], 'Total Page Views': [],
    'PT+3': [], 'PT+30': [],
}
Then I looped over each unique title in the Title column and filtered the dataframe down to that title. I appended each value to the output dictionary (most of these use .max(), but you could just as well take the first row's value - which one you pick doesn't matter because they are all identical, except for Total Page Views, where you want the sum).

You can then filter the temp df further so it only contains dates inside the range you want to count, and append those sums to the output dictionary.
for title in df['Title'].unique():
    _df = df.loc[df['Title'] == title]
    aggregate_df['Post Id'].append(_df['Post ID'].max())
    aggregate_df['Published Date'].append(_df['Published Date'].max())
    aggregate_df['Title'].append(_df['Title'].max())
    aggregate_df['Permalink'].append(_df['Permalink'].max())
    aggregate_df['Categories'].append(_df['Categories'].max())
    aggregate_df['Author Name'].append(_df['Author Name'].max())
    aggregate_df['Total Page Views'].append(_df['pageviews'].sum())

    # Remember the publish date before _df gets reassigned below
    published = _df['Published Date'].max()

    # Views from the publish date through day 3
    start_period = published
    end_period = published + dt.timedelta(days=3)
    _df = df.loc[(df['Title'] == title) & (df['date'] >= start_period) & (df['date'] <= end_period)]
    aggregate_df['PT+3'].append(_df['pageviews'].sum())

    # Views from day 3 through day 30
    start_period = published + dt.timedelta(days=3)
    end_period = published + dt.timedelta(days=30)
    _df = df.loc[(df['Title'] == title) & (df['date'] >= start_period) & (df['date'] <= end_period)]
    aggregate_df['PT+30'].append(_df['pageviews'].sum())

aggregate_df = pd.DataFrame(aggregate_df)
IIUC, try groupby and sum:
df["Views3"] = df["date"].le(df["T+3"]).mul(df["pageviews"]).groupby(df["Title"]).transform("sum")
df["Views30"] = df["date"].le(df["T+30"]).mul(df["pageviews"]).groupby(df["Title"]).transform("sum")
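On the sample rows above, this sketch shows the approach end to end (a self-contained example; only the columns the two lines actually touch are included, parsed as datetimes so the comparisons work):

```python
import pandas as pd

df = pd.DataFrame({
    "Title": ["Tom Brady's net worth in 2022"] * 3,
    "T+3": pd.to_datetime(["2022-05-13"] * 3),
    "T+30": pd.to_datetime(["2022-06-09"] * 3),
    "pageviews": [24, 24, 12],
    "date": pd.to_datetime(["2022-05-26", "2022-05-24", "2022-05-26"]),
})

# Boolean mask (date within window) times pageviews, summed per Title
df["Views3"] = df["date"].le(df["T+3"]).mul(df["pageviews"]).groupby(df["Title"]).transform("sum")
df["Views30"] = df["date"].le(df["T+30"]).mul(df["pageviews"]).groupby(df["Title"]).transform("sum")

# All three sample dates fall after T+3 but before T+30,
# so Views3 is 0 and Views30 is 24 + 24 + 12 = 60 on every row
print(df[["Views3", "Views30"]])
```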