Splitting data into batches conditional on cumulative sum
I'm trying to batch some data into ranges of start_date
and end_date
, conditional on the cumulative sum of the batch being <= 500000. Say I have a simple data frame with two columns:
index Date num_books
0 2021-01-01 200000
1 2021-01-02 240000
2 2021-01-03 55000
3 2021-01-04 400000
4 2021-01-05 80000
5 2021-01-06 100000
I need to take a cumulative sum of the values in num_books
, keeping each batch's total <= 500000, and record the start date, end date and cumsum value of each batch. This is an example of what I'm trying to achieve:
start_date end_date cumsum_books
2021-01-01 2021-01-03 495000
2021-01-04 2021-01-05 480000
2021-01-06 2021-01-06 100000
Is there an efficient way/function to achieve this? Thank you!
Here's one way:
import pandas as pd
from io import StringIO as sio

# sample data as a whitespace-separated table
d = sio("""
index Date num_books
0 2021-01-01 200000
1 2021-01-02 240000
2 2021-01-03 55000
3 2021-01-04 400000
4 2021-01-05 80000
5 2021-01-06 100000
""")
df = pd.read_csv(d, sep=r'\s+')

# label each row by which 500_000 "bucket" its running total falls into
batch_num = 5 * 10**5
df['batch_num'] = df['num_books'].cumsum() // batch_num

# one row per batch: first date, last date and batch total
result = df.groupby('batch_num').agg(
    start_date=('Date', 'min'),
    end_date=('Date', 'max'),
    cumsum_books=('num_books', 'sum'),
)
print(result)
# start_date end_date cumsum_books
#batch_num
#0 2021-01-01 2021-01-03 495000
#1 2021-01-04 2021-01-05 480000
#2 2021-01-06 2021-01-06 100000
Note that the result
dataframe may also contain entries with more than 500_000
(for example when a single row exceeds the limit), but it's trivial to drop/filter them out.
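A minimal sketch of that filtering step (the within_limit name is my own, and the example frame is rebuilt inline so the snippet runs on its own):

```python
import pandas as pd

# rebuild the example data and the grouped result from above
df = pd.DataFrame({
    'Date': ['2021-01-01', '2021-01-02', '2021-01-03',
             '2021-01-04', '2021-01-05', '2021-01-06'],
    'num_books': [200000, 240000, 55000, 400000, 80000, 100000],
})
batch_num = 5 * 10**5
df['batch_num'] = df['num_books'].cumsum() // batch_num
result = df.groupby('batch_num').agg(
    start_date=('Date', 'min'),
    end_date=('Date', 'max'),
    cumsum_books=('num_books', 'sum'),
)

# keep only the batches whose total stays within the limit
within_limit = result[result['cumsum_books'] <= batch_num]
```

On this sample every batch already totals <= 500000, so the mask keeps all three rows; on data where a single row exceeds the limit, its batch would be dropped here.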