Is there a way to write a custom cumulative aggregate function with groupby clause for pandas dataframe?

Here's my dataframe

+--------+-------------+----------+---------------+------------+-------------+-----------+
|        | Customer ID | Quantity | Invoice Value |       Date | InvoiceDate | UnitPrice |
+--------+-------------+----------+---------------+------------+-------------+-----------+
|    0   |   500249347 |      0.0 |         0.000 | 2018-01-02 |  2018-01-02 |     0.000 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
|    1   |   500006647 |      1.0 |        33.715 | 2018-01-02 |  2018-01-02 |    33.715 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
|    2   |   500407469 |      1.0 |        33.715 | 2018-01-02 |  2018-01-02 |    33.715 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
|    3   |   500642846 |      0.0 |         0.000 | 2018-01-02 |  2018-01-02 |     0.000 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
|    4   |   500005450 |      1.0 |        33.715 | 2018-01-02 |  2018-01-02 |    33.715 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
|   ...  |         ... |      ... |           ... |        ... |         ... |       ... |
+--------+-------------+----------+---------------+------------+-------------+-----------+
| 429545 |   500717072 |      1.0 |        45.620 | 2019-03-31 |  2019-03-31 |    45.620 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
| 429546 |   500105174 |      0.0 |         0.000 | 2019-03-31 |  2019-03-31 |     0.000 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
| 429547 |   500069720 |      0.0 |         0.000 | 2019-03-31 |  2019-03-31 |     0.000 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
| 429548 |   500105528 |      0.0 |         0.000 | 2019-03-31 |  2019-03-31 |     0.000 |
+--------+-------------+----------+---------------+------------+-------------+-----------+
| 429549 |   500732322 |      0.0 |         0.000 | 2019-03-31 |  2019-03-31 |     0.000 |
+--------+-------------+----------+---------------+------------+-------------+-----------+

I want to extract features (new columns) such as days since last visit for each customer (with respect to the snapshot date of each row), last billed amount, last non-zero billed amount, quantity, and days since last purchase. Can this be done with some custom cumulative aggregate function, or is there a simpler way of doing it?

I would suggest something like this:

import pandas as pd

df = pd.DataFrame({'customer_id': [13, 16, 13, 13, 16, 16, 13],
                   'Date': ['2018-01-02', '2019-03-31', '2019-03-31', '2018-01-02', '2018-01-02', '2019-04-30',
                            '2018-01-02'],
                   'Invoice_value': [920, 920, 920, 920, 921, 921, 921],
                   'Unit_price': [1, 2, 3, 4, 6, 7, 8]})
df['Date'] = pd.to_datetime(df['Date'])  # sort on real dates, not strings

# For each customer, sort that customer's rows by date and keep the most recent one
append_data = [df[df['customer_id'] == ac].sort_values(by=['Date']).iloc[-1]
               for ac in df['customer_id'].unique()]
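The same "latest row per customer" result can be had with a single sort plus `groupby(...).tail(1)`, which avoids one sort per customer and returns a DataFrame instead of a list of Series. A sketch on similar toy data:

```python
import pandas as pd

# Toy data similar to the example above
df = pd.DataFrame({'customer_id': [13, 16, 13, 13, 16, 16, 13],
                   'Date': pd.to_datetime(['2018-01-02', '2019-03-31', '2019-03-31',
                                           '2018-01-02', '2018-01-02', '2019-04-30',
                                           '2018-01-02']),
                   'Invoice_value': [920, 920, 920, 920, 921, 921, 921],
                   'Unit_price': [1, 2, 3, 4, 6, 7, 8]})

# Sort the whole frame by date once, then keep the last (most recent) row of each group
latest = df.sort_values('Date').groupby('customer_id').tail(1)
```

On a frame with many customers this scales much better than filtering the frame once per unique ID.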

For time since last visit, something like this works, provided `Date` is a datetime column and the rows are sorted by date within each customer:

df['Date'] = pd.to_datetime(df['Date'])
df = df.sort_values(['Customer ID', 'Date'])
df['last_visited'] = df.groupby('Customer ID')['Date'].diff()
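The other requested features (last billed amount, last non-zero billed amount) follow the same grouped shift/forward-fill pattern. A sketch, assuming the question's column names (`Customer ID`, `Invoice Value`) and small made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Customer ID': [1, 1, 1, 2, 2],
                   'Date': pd.to_datetime(['2018-01-02', '2018-02-01', '2018-03-05',
                                           '2018-01-02', '2018-04-01']),
                   'Invoice Value': [33.715, 0.0, 45.62, 0.0, 33.715]})
df = df.sort_values(['Customer ID', 'Date'])

g = df.groupby('Customer ID')

# Days since the previous visit (NaN on each customer's first row)
df['days_since_last_visit'] = g['Date'].diff().dt.days

# Amount billed on the previous visit
df['last_billed'] = g['Invoice Value'].shift(1)

# Last non-zero billed amount: mask zeros as NaN, shift by one row so the
# current row is excluded, then forward-fill within each customer group
df['last_nonzero_billed'] = (df['Invoice Value']
                             .where(df['Invoice Value'] > 0)
                             .groupby(df['Customer ID'])
                             .transform(lambda s: s.shift(1).ffill()))
```

Each column here is an "as of this row" feature: it only looks at that customer's earlier rows, which is what you want for snapshot-style features.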
