
Sumproduct equivalent python code takes too long to run

I am currently trying to create an average revenue column for each fruit column in my dataset.

The dataset looks like this:

                   Time  England Apples  ...  England Watermelons  England Price
0        1/01/2011 0:30     6135.998518  ...             0.000000          25.00
1        1/01/2011 1:00     5711.638352  ...             0.000000          24.43
2        1/01/2011 1:30     5455.901902  ...             0.000000          25.02
3        1/01/2011 2:00     5130.634418  ...             0.000000          22.82
4        1/01/2011 2:30     4854.064390  ...             0.000000          21.19
5        1/01/2011 3:00     4654.938155  ...             0.000000          22.28
6        1/01/2011 3:30     4413.649635  ...             0.000000          19.64
7        1/01/2011 4:00     4153.377478  ...             0.000000          19.83
8        1/01/2011 4:30     4099.620177  ...             0.000000          19.80
9        1/01/2011 5:00     4041.403822  ...             0.000000          18.85
10       1/01/2011 5:30     4097.059952  ...             0.000000          19.49
11       1/01/2011 6:00     4074.397538  ...             0.000000          18.68
12       1/01/2011 6:30     4141.839692  ...             0.000000          20.03
13       1/01/2011 7:00     4463.231217  ...             0.000000          21.92
14       1/01/2011 7:30     4727.591175  ...             0.000000          21.48
15       1/01/2011 8:00     4842.730830  ...             0.000000          20.88
16       1/01/2011 8:30     5206.647033  ...             0.000000          24.87
17       1/01/2011 9:00     5533.648183  ...             0.000000          25.24
18       1/01/2011 9:30     5921.572143  ...             0.000000          25.31
19      1/01/2011 10:00     6279.324155  ...             0.000000          25.32
20      1/01/2011 10:30     6709.511942  ...             0.000000          25.31
21      1/01/2011 11:00     6978.742550  ...             0.000000          25.54
22      1/01/2011 11:30     7110.139363  ...             0.000000          27.86
23      1/01/2011 12:00     7063.761970  ...             0.000000          24.49
24      1/01/2011 12:30     6992.549385  ...             0.000000          25.31
25      1/01/2011 13:00     6961.793427  ...             0.000000          25.26
26      1/01/2011 13:30     7055.875967  ...             0.000000          25.31
27      1/01/2011 14:00     7142.211047  ...             0.000000          25.31
28      1/01/2011 14:30     7228.536090  ...             0.000000          26.35
29      1/01/2011 15:00     7299.410813  ...             0.000000          27.52
...                 ...             ...  ...                  ...            ...
142002   6/02/2019 9:30     7676.377063  ...           330.175727         111.45
142003  6/02/2019 10:00     7670.922868  ...           331.714652         114.43
142004  6/02/2019 10:30     7658.970773  ...           315.955275         115.47
142005  6/02/2019 11:00     7654.404070  ...           331.450534         118.27
142006  6/02/2019 11:30     7634.777022  ...           329.376822         130.77
142007  6/02/2019 12:00     7663.339550  ...           308.338850         127.27
142008  6/02/2019 12:30     7668.300007  ...           308.836712         128.69
142009  6/02/2019 13:00     7633.525948  ...           313.522324         156.85
142010  6/02/2019 13:30     7614.107300  ...           317.741907         165.16
142011  6/02/2019 14:00     7647.885410  ...           318.575012         139.67
142012  6/02/2019 14:30     7758.311397  ...           300.859020         129.19
142013  6/02/2019 15:00     7792.523983  ...           288.397673         265.37
142014  6/02/2019 15:30     7849.658337  ...           268.816729         262.73
142015  6/02/2019 16:00     7962.783263  ...           260.514448         257.19
142016  6/02/2019 16:30     8008.872848  ...           217.321907         164.39
142017  6/02/2019 17:00     8001.217682  ...           196.016162         129.90
142018  6/02/2019 17:30     8002.191668  ...           155.652355         106.81
142019  6/02/2019 18:00     8051.317657  ...            79.418596         112.66
142020  6/02/2019 18:30     8079.327247  ...            36.547664         103.34
142021  6/02/2019 19:00     8056.183235  ...             9.403131         110.64
142022  6/02/2019 19:30     8060.892678  ...             0.306932         115.63
142023  6/02/2019 20:00     8083.306235  ...             0.000000         109.97
142024  6/02/2019 20:30     7928.332383  ...             0.000000         108.33
142025  6/02/2019 21:00     7736.462477  ...             0.000000          92.86
142026  6/02/2019 21:30     7439.131347  ...             0.000000          88.37
142027  6/02/2019 22:00     7080.748895  ...             0.000000          82.93
142028  6/02/2019 22:30     6991.127062  ...             0.000000          90.36
142029  6/02/2019 23:00     6922.695807  ...             0.000000          77.94
142030  6/02/2019 23:30     6850.425935  ...             0.000000          83.39
142031   7/02/2019 0:00     6666.447972  ...             0.000000          82.67

[142032 rows x 7 columns]

I am trying to add a new column for each fruit which will be the average revenue over a 200-row window (equivalent in Excel to doing SUMPRODUCT(Apples, Price)/SUM(Apples)).
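For a single window, the Excel formula maps directly onto NumPy: SUMPRODUCT divided by SUM is just a weighted average, which is what `np.average(..., weights=...)` computes. A minimal sketch with made-up numbers:

```python
import numpy as np

# Toy stand-ins for one window of prices and apple quantities (hypothetical values).
prices = np.array([25.00, 24.43, 25.02])
apples = np.array([6136.0, 5711.6, 5455.9])

# Excel: SUMPRODUCT(prices, apples) / SUM(apples)
sumproduct = (prices * apples).sum() / apples.sum()

# NumPy's weighted average computes the same quantity.
weighted = np.average(prices, weights=apples)

print(sumproduct, weighted)  # the two values are identical
```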

The code I have to do this in Python works fine for a small dataset; however, with my large dataset it takes far too long to run (over 20 minutes).

My code is as follows:

import pandas as pd
import numpy as np

df = pd.read_csv("england_raw.csv")

size = 200
max_size = df.shape[0]

for a in [' Apples', ' Oranges', ' Pears', ' Apricots', ' Watermelons']:
    e = 'England' + a + '_W'
    df[e] = np.full(max_size, np.nan)  # np.full(np.nan) instead of np.empty, which leaves garbage values
    for i in range(max_size - size):
        # .loc avoids chained assignment (df[e][i] = ...), which can fail silently
        df.loc[i, e] = np.average(df['England Price'][i:i+size],
                                  weights=df['England' + a][i:i+size])

df.to_csv("england_done.csv",index=False)

Is there any way to modify my code to speed it up, or a different approach that achieves the same result?

Thanks.

Desired result (Excel equivalent):


EDIT:


Is this what you're looking for? It gives you the rolling weighted average over a 200-point window for each of the fruit columns.

# Intermediate columns for calculations
df['revenue'] = 0
df['roll_rev_sum'] = 0
df['roll_qty_sum'] = 0

# Please adjust the column slice so it covers only the fruit columns.
# This is quite a brute-force solution.
for col in df.columns[1:-1]:
    e = col + '_W'  # col already starts with 'England', so don't prepend it again
    df['revenue'] = df[col] * df['England Price']
    df['roll_rev_sum'] = df['revenue'].rolling(200).sum()
    df['roll_qty_sum'] = df[col].rolling(200).sum()
    df[e] = df['roll_rev_sum'] / df['roll_qty_sum']
Edit: updated to include the intermediate columns described in the comments, and the further details specified by the OP.
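For reference, the rolling-sum approach can be checked against the slow loop on synthetic data. The sizes and values below are made up for the demo; a `.shift()` is added because `rolling()` looks backward (rows `i-199..i`) while the original loop slices forward (`df[i:i+size]`):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n, size = 1000, 200

# Synthetic stand-in for the real dataset (hypothetical values).
df = pd.DataFrame({
    'England Apples': rng.uniform(1000, 8000, n),
    'England Price': rng.uniform(15, 150, n),
})

# Vectorized: rolling SUMPRODUCT divided by rolling SUM.
rev = df['England Apples'] * df['England Price']
rolled = rev.rolling(size).sum() / df['England Apples'].rolling(size).sum()

# rolling() ends its window at row i; shift so that row i instead
# covers rows i..i+size-1, matching the original forward slice.
df['England Apples_W'] = rolled.shift(-(size - 1))

# Spot-check one row against the slow per-row computation.
i = 10
loop_val = np.average(df['England Price'][i:i+size],
                      weights=df['England Apples'][i:i+size])
print(abs(df['England Apples_W'].iloc[i] - loop_val))  # ~0
```

The vectorized version does two `rolling().sum()` passes per column instead of 142 000 `np.average` calls, which is where the speedup comes from.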
