I have a function in Python that I would like to make a lot faster. Does anyone have any tips?
import numpy as np

def garchModel(e2, omega=0.01, beta=0.1, gamma=0.8):
    sigma = np.empty(len(e2))
    sigma[0] = omega
    for i in np.arange(1, len(e2)):
        sigma[i] = omega + beta * sigma[i-1] + gamma * e2[i-1]
    return sigma
The following code works, but there's too much trickery going on; I am not sure it doesn't depend on some undocumented implementation detail that could eventually break:
import numpy as np
from numpy.lib.stride_tricks import as_strided
# note: numpy.core.umath_tests is deprecated; this import may warn or fail on newer NumPy
from numpy.core.umath_tests import inner1d

def garch_model(e2, omega=0.01, beta=0.1, gamma=0.8):
    n = len(e2)
    sigma = np.empty((n,))
    sigma[:] = omega
    sigma[1:] += gamma * e2[:-1]
    # Overlapping view: row i is (sigma[i], sigma[i+1]), sharing memory with sigma
    sigma_view = as_strided(sigma, shape=(n-1, 2), strides=sigma.strides*2)
    # Relies on inner1d processing rows in order while writing into sigma[1:]
    inner1d(sigma_view, [beta, 1], out=sigma[1:])
    return sigma
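To make the trickery concrete, here is a small illustration (not from the original post) of what the overlapping as_strided view contains; the in-place inner1d call then depends on those rows being processed in order, front to back:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

sigma = np.array([10., 20., 30., 40.])
# Row i of the view is the overlapping pair (sigma[i], sigma[i+1]);
# every row shares memory with sigma itself, so no data is copied.
view = as_strided(sigma, shape=(3, 2), strides=sigma.strides * 2)
print(view)
# [[10. 20.]
#  [20. 30.]
#  [30. 40.]]
```

Because writing to sigma[1:] mutates the same buffer the view reads from, the result depends on the ufunc's iteration order, which is exactly the undocumented detail the question worries about.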
In [75]: e2 = np.random.rand(int(1e6))
In [76]: np.allclose(garchModel(e2), garch_model(e2))
Out[76]: True
In [77]: %timeit garchModel(e2)
1 loops, best of 3: 6.93 s per loop
In [78]: %timeit garch_model(e2)
100 loops, best of 3: 17.5 ms per loop
I have tried using Numba, which for my dataset gives a 200x improvement!
Thanks for all the suggestions above, but I can't get them to give me the correct answer. I will try to read up on linear filters, but it's Friday night and I'm a bit too tired to take in any more information.
import numpy as np
from numba import autojit  # note: autojit is removed in modern Numba; use numba.njit instead

@autojit
def garchModel2(e2, beta=0.1, gamma=0.8, omega=0.01):
    sigma = np.empty(len(e2))
    sigma[0] = omega
    for i in range(1, len(e2)):
        sigma[i] = omega + beta * sigma[i-1] + gamma * e2[i-1]
    return sigma
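As a quick sanity check (a sketch with assumed inputs, runnable without Numba installed), the same recursion body in plain Python/NumPy reproduces hand-checkable values for a short series:

```python
import numpy as np

# The same recursion body that @autojit compiles, run without Numba.
def garch_ref(e2, omega=0.01, beta=0.1, gamma=0.8):
    sigma = np.empty(len(e2))
    sigma[0] = omega
    for i in range(1, len(e2)):
        sigma[i] = omega + beta * sigma[i - 1] + gamma * e2[i - 1]
    return sigma

print(garch_ref(np.array([1., 2., 3., 4., 5.])))
# -> 0.01, 0.811, 1.6911, 2.57911, 3.467911
```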
This is a solution based on @stx2's idea. One potential problem is that beta**N may underflow to zero when N becomes large (same with cumprod), after which the division by those weights blows up.
>>> from numpy import array, cumprod, cumsum, hstack
>>> def garchModel2(e2, omega=0.01, beta=0.1, gamma=0.8):
...     wt0 = cumprod(array([beta,]*(len(e2)-1)))
...     wt1 = cumsum(hstack((0., wt0))) + 1
...     wt2 = hstack((wt0[::-1], 1.))*gamma
...     wt3 = hstack((1, wt0))[::-1]*beta
...     pt1 = hstack((0., (array(e2)*wt2)[:-1]))
...     pt2 = wt1*omega
...     return cumsum(pt1)/wt3 + pt2
>>> garchModel([1,2,3,4,5])
array([ 0.01 , 0.811 , 1.6911 , 2.57911 , 3.467911])
>>> garchModel2([1,2,3,4,5])
array([ 0.01 , 0.811 , 1.6911 , 2.57911 , 3.467911])
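For reference, the recursion unrolls into a closed form, which is what the cumprod/cumsum weights above exploit (my reading of the trick, stated as an assumption): sigma[n] = omega*(1 + beta + ... + beta**n) + gamma * sum over k < n of beta**(n-1-k) * e2[k]. A direct check on the same short series:

```python
import numpy as np

omega, beta, gamma = 0.01, 0.1, 0.8
e2 = np.array([1., 2., 3., 4., 5.])
n = len(e2)
sigma = np.empty(n)
for i in range(n):
    # omega picks up one extra factor of beta per lag; each e2[k]
    # is weighted by gamma * beta**(i-1-k)
    sigma[i] = (omega * np.sum(beta ** np.arange(i + 1))
                + gamma * np.sum(beta ** np.arange(i)[::-1] * e2[:i]))
print(sigma)  # matches garchModel([1, 2, 3, 4, 5]) above
```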
>>> f1=lambda: garchModel2(range(5))
>>> f=lambda: garchModel(range(5))
>>> T=timeit.Timer('f()', 'from __main__ import f')
>>> T1=timeit.Timer('f1()', 'from __main__ import f1')
>>> T.timeit(1000)
0.01588106868331031
>>> T1.timeit(1000) #When e2 dimension is small, garchModel2 is slower
0.04536693909403766
>>> f1=lambda: garchModel2(range(10000))
>>> f=lambda: garchModel(range(10000))
>>> T.timeit(1000)
35.745981961394534
>>> T1.timeit(1000) #When e2 dimension is large, garchModel2 is faster
1.7330512676890066
>>> f1=lambda: garchModel2(range(1000000))
>>> f=lambda: garchModel(range(1000000))
>>> T.timeit(50)
167.33835501439427
>>> T1.timeit(50) #The difference is even bigger.
8.587259274572716
I didn't use beta**N but cumprod instead; ** would probably slow it down a lot.
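A small demonstration (not from the original answer) of both points: cumprod over a constant vector equals the successive powers, and those powers underflow to exactly zero once N is large enough, which is the failure mode flagged above:

```python
import numpy as np

beta = 0.1
# cumprod over a constant vector gives beta**1, beta**2, beta**3, ...
powers = np.cumprod(np.full(4, beta))
assert np.allclose(powers, beta ** np.arange(1, 5))

# For large N the weights underflow to exactly 0.0, so dividing by them
# (as the wt3 step does) produces inf/nan instead of the intended values.
tiny = np.cumprod(np.full(400, beta))
print(tiny[-1])  # 0.0
```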
Your calculation is a linear filter of the sequence omega + gamma*e2, so you can use scipy.signal.lfilter. Here's a version of your calculation, with appropriate tweaks of the initial conditions and input of the filter to generate the same output as garchModel:
import numpy as np
from scipy.signal import lfilter
def garch_lfilter(e2, omega=0.01, beta=0.1, gamma=0.8):
    # Linear filter coefficients:
    b = [1]
    a = [1, -beta]
    # Initial condition for the filter:
    zi = np.array([beta*omega])
    # Preallocate the output array, and set the first value to omega:
    sigma = np.empty(len(e2))
    sigma[0] = omega
    # Apply the filter to omega + gamma*e2[:-1]
    sigma[1:], zo = lfilter(b, a, omega + gamma*e2[:-1], zi=zi)
    return sigma
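As a quick sanity check on a hand-checkable input (a self-contained sketch; the short series is the one used in the other answer), the filter setup reproduces the recursion's values, confirming that zi = beta*omega correctly folds the sigma[0] term into the filter state:

```python
import numpy as np
from scipy.signal import lfilter

def garch_lfilter(e2, omega=0.01, beta=0.1, gamma=0.8):
    b = [1]
    a = [1, -beta]
    # zi folds the sigma[0] = omega term into the filter's initial state
    zi = np.array([beta * omega])
    sigma = np.empty(len(e2))
    sigma[0] = omega
    sigma[1:], _ = lfilter(b, a, omega + gamma * e2[:-1], zi=zi)
    return sigma

print(garch_lfilter(np.array([1., 2., 3., 4., 5.])))
# -> 0.01, 0.811, 1.6911, 2.57911, 3.467911
```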
Verify that it gives the same result as @Jaime's function:
In [6]: e2 = np.random.rand(int(1e6))
In [7]: np.allclose(garch_model(e2), garch_lfilter(e2))
Out[7]: True
It is a lot faster than garchModel, but not as fast as @Jaime's function.
Timing for @Jaime's garch_model:
In [8]: %timeit garch_model(e2)
10 loops, best of 3: 21.6 ms per loop
Timing for garch_lfilter:
In [9]: %timeit garch_lfilter(e2)
10 loops, best of 3: 26.8 ms per loop
As @Jaime shows, there's a way. However, I don't know of a way to rewrite the function that makes it much faster while keeping it simple.
An alternative approach, then, is to use optimization "magics" such as Cython or Numba.