I have two lists of n rows by m columns and would like to run an independent regression on each pair of rows. Given the two lists:
l = [[l1, l2, l3, l4, l5],[l6, l7, l8, l9, l10]...]
&
n = [[n1, n2, n3, n4, n5], [n6, n7, n8, n9, n10] ...]
I'd like to regress [l1, l2, l3, l4, l5]
with [n1, n2, n3, n4, n5]
and [l6, l7, l8, l9, l10]
with [n6, n7, n8, n9, n10]
(...) and save the beta values into a list.
I originally attempted to simply use:
regression.linear_model.OLS(l, sm.add_constant(n)).fit()
but it doesn't seem to exhibit the desired behaviour.
Doing
[regression.linear_model.OLS(l[x], sm.add_constant(n[x])).fit() for x in range(len(l))]
however, this takes too long to run, as I have over 80,000 regressions to run.
This looks like you are bootstrapping, yes? The following runs fairly fast for me.
import numpy as np
from scipy import stats

# Simulate the data. Note: np.random.normal here draws a single noise
# value per sample, so each row is shifted by one constant offset
# (which is why the fitted slope comes out as exactly 2 below)
f = lambda x: 2*x + 3 + np.random.normal(0, 0.5)
X = [np.random.rand(5) for i in range(80000)]
Y = [f(x) for x in X]

# Store coefficients here
models = []

# Loop through the data, fitting one regression per row
for x, y in zip(X, Y):
    slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
    # Add the coefficient and the intercept to the list
    models.append([slope, intercept])
np.array(models[:5])
>>>array([[ 2. , 3.47],
[ 2. , 2.66],
[ 2. , 2.94],
[ 2. , 3.01],
[ 2. , 2.75]])