I have been iterating over a dictionary of arrays and applying linear regression for each array element in the dictionary.
from sklearn.linear_model import LinearRegression
model = LinearRegression()
for i in my_dict.keys():
    test = model.fit(x_val.reshape(-1, 1), my_dict[i].reshape(-1, 1))
    coeff = float(test.coef_)
    intercept = float(test.intercept_)
    my_dict[i] = lambda x: coeff * x + intercept
At each iteration, I'm pretty confident that the proper coeff and intercept are being assigned to the lambda function. However, every lambda stored in the dictionary ends up using the coefficient and intercept for the "last" key in the dictionary. I can't put my finger on why that is. Thanks!
Edit: I'm aware I can just assign the linear regressor object to each key instead of using a lambda function (I just prefer lambda functions). However, that hasn't solved this problem.
This is a bit of a quirk in Python -- variable lookup in a closure happens when the function is called, not when it is defined, and it resolves the name in the enclosing scope at that moment. Since your lambdas are defined at module scope (note that for loops do not create a new scope), and since the names coeff and intercept aren't changing -- only the values bound to them are -- each lookup will always see the values from the last iteration of the loop.
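The quirk can be reproduced with a minimal, self-contained loop (no regression involved):

```python
# Each lambda closes over the *name* n, not its current value;
# the lookup happens when the lambda is called.
funcs = []
for n in range(3):
    funcs.append(lambda x: n * x)

# By the time any lambda runs, n is 2, so all three behave identically.
results = [f(10) for f in funcs]
print(results)  # [20, 20, 20]
```

This is exactly what happens with coeff and intercept in your loop: every stored lambda reads the final values.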
To fix this, you can do one of the following:

- Perform the assignment my_dict[i] = lambda x: coeff * x + intercept in a local function, so each lambda closes over that call's own local variables.
- Bake coeff and intercept into the definition of the lambda by capturing them as default arguments: my_dict[i] = lambda x, coeff=coeff, intercept=intercept: coeff * x + intercept
- Store the coeff and intercept values inside of my_dict (or some other container), then pull them out when you need them.
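Here is a sketch of the default-argument option, with hand-picked coeff/intercept pairs standing in for the regression fits (hypothetical values, since x_val and my_dict aren't shown in the question):

```python
# (coeff, intercept) pairs standing in for the fitted values.
my_dict = {"a": (2.0, 1.0), "b": (-3.0, 4.0)}

for key in my_dict:
    coeff, intercept = my_dict[key]
    # Default arguments are evaluated once, at definition time,
    # so each lambda keeps its own coeff and intercept.
    my_dict[key] = lambda x, coeff=coeff, intercept=intercept: coeff * x + intercept

print(my_dict["a"](10.0))  # 21.0
print(my_dict["b"](10.0))  # -26.0
```

The same one-line change drops straight into your loop after the float(test.coef_) and float(test.intercept_) assignments.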