Is it possible to limit the bounds of tuning parameters for linear regression in scikit-learn or statsmodels, e.g. in statsmodels.regression.linear_model.OLS or sklearn.linear_model.LinearRegression?
http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLS.html
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
EDIT:
scipy 0.17 includes scipy.optimize.least_squares with bound constraints (scipy.optimize.leastsq does not support bounds):
Ideally, what I'm looking for is to minimize the objective error function while also minimizing the change of the tuning multiplier parameters from their default value of 1.0. This would likely become part of the objective function itself.
Note that this is the list of options that worked for my box bounds:
method='trf' or 'dogbox'
loss='cauchy'
f_scale=1e-5 to 1e-2
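The setup above can be sketched roughly as follows. This is a minimal, hypothetical example (the data, the penalty weight, and the bound values are all made up): the deviation from the default multipliers of 1.0 is appended to the data residuals, so both are minimized together, and the options listed above are passed to scipy.optimize.least_squares:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 5.0, size=(50, 2))       # hypothetical feature matrix
true_multipliers = np.array([1.1, 0.9])       # assumed "true" multipliers near 1.0
y = X @ true_multipliers + rng.normal(0, 0.1, size=50)

def residuals(params, penalty=0.1):
    # data misfit, plus a penalty on deviation from the default multipliers of 1.0
    return np.concatenate([X @ params - y, penalty * (params - 1.0)])

res = least_squares(
    residuals,
    x0=np.ones(2),         # start at the default multipliers
    bounds=(0.5, 1.5),     # box bounds on the multipliers (example values)
    method='trf',          # 'trf' and 'dogbox' both support bounds
    loss='cauchy',
    f_scale=1e-2,
)
print(res.x)               # estimates stay within the box bounds
```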
Not sure what you mean by "limit the bounds of tuning parameters".
If you'd like the result components to lie in pre-specified ranges, you could try scipy.optimize.least_squares, which solves
minimize F(x) = 0.5 * sum(rho(f_i(x)**2), i = 0, ..., m - 1) subject to lb <= x <= ub
If you're concerned about the result components being too large in magnitude due to collinearity, you can try sklearn.linear_model.Ridge
(or one of the other regularized linear regressors there).
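Ridge doesn't impose hard bounds, but its L2 penalty shrinks coefficients toward zero, which tames the blow-up that collinearity causes in plain OLS. A small sketch with deliberately near-collinear features (the data and alpha value are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
# Two nearly collinear features: the second is the first plus tiny noise
X = np.hstack([x, x + rng.normal(0, 1e-4, size=(100, 1))])
y = X[:, 0] + X[:, 1] + rng.normal(0, 0.1, size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

print(ols.coef_)    # can blow up in magnitude under collinearity
print(ridge.coef_)  # shrunk, much smaller in magnitude
```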