
Limit bounds of tuning parameters for linear regression in scikit-learn or statsmodels

Is it possible to limit the bounds of the tuning parameters for linear regression in scikit-learn or statsmodels, e.g. in statsmodels.regression.linear_model.OLS or sklearn.linear_model.LinearRegression?

http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLS.html

http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html

EDIT:


scipy 0.17 includes scipy.optimize.least_squares with bound constraints:

http://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.optimize.least_squares.html#scipy.optimize.least_squares

Ideally, I'm looking to minimize the objective error function while also minimizing the deviation of the tuning multiplier parameters from their default value of 1.0. That deviation penalty is likely part of the objective function.

Note that these are the options that worked for my box bounds:

method='trf' or 'dogbox'
loss='cauchy'
f_scale=1e-5 to 1e-2
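Putting the pieces together, a minimal sketch of this approach might look like the following. The data, multiplier names, bounds, and penalty weight are all hypothetical; the idea is to append the deviation of each multiplier from 1.0 as extra residuals, so least_squares minimizes both the fit error and the deviation subject to box bounds:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: the model is y ≈ X @ m, where m are tuning
# multipliers that default to 1.0 and must stay within box bounds.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 2.0, size=(50, 2))        # base model outputs per component
true_m = np.array([1.1, 0.9])                  # "true" multipliers for the demo
y = X @ true_m + rng.normal(0.0, 0.01, size=50)

penalty_weight = 0.1                           # assumed trade-off weight

def residuals(m):
    fit_err = X @ m - y
    deviation = penalty_weight * (m - 1.0)     # pull multipliers toward 1.0
    return np.concatenate([fit_err, deviation])

res = least_squares(
    residuals,
    x0=np.ones(2),          # start at the default multipliers
    bounds=(0.8, 1.2),      # box bounds on each multiplier
    method='trf',           # 'trf' and 'dogbox' both support bounds
    loss='cauchy',
    f_scale=1e-2,
)
print(res.x)                # estimates stay inside [0.8, 1.2]
```

The bounds are enforced by the solver itself, while the extra residual rows act as a soft regularizer toward 1.0; raising penalty_weight keeps the multipliers closer to their defaults at the cost of fit error.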

Not sure what you mean by "limit the bounds of tuning parameters".

  • If you'd like the result components to lie in pre-specified ranges, you could try scipy.optimize.least_squares, which solves

    minimize F(x) = 0.5 * sum(rho(f_i(x)**2), i = 0, ..., m - 1) subject to lb <= x <= ub

  • If you're concerned about the result components being too large in magnitude due to colinearity, you can try sklearn.linear_model.Ridge (or one of the other regularized linear regressors there).
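The second suggestion can be illustrated with a short sketch (the data here is made up for the demo): two nearly colinear predictors, where ordinary least squares can produce wildly large offsetting coefficients, but Ridge's L2 penalty shrinks them to moderate values:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Demo data: two almost-identical (colinear) columns.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
X = np.hstack([x, x + rng.normal(0.0, 1e-3, size=(100, 1))])
y = X.sum(axis=1) + rng.normal(0.0, 0.1, size=100)

# The alpha=1.0 penalty keeps the coefficients from blowing up
# in opposite directions, as unpenalized OLS might do here.
model = Ridge(alpha=1.0).fit(X, y)
print(model.coef_)
```

Note that Ridge only discourages large coefficients; unlike least_squares with bounds, it does not guarantee the estimates stay inside a hard interval.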
