
Simple linear regression with constraint

I have developed an algorithm that loops through 15 variables and produces a simple OLS regression for each one. The algorithm then loops a further 11 times, producing the same 15 OLS regressions but with the lag of the X variable increased by one each time. I select the independent variables with the highest r^2 and use the optimum lag for 3, 4 or 5 variables, i.e.

Y_{t+1} - Y_t = B(X_{t+k} - X_t) + e
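A rough sketch of that lag-search loop might look like the following (the column names, the lag range of 1 to 11, and the use of statsmodels are assumptions for illustration, not the original code):

import numpy as np
import pandas as pd
import statsmodels.api as sm

def best_lag_per_variable(df, y_col='Y', max_lag=11):
    """For each X column, regress Y_{t+1} - Y_t on X_{t+k} - X_t for
    k = 1..max_lag and keep the lag with the highest r^2 (sketch only)."""
    dY = df[y_col].shift(-1) - df[y_col]        # Y_{t+1} - Y_t
    best_lags = {}
    for col in df.columns.drop(y_col):
        best = (None, -np.inf)                  # (lag, r^2)
        for k in range(1, max_lag + 1):
            dX = df[col].shift(-k) - df[col]    # X_{t+k} - X_t
            data = pd.concat([dY, dX], axis=1, keys=['dY', 'dX']).dropna()
            fit = sm.OLS(data['dY'], data['dX']).fit()
            if fit.rsquared > best[1]:
                best = (k, fit.rsquared)
        best_lags[col] = best
    return best_lags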

My dataset looks like this:我的数据集如下所示:

import numpy as np
import pandas as pd
Regression = pd.DataFrame(np.random.randint(low=0, high=10, size=(100, 6)),
                          columns=['Y', 'X1', 'X2', 'X3', 'X4', 'X5'])

The OLS regression I have fitted so far uses the following code:

import statsmodels.api as sm

Y = Regression['Y']
X = Regression[['X1','X2','X3']]

Model = sm.OLS(Y, X).fit()
predictions = Model.predict(X)

Model.summary()

The issue is that with OLS you can get negative coefficients (which I do). I would appreciate help constraining this model so that:

sum(B_i) = 1

B_i >= 0

This works nicely:

from scipy.optimize import minimize
import numpy as np

# Convert to NumPy arrays so that positional slicing (X[:, 0] etc.) works
Y = Regression['Y'].to_numpy()
X = Regression[['X1', 'X2', 'X3']].to_numpy()

# Define the model
model = lambda b, X: b[0] * X[:, 0] + b[1] * X[:, 1] + b[2] * X[:, 2]

# The objective function to minimize (least-squares regression)
obj = lambda b, Y, X: np.sum(np.abs(Y - model(b, X))**2)

# Bounds: b[0], b[1], b[2] >= 0
bnds = [(0, None), (0, None), (0, None)]

# Constraint: b[0] + b[1] + b[2] - 1 = 0
cons = [{"type": "eq", "fun": lambda b: b[0] + b[1] + b[2] - 1}]

# Initial guess for b[0], b[1], b[2]:
xinit = np.array([0, 0, 1])

res = minimize(obj, args=(Y, X), x0=xinit, bounds=bnds, constraints=cons)

print(f"b1={res.x[0]}, b2={res.x[1]}, b3={res.x[2]}")

# Save the coefficients for further analysis of goodness of fit
beta1 = res.x[0]
beta2 = res.x[1]
beta3 = res.x[2]
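For the goodness-of-fit comparison mentioned in the question, a minimal way to compute R^2 for the constrained fit could look like this (a sketch, assuming Y, X, model and res are the arrays and objects defined above):

import numpy as np

fitted = model(res.x, X)  # in-sample predictions from the constrained coefficients
resid = Y - fitted
r_squared = 1.0 - np.sum(resid ** 2) / np.sum((Y - Y.mean()) ** 2)
print(f"Constrained-fit R^2: {r_squared:.4f}")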

Per the comments, here is an example of using scipy's differential_evolution module to determine bounded parameter estimates. The module initializes its search with Latin Hypercube sampling to ensure thorough coverage of parameter space, and it requires bounds within which to search, though these bounds can be generous. By default, differential_evolution ends by polishing the best candidate with a bounded local minimizer (this polishing step can be disabled), so to ensure the final fitted parameters are not bounded this example makes a later call to curve_fit() without passing the bounds. You can see from the printed results that the call to differential_evolution leaves the first parameter pinned at its bound of -0.185, which is not the case for the results from the later call to curve_fit(). In your case you could make the lower bound zero so that the parameter cannot be negative, but if the code returns a parameter at or very near the bound, this is not optimal, as shown in this example.

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([19.1647, 18.0189, 16.9550, 15.7683, 14.7044, 13.6269, 12.6040, 11.4309, 10.2987, 9.23465, 8.18440, 7.89789, 7.62498, 7.36571, 7.01106, 6.71094, 6.46548, 6.27436, 6.16543, 6.05569, 5.91904, 5.78247, 5.53661, 4.85425, 4.29468, 3.74888, 3.16206, 2.58882, 1.93371, 1.52426, 1.14211, 0.719035, 0.377708, 0.0226971, -0.223181, -0.537231, -0.878491, -1.27484, -1.45266, -1.57583, -1.61717])
yData = numpy.array([0.644557, 0.641059, 0.637555, 0.634059, 0.634135, 0.631825, 0.631899, 0.627209, 0.622516, 0.617818, 0.616103, 0.613736, 0.610175, 0.606613, 0.605445, 0.603676, 0.604887, 0.600127, 0.604909, 0.588207, 0.581056, 0.576292, 0.566761, 0.555472, 0.545367, 0.538842, 0.529336, 0.518635, 0.506747, 0.499018, 0.491885, 0.484754, 0.475230, 0.464514, 0.454387, 0.444861, 0.437128, 0.415076, 0.401363, 0.390034, 0.378698])


def func(t, n_0, L, offset): #exponential curve fitting function
    return n_0*numpy.exp(-L*t) + offset


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)


def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)

    parameterBounds = []
    parameterBounds.append([-0.185, maxX]) # search bounds for n_0
    parameterBounds.append([minX, maxX]) # search bounds for L
    parameterBounds.append([0.0, maxY]) # search bounds for offset

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution finishes by polishing the best
# candidate with a bounded local minimizer
geneticParameters = generate_Initial_Parameters()
print('fit with parameter bounds (note the -0.185)')
print(geneticParameters)
print()

# second call to curve_fit made with no bounds for comparison
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)

print('re-fit with no parameter bounds')
print(fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters) 

absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

I am not aware of an easy way to restrict the coefficients directly, but I have two alternative suggestions:

1 - Use the inverse (1/x) of the time series that produce negative coefficients. This requires you to run the ordinary regression first and then invert the series that have negative relationships. Get the weights and compute wi/sum(wi).

2 - It seems that you are dealing with time series, so use logarithmic differences (np.log(ts).diff().dropna()) as inputs and get the weights. Divide them by the sum of the weights if necessary and revert your estimates with np.exp(predicted_ts.cumsum()); a sketch follows below.
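A rough sketch of the second suggestion, assuming a hypothetical DataFrame ts with one target column and two predictor columns (the column names, the OLS fit via statsmodels, and the rescaling by the initial level are illustrative assumptions):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical positive-valued series: one target and two predictors
ts = pd.DataFrame(np.random.rand(100, 3) + 0.5, columns=['Y', 'X1', 'X2'])

# Log differences as inputs (dropna removes the leading NaN row)
log_diff = np.log(ts).diff().dropna()

# Fit on the log-differenced data and normalize the weights to sum to one
fit = sm.OLS(log_diff['Y'], log_diff[['X1', 'X2']]).fit()
weights = fit.params / fit.params.sum()

# Predicted log differences, reverted to levels via exp(cumsum());
# multiplying by the first observation rescales to the original units
predicted_ts = log_diff[['X1', 'X2']].dot(weights)
reverted = np.exp(predicted_ts.cumsum()) * ts['Y'].iloc[0]
print(reverted.head())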
