
Simple linear regression with constraint

I have developed an algorithm that loops through 15 variables and produces a simple OLS regression for each one. The algorithm then loops a further 11 times, producing the same 15 OLS regressions but with the lag of the X variable increased by one each time. I select the independent variables with the highest R^2 and use the optimum lag for 3, 4 or 5 variables, i.e.

Y_{t+1} - Y_t = B(X_{t+k} - X_t) + e

My dataset looks like this:

import numpy as np
import pandas as pd

Regression = pd.DataFrame(np.random.randint(low=0, high=10, size=(100, 6)),
                          columns=['Y', 'X1', 'X2', 'X3', 'X4', 'X5'])
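
The variable/lag search loop looks roughly like this (simplified to the five columns above; the lag range and helper names are only illustrative):

import statsmodels.api as sm

results = []
dY = Regression['Y'].shift(-1) - Regression['Y']          # Y_{t+1} - Y_t
for col in ['X1', 'X2', 'X3', 'X4', 'X5']:
    for k in range(1, 12):                                 # lag k grows by one per pass
        dX = Regression[col].shift(-k) - Regression[col]   # X_{t+k} - X_t
        data = pd.concat([dY, dX], axis=1, keys=['dY', 'dX']).dropna()
        fit = sm.OLS(data['dY'], data['dX']).fit()
        results.append((col, k, fit.rsquared))

# keep the best lag per variable, then rank the variables by R^2
best = (pd.DataFrame(results, columns=['variable', 'lag', 'r2'])
          .sort_values('r2', ascending=False)
          .drop_duplicates('variable'))
print(best.head(5))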

The OLS regression I have fitted so far uses the following code:

import statsmodels.api as sm

Y = Regression['Y']
X = Regression[['X1', 'X2', 'X3']]

Model = sm.OLS(Y, X).fit()   # no constant term, matching the differenced model above
predictions = Model.predict(X)

Model.summary()
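
The fitted coefficients can be inspected directly, which is where the negative values show up:

print(Model.params)  # estimated coefficients for X1, X2, X3; some are negative for my data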

The issue is that with OLS you can get negative coefficients (which I do). I'd appreciate help constraining this model with the following:

sum(B_i) = 1

B_i >= 0

This works nicely:

import numpy as np
from scipy.optimize import minimize

# Make sure X and Y are NumPy arrays so the column indexing below works
X = Regression[['X1', 'X2', 'X3']].to_numpy()
Y = Regression['Y'].to_numpy()

# Define the model
model = lambda b, X: b[0] * X[:, 0] + b[1] * X[:, 1] + b[2] * X[:, 2]

# The objective function to minimize (least-squares regression)
obj = lambda b, Y, X: np.sum(np.abs(Y - model(b, X)) ** 2)

# Bounds: b[0], b[1], b[2] >= 0
bnds = [(0, None), (0, None), (0, None)]

# Constraint: b[0] + b[1] + b[2] - 1 = 0
cons = [{"type": "eq", "fun": lambda b: b[0] + b[1] + b[2] - 1}]

# Initial guess for b[0], b[1], b[2]:
xinit = np.array([0, 0, 1])

res = minimize(obj, args=(Y, X), x0=xinit, bounds=bnds, constraints=cons)

print(f"b1={res.x[0]}, b2={res.x[1]}, b3={res.x[2]}")

# Save the coefficients for further analysis on goodness of fit
beta1 = res.x[0]
beta2 = res.x[1]
beta3 = res.x[2]
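
As a rough sketch of that follow-up check, R^2 for the constrained fit can be computed from res.fun, which holds the minimised sum of squared residuals (assumes res, Y and np from the snippet above):

# Rough goodness-of-fit check for the constrained model
ss_res = res.fun                          # minimised sum of squared residuals
ss_tot = np.sum((Y - np.mean(Y)) ** 2)    # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"Constrained R^2: {r_squared:.4f}")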

Per the comments, here is an example of using scipy's differential_evolution module to determine bounded parameter estimates. The module uses Latin Hypercube sampling to seed a thorough search of parameter space and requires bounds within which to search, though those bounds can be generous. By default (polish=True), differential_evolution finishes by polishing the best candidate with a bounded local minimizer, so the parameters it returns respect the bounds; this polishing can be disabled with polish=False. To ensure the final fitted parameters are not bounded, this example makes a later call to curve_fit() without passing bounds. You can see from the printed results that the differential_evolution result has its first parameter sitting at the -0.185 bound, while the later curve_fit() result does not. In your case you could set the lower bound to zero so that the parameter cannot be negative, but if the fit ends with a parameter at or very near a bound, the bounded result is not the unconstrained optimum, as this example shows.

import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([19.1647, 18.0189, 16.9550, 15.7683, 14.7044, 13.6269, 12.6040, 11.4309, 10.2987, 9.23465, 8.18440, 7.89789, 7.62498, 7.36571, 7.01106, 6.71094, 6.46548, 6.27436, 6.16543, 6.05569, 5.91904, 5.78247, 5.53661, 4.85425, 4.29468, 3.74888, 3.16206, 2.58882, 1.93371, 1.52426, 1.14211, 0.719035, 0.377708, 0.0226971, -0.223181, -0.537231, -0.878491, -1.27484, -1.45266, -1.57583, -1.61717])
yData = numpy.array([0.644557, 0.641059, 0.637555, 0.634059, 0.634135, 0.631825, 0.631899, 0.627209, 0.622516, 0.617818, 0.616103, 0.613736, 0.610175, 0.606613, 0.605445, 0.603676, 0.604887, 0.600127, 0.604909, 0.588207, 0.581056, 0.576292, 0.566761, 0.555472, 0.545367, 0.538842, 0.529336, 0.518635, 0.506747, 0.499018, 0.491885, 0.484754, 0.475230, 0.464514, 0.454387, 0.444861, 0.437128, 0.415076, 0.401363, 0.390034, 0.378698])


def func(t, n_0, L, offset): #exponential curve fitting function
    return n_0*numpy.exp(-L*t) + offset


# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore") # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)


def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)

    parameterBounds = []
    parameterBounds.append([-0.185, maxX]) # search bounds for n_0
    parameterBounds.append([minX, maxX]) # search bounds for L
    parameterBounds.append([0.0, maxY]) # search bounds for offset

    # "seed" the numpy random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default (polish=True), differential_evolution polishes its best result
# with a bounded local minimizer, so the returned parameters respect the bounds
geneticParameters = generate_Initial_Parameters()
print('fit with parameter bounds (note the -0.185)')
print(geneticParameters)
print()

# curve_fit called without bounds for comparison, using the genetic algorithm
# result as the initial guess
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)

print('re-fit with no parameter bounds')
print(fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters) 

absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)

print()


##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData,  'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

I am not aware of an easy way to restrict the coefficients directly, but I have two alternative suggestions:

1. Use the inverse (1/x) of the time series that produce negative coefficients. This requires you to run the normal regression first and then invert the series that have negative relationships. Get the weights and compute w_i / sum(w_i).

2. Since you appear to be dealing with time series, use logarithmic differences (np.log(ts).diff().dropna()) as inputs and get the weights. Divide by the sum of the weights if necessary, and revert your estimations with np.exp(predicted_ts.cumsum()); a sketch follows below.
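
A minimal sketch of the second suggestion, using a hypothetical DataFrame prices of positive-valued level series with columns 'Y', 'X1', 'X2', 'X3' (the data and column names are placeholders):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical positive-valued level series standing in for the real data
rng = np.random.default_rng(0)
prices = pd.DataFrame(np.exp(rng.normal(0, 0.01, size=(100, 4)).cumsum(axis=0)),
                      columns=['Y', 'X1', 'X2', 'X3'])

# Logarithmic differences as regression inputs
log_diffs = np.log(prices).diff().dropna()
fit = sm.OLS(log_diffs['Y'], log_diffs[['X1', 'X2', 'X3']]).fit()

# Normalise the estimated coefficients so they sum to 1
weights = fit.params / fit.params.sum()

# Predicted log-differences, reverted back to the level of Y
predicted_ld = log_diffs[['X1', 'X2', 'X3']].dot(weights)
predicted_level = prices['Y'].iloc[0] * np.exp(predicted_ld.cumsum())
print(predicted_level.tail())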
