How do you stop the scipy.optimize.fmin function and start it again using the optimization history?

For optimizations that take a long time (tens of minutes, hours, days, or months), it is necessary to be able to restart an optimization from its history in case of a program crash or power failure, but the fmin algorithm (and several others) provide the history neither as an output nor as an input. How can you save and reuse fmin's optimization history so that your computational investment isn't lost?

I had this question yesterday morning and couldn't find the answer anywhere online, so I put together my own solution. See below.

Basically, the answer to real-time monitoring, history recording, and resuming fmin is to wrap the objective function so that it stores its inputs and outputs in a lookup table. Here is how it's done:

import numpy as np
import scipy as sp
import scipy.optimize

To store the history, create a global history array for the inputs and a global history array for the objective function values at those inputs. I initialize the starting point here as well; the 1e8 entries are sentinel rows that get overwritten on the first real evaluation:

x0 = np.array([1.05, 0.95])          # starting point for the optimization
x_history = np.array([[1e8, 1e8]])   # sentinel row, replaced on the first evaluation
fx_history = np.array([[1e8]])       # matching sentinel for the objective values
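
One caveat: these globals only survive within a single Python session, so a literal power failure would still wipe them out. Below is a minimal sketch of persisting the history to disk, assuming the file names x_history.npy and fx_history.npy are free to use; save_history is a helper name I made up, while np.save and np.load are standard NumPy routines:

import os

def save_history():
    """Write the history arrays to disk so they survive a real crash."""
    np.save("x_history.npy", x_history)
    np.save("fx_history.npy", fx_history)

# On startup, reload any history a previous run left behind. If this
# branch runs, firstPoint (defined further below) should start out False
# so the sentinel-overwrite step in the wrapper is skipped.
if os.path.exists("x_history.npy"):
    x_history = np.load("x_history.npy")
    fx_history = np.load("fx_history.npy")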

I'm optimizing the Rosenbrock function here, since it's the typical optimization benchmark:

def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
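
For reference, the global minimum is f(1, 1) = 0, which is where the output below converges; SciPy also ships this same test function as scipy.optimize.rosen, so the hand-written version can be sanity-checked:

# The global minimum of the Rosenbrock function is at (1, 1), where f = 0.
print(rosen(np.array([1.0, 1.0])))        # 0.0
print(rosen(x0), sp.optimize.rosen(x0))   # the two versions should agree: 2.328125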

Create a wrapper around the function you are optimizing, and write to the global variables whenever the optimizer requests new inputs and the objective function value is computed. On each call, search the history to see whether the objective value for the requested vector has already been calculated, and if it has, reuse the stored value. To "simulate" a power failure, I created a variable called powerFailure that ends the optimization before it has converged. I then turned powerFailure off to watch the optimization finish.

def f(x):
    global firstPoint, iteration, x_history, fx_history

    iteration += 1
    powerFailure = True     # set to False to let the optimization finish
    failedIteration = 10    # iteration at which the simulated failure occurs
    previousPoint = False
    eps = 1e-12             # tolerance for matching a point in the history

    # Simulate a power failure by aborting partway through the optimization.
    if powerFailure and iteration == failedIteration:
        raise Exception("Optimization Ended Early Due to Power Failure")

    # Look up the requested point in the history and reuse its value if found.
    for i in range(len(x_history)):
        if abs(x_history[i, 0] - x[0]) < eps and abs(x_history[i, 1] - x[1]) < eps:
            previousPoint = True
            firstPoint = False
            fx = fx_history[i, 0]
            print("%d: f(%f,%f)=%f (using history)" % (iteration, x[0], x[1], fx))
            break

    # Otherwise evaluate the (expensive) objective function.
    if not previousPoint:
        fx = rosen(x)
        print("%d: f(%f,%f)=%f" % (iteration, x[0], x[1], fx))

    # Record the point, overwriting the sentinel row on the first evaluation.
    if firstPoint:
        x_history = np.atleast_2d([x])
        fx_history = np.atleast_2d([fx])
        firstPoint = False
    else:
        x_history = np.concatenate((x_history, np.atleast_2d(x)), axis=0)
        fx_history = np.concatenate((fx_history, np.atleast_2d(fx)), axis=0)

    return fx
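
The coordinate-by-coordinate comparison above is hard-coded for two dimensions, and the linear scan over the history gets slow as it grows. A sketch of an alternative lookup, assuming the inputs can be rounded and used as dictionary keys (the names cache and rosen_memo are mine, not part of the original answer):

cache = {}  # maps rounded input tuples to previously computed objective values

def rosen_memo(x):
    # Round to a fixed precision so floating-point noise does not defeat
    # the lookup; 12 decimals roughly mirrors the eps = 1e-12 tolerance above.
    key = tuple(np.round(x, 12))
    if key not in cache:
        cache[key] = rosen(x)  # only evaluate on a cache miss
    return cache[key]

This works for inputs of any dimension and makes each lookup O(1) instead of a scan over the whole history.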

Finally, we run the optimization.

firstPoint = True
iteration = 0

xopt, fopt, iters, funcalls, warnflag, allvecs = sp.optimize.fmin(
    f, x0, full_output=True, xtol=0.9, retall=True)
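
In a real deployment (rather than the simulated failure) you would also want the history flushed to disk when anything goes wrong, so the next session can replay it. A sketch of such a driver, assuming the save_history helper sketched earlier:

try:
    xopt, fopt, iters, funcalls, warnflag, allvecs = sp.optimize.fmin(
        f, x0, full_output=True, xtol=0.9, retall=True)
except Exception:
    save_history()  # flush the history before the process dies
    raise

Note that a literal power failure raises no exception at all, so calling save_history() at the end of every f(x) evaluation is the more robust option for that case.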

With the power failure enabled, the optimization dies on iteration 10, after nine points have been evaluated and stored. After "turning the power back on" (setting powerFailure = False) and rerunning, the function prints

1: f(1.050000,0.950000)=2.328125 (using history)
2: f(1.102500,0.950000)=7.059863 (using history)
3: f(1.050000,0.997500)=1.105000 (using history)
4: f(0.997500,0.997500)=0.000628 (using history)
5: f(0.945000,1.021250)=1.647190 (using history)
6: f(0.997500,1.045000)=0.249944 (using history)
7: f(0.945000,1.045000)=2.312665 (using history)
8: f(1.023750,1.009375)=0.150248 (using history)
9: f(1.023750,0.961875)=0.743420 (using history)
10: f(1.004063,1.024219)=0.025864
11: f(0.977813,1.012344)=0.316634
12: f(1.012266,1.010117)=0.021363
13: f(1.005703,0.983398)=0.078659
14: f(1.004473,1.014014)=0.002569
15: f(0.989707,1.001396)=0.047964
16: f(1.006626,1.007937)=0.002916
17: f(0.995347,1.003577)=0.016564
18: f(1.003806,1.006847)=0.000075
19: f(0.996833,0.990333)=0.001128
20: f(0.998743,0.996253)=0.000154
21: f(1.005049,1.005600)=0.002072
22: f(0.999387,0.999525)=0.000057
Optimization terminated successfully.
         Current function value: 0.000057
         Iterations: 11
         Function evaluations: 22
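
Replaying the history works because fmin re-requests exactly the same sequence of points, and each replayed point costs only a cheap table lookup. An alternative, sketched below under the assumption that restarting the simplex is acceptable for your problem, is to warm-start from the best point recorded so far:

# Warm-start: restart fmin from the best point in the history rather than
# replaying the whole evaluation sequence.
best = np.argmin(fx_history[:, 0])
xbest = x_history[best]
xopt = sp.optimize.fmin(f, xbest, xtol=0.9)

The trade-off is that this discards the internal simplex state, which is exactly what the replay approach preserves, so it may take more iterations to converge.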
