
Optimizing a huge number of calls to fsolve in MATLAB

I'm solving a pair of non-linear equations for each voxel in a dataset of a ~billion voxels using fsolve() in MATLAB 2016b.

[image: the pair of equations; see the EDIT below for the full form]

I have done all the 'easy' optimizations I'm aware of. Memory locality is OK, I'm using parfor , and the equations are in a fairly simple numerical form. All discontinuities of the integrand are fed to integral() . I'm using the Levenberg-Marquardt algorithm with good starting values and a suitable initial damping constant; it converges in 6 iterations on average.

I'm now at ~6 ms per voxel, which is good, but not good enough. I'd need an order of magnitude reduction to make the technique viable. There are only a few things I can think of improving before starting to trade away accuracy:

The splines in the equations are for quick sampling of complicated expressions. There are two per equation; one sits inside the 'complicated nonlinear equation'. They represent two functions: one has a large number of terms but is smooth and has no discontinuities, and the other approximates a histogram drawn from a spectrum. I'm using griddedInterpolant() , as the editor suggested.

Is there a faster way to sample points from pre-calculated distributions?

parfor i = 1:numel(I1)
    % (schematic; the 6 static inputs and the fsolve options are omitted)
    sols = fsolve(@(x) equationPair(x, input1, input2), x0, options);
    output1(i) = sols(1);
    output2(i) = sols(2);
end

When calling fsolve , I'm using the 'parametrization' suggested by MathWorks to pass in the variables. I have a nagging feeling that defining an anonymous function for each voxel is taking a large slice of the time at this point. Is this true? Is there a relatively large overhead for defining the anonymous function again and again? Do I have any way to vectorize the call to fsolve ?

There are two input variables that keep changing; all of the other input variables stay static. I need to solve one equation pair for each input pair, so I can't assemble everything into one huge system and solve it all at once. Do I have any options other than fsolve for solving pairs of nonlinear equations?

If not: some of the static inputs are fairly large. Is there a way to keep them as persistent variables using MATLAB's persistent , and would that improve performance? I've only seen examples of how to load persistent variables; how could I arrange for them to be passed in only once, so that future function calls are spared the assumedly largish overhead of the large inputs?
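A minimal sketch of the persistent pattern in question, assuming the objective lives in its own function file (all names here are hypothetical):

```matlab
function F = equationPairCached(x, input1, input2, staticInputs)
% Cache the large static inputs so they only cross the function
% boundary once, instead of on every fsolve iteration.
persistent S
if nargin == 4              % initialization call: store the statics once
    S = staticInputs;
    F = [];
    return
end
% ... evaluate the two residuals using x, input1, input2 and S ...
F = zeros(2, 1);            % placeholder for the two residuals
end
```

You would call equationPairCached([], [], [], staticInputs) once before the loop, then pass @(x) equationPairCached(x, input1, input2) to fsolve. Two caveats: inside parfor each worker has its own persistent state, so each worker must be initialized (e.g. via parfevalOnAll); and MATLAB arrays are copy-on-write, so merely passing a large array that is not modified is usually cheaper than it looks.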

EDIT:

The original equations in full form look like:

    I_high = ∫[0, E_high] S_high(E) * exp(x_1/E^3 + x_2*f_KN(E)) dE

    I_low  = ∫[0, E_low]  S_low(E)  * exp(x_1/E^3 + x_2*f_KN(E)) dE

Where:

    [image: the definition of f_KN, lost]

and:

    [image: the definition of Α, lost]

Everything else is known; I'm solving for x_1 and x_2. f_KN was approximated by a spline. S_low(E) and S_high(E) are splines; the histograms they are drawn from look like:

    [image: the low- and high-energy spectra]

So, there's a few things I thought of:

Lookup table

Because the integrals in your function do not depend on any of the parameters other than x , you could make a simple 2D-lookup table from them:

% assuming simple (square) range here, adjust as needed
[x1,x2]  = meshgrid( linspace(0, xmax, N) );

LUT_high = zeros(size(x1));
LUT_low  = zeros(size(x1));

for ii = 1:N        

    LUT_high(:,ii) = integral(@(E) Fhi(E, x1(1,ii), x2(:,ii)), ...
                              0, E_high, ...
                              'ArrayValued', true);

    LUT_low(:,ii) = integral(@(E) Flo(E, x1(1,ii), x2(:,ii)), ...
                             0, E_low, ...
                             'ArrayValued', true);

end 

where Fhi and Flo are helper functions to compute those integrals, vectorized with scalar x1 and vector x2 in this example. Set N as high as memory will allow.

Those lookup tables you then pass as parameters to equationPair() (which allows parfor to distribute the data). Then just use interp2 in equationPair() :

F(1) = I_high - interp2(x1,x2,LUT_high, x(1), x(2));
F(2) = I_low  - interp2(x1,x2,LUT_low , x(1), x(2));

So, instead of recomputing the whole integral every time , you evaluate it once for the expected range of x , and reuse the outcomes.

You can specify the interpolation method used, which is 'linear' by default. Specify 'cubic' if you're really concerned about accuracy.
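Since the question already uses griddedInterpolant, the lookup tables can also be wrapped in griddedInterpolant objects once and reused, which is typically faster than repeated interp2 calls. A sketch (names assumed; note that griddedInterpolant expects ndgrid ordering, so the meshgrid-based tables must be transposed):

```matlab
% Build the interpolants once, outside the solver loop
xv    = linspace(0, xmax, N);
Ghigh = griddedInterpolant({xv, xv}, LUT_high.', 'linear');
Glow  = griddedInterpolant({xv, xv}, LUT_low.',  'linear');

% Then inside equationPair():
F(1) = I_high - Ghigh(x(1), x(2));
F(2) = I_low  - Glow (x(1), x(2));
```

The interpolant objects can be passed to equationPair() as extra parameters, just like the raw tables.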

Coarse/Fine

Should the lookup table method not be possible for some reason (memory limitations, say, or the possible range of x being too big), here's another thing you could do: split the whole procedure into two parts, which I'll call coarse and fine .

The intent of the coarse method is to improve your initial estimates really quickly, but perhaps not so accurately. The quickest way to approximate that integral by far is via the rectangle method :

  • Do not approximate S with a spline ; just use the original tabulated data (so S_high/low = [S_high/low@E0, S_high/low@E1, ..., S_high/low@E_high/low] ).
  • At the same values for E as used by the S data ( E0 , E1 , ...), evaluate the exponential at x :

     Elo = linspace(0, E_low, numel(S_low)).';
     integrand_exp_low = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));

     Ehi = linspace(0, E_high, numel(S_high)).';
     integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));

    then use the rectangle method:

     F(1) = I_high - (S_high * integrand_exp_high) * (Ehi(2) - Ehi(1));
     F(2) = I_low  - (S_low  * integrand_exp_low ) * (Elo(2) - Elo(1));

Running fsolve like this for all I_low and I_high will then improve your initial estimates x0 , probably to a point close to "actual" convergence.

Alternatively, instead of the rectangle method, you use trapz ( trapezoidal method ). A tad slower, but possibly a bit more accurate.
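Using the tabulated integrands from the previous step, the trapz variant is a one-liner per residual (this assumes, as above, that S_high / S_low are row vectors and the integrands are column vectors):

```matlab
F(1) = I_high - trapz(Ehi, S_high.' .* integrand_exp_high);
F(2) = I_low  - trapz(Elo, S_low.'  .* integrand_exp_low);
```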

Note that if (Elo(2) - Elo(1)) == (Ehi(2) - Ehi(1)) (step sizes are equal), you can further reduce the number of computations. In that case, the first N_low elements of the two integrands are identical, so the values of the exponentials will only differ in the N_low + 1 : N_high elements. So then just compute integrand_exp_high , and set integrand_exp_low equal to the first N_low elements of integrand_exp_high .
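In code, that shared-prefix trick could look like this (assuming equal step sizes, with N_low = numel(S_low) and N_high = numel(S_high)):

```matlab
Ehi = linspace(0, E_high, N_high).';
integrand_exp_high = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));
% equal steps: the low-energy grid is just the first N_low points of Ehi,
% so the low-energy integrand comes for free
integrand_exp_low = integrand_exp_high(1:N_low);
```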

The fine method then uses your original implementation (with the actual integral() s), but then starting at the updated initial estimates from the coarse step.

The whole objective here is to try and bring the total number of iterations needed down from about 6 to less than 2. Perhaps you'll even find that the trapz method already provides enough accuracy, rendering the whole fine step unnecessary.

Vectorization

The rectangle method in the coarse step outlined above is easy to vectorize:

% (uses R2016b implicit expansion rules)

Elo = linspace(0, E_low, numel(S_low));
integrand_exp_low  = exp(x(:,1)./Elo.^3 + x(:,2).*fKN(Elo));    % N-by-numel(Elo)

Ehi = linspace(0, E_high, numel(S_high));
integrand_exp_high = exp(x(:,1)./Ehi.^3 + x(:,2).*fKN(Ehi));    % N-by-numel(Ehi)

% S_high/S_low are row vectors, so transpose them for the matrix product
F = [I_high_vector - (integrand_exp_high * S_high.') * (Ehi(2) - Ehi(1))
     I_low_vector  - (integrand_exp_low  * S_low.' ) * (Elo(2) - Elo(1))];

trapz also works on matrices; by default it integrates down each column, so pass the dimension argument ( trapz(E, Y, 2) ) to integrate along the rows here.

You'd then call equationPair() using x0 = [x01; x02; ...; x0N] , and fsolve will converge to [x1; x2; ...; xN] , where N is the number of voxels and each x0 row is 1×2 ( [x(1) x(2)] ), so x0 is N×2.

parfor should be able to slice all of this fairly easily over all the workers in your pool.
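Putting it together, the single vectorized fsolve call over all voxels might look like this (all names are assumed; equationPairVectorized is the vectorized coarse objective sketched above, with the static S and E parameters omitted for brevity):

```matlab
x0_all = [x01_vector, x02_vector];                 % N-by-2 initial estimates
opts   = optimoptions('fsolve', 'Algorithm', 'levenberg-marquardt');
sols   = fsolve(@(x) equationPairVectorized(x, I_high_vector, I_low_vector), ...
                x0_all, opts);
output1 = sols(:, 1);
output2 = sols(:, 2);
```

fsolve accepts a matrix-valued x0 and returns a solution of the same shape, so each row ends up holding one voxel's [x1 x2].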

Similarly, vectorization of the fine method should also be possible; just use the 'ArrayValued' option to integral() as shown above:

F = [I_high_vector - integral(@(E) S_high(E) .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)),...
                              0, E_high,...
                              'ArrayValued', true);
    I_low_vector   - integral(@(E) S_low(E) .* exp(x(:,1)./E.^3 + x(:,2).*fKN(E)),...
                              0, E_low,...
                              'ArrayValued', true);
    ];

Jacobian

Taking derivatives of your function is quite easy. Since each residual has the form F = I - ∫[0, E_hi/lo] S(E) * exp(x_1/E^3 + x_2*f_KN(E)) dE , the derivatives are

    dF/dx_1 = -∫[0, E_hi/lo] S(E)/E^3 * exp(x_1/E^3 + x_2*f_KN(E)) dE
    dF/dx_2 = -∫[0, E_hi/lo] S(E)*f_KN(E) * exp(x_1/E^3 + x_2*f_KN(E)) dE

Your Jacobian will then be a 2×2 matrix:

J = [dF(1)/dx(1)  dF(1)/dx(2)
     dF(2)/dx(1)  dF(2)/dx(2)]; 

Don't forget the leading minus sign: since F = I_hi/lo - g(x) , it follows that dF/dx = -dg/dx .

Using one or both of the methods outlined above, you can implement a function to compute the Jacobian matrix and pass it on to fsolve via the 'SpecifyObjectiveGradient' option (set via optimoptions ). The 'CheckGradients' option will come in handy there.
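Combined with the rectangle method, an objective that also returns the Jacobian could be sketched like this (all names assumed; note how the tabulated integrands are reused for both F and J):

```matlab
function [F, J] = equationPairJac(x, I_high, I_low, S_high, S_low, Ehi, Elo, fKN)
% S_high/S_low: row vectors tabulated at the column grids Ehi/Elo.
% (In practice, start the grids slightly above E = 0 to keep 1/E^3 finite.)
dEhi = Ehi(2) - Ehi(1);
dElo = Elo(2) - Elo(1);

ghi = exp(x(1)./Ehi.^3 + x(2)*fKN(Ehi));    % tabulated integrands
glo = exp(x(1)./Elo.^3 + x(2)*fKN(Elo));

F = [I_high - (S_high * ghi) * dEhi
     I_low  - (S_low  * glo) * dElo];

if nargout > 1                              % Jacobian requested by fsolve
    J = -[(S_high * (ghi ./ Ehi.^3)) * dEhi, (S_high * (ghi .* fKN(Ehi))) * dEhi
          (S_low  * (glo ./ Elo.^3)) * dElo, (S_low  * (glo .* fKN(Elo))) * dElo];
end
end
```

Enable it with optimoptions('fsolve', 'SpecifyObjectiveGradient', true) , and verify the analytic Jacobian once against finite differences with 'CheckGradients' set to true before timing anything.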

Because fsolve usually spends the vast majority of its time computing the Jacobian via finite differences, supplying a value for it manually will normally speed the algorithm up tremendously .

It will be faster, because

  1. fsolve doesn't have to do extra function evaluations to do the finite differences
  2. the convergence rate will increase due to the improved precision of the Jacobian

Especially if you use the rectangle method or trapz like above, you can reuse many of the computations you've already done for the function values themselves, meaning, even more speed-up.

Rody's answer was the correct one. Supplying the Jacobian was the single largest factor: especially with the vectorized version, there were 3 orders of magnitude of difference in speed between supplying the Jacobian and not.

I had trouble finding information about this subject online, so I'll spell it out here for future reference: it is possible to vectorize independent parallel equations with fsolve() with great gains.

I also did some work on inlining fsolve(). After supplying the Jacobian and being smarter about the equations, the serial version of my code was mostly overhead at ~1*10^-3 s per voxel. At that point, most of the time inside the function was spent passing around an options struct and creating error messages that are never sent, plus lots of unused machinery, presumably for the other optimization algorithms inside the function (Levenberg-Marquardt in my case). I successfully butchered fsolve and some of the functions it calls, dropping the time to ~1*10^-4 s per voxel on my machine. So if you are stuck with a serial implementation, e.g. because you have to rely on the previous results, it's quite possible to inline fsolve() with good results.

The vectorized version provided the best results in my case, with ~5*10^-5 s per voxel.
