
fmincon optimization with a user-defined MATLAB function

Is it possible to use the optimization function fmincon with a user-defined MATLAB function?

I wrote a function to which I pass a few constant parameters (real or complex), and for now, every time I change these parameters, the result changes (you don't say).

[output1, output2] = my_function(input1,input2,input3,input4)

I saw that the fmincon function can find the optimum result subject to a given constraint. Let's say I want to find the optimum output by acting only on input1 and keeping all the other inputs constant. Is it possible to define something like

fmincon(@(input1)my_function,[1,2],[],mean)

for input1 ranging from 1 to 2, aiming for the best value mean, where mean is the mean value of some other results.

I know this is quite a vague question, but I'm not able to give a minimal example, since the function does a lot of things.

The first attempt, with multiple outputs, gave me the error "Only functions can return multiple values".

Then I tried with only one output, and if I use

output1 = @(input1)function(input2,input3);
fmincon(@output1,[1,2],[],mean)

I get the error

Error: "output1" was previously used as a variable, conflicting with its use here as the name of a function or command. See "How MATLAB Recognizes Command Syntax" in the MATLAB documentation for details.

With fmincon(@my_function,[1,2],[],mean) I get "Not enough input arguments".

The input should be used in your function definition - read up on how anonymous functions are written. You don't have to use anonymous functions to define the actual objective function ( myFunction below); you can use functions in their own file. The key is that the objective function should return a scalar to be minimised.
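
If you go the file route, a minimal sketch (the file name myObjective.m is my own choice, not from the original post), equivalent to the anonymous-function version in the example below, would be:

% Contents of myObjective.m -- an objective function in its own file,
% taking one array input and returning the scalar to be minimised
function f = myObjective( P )
    % P(1) plays the role of x, P(2) the role of y
    f = (P(1)-1).^2 + (P(2)-2).^2;
end

You would then pass it to fmincon as @myObjective instead of an anonymous handle.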

Here is a very simple example, using fmincon to find the minimum of myFunction, based on the initial guess [1.5, 1.5].

% myFunction is minimised when x=1, y=2
myFunction = @(x,y) (x-1).^2 + (y-2).^2;
% Define the optimisation function.
% This should take one input (can be an array) 
% and output a scalar to be minimised
optimFunc = @(P) myFunction( P(1), P(2) );

% Use fmincon to find the optimum solution, based on some initial guess.
% The two empty matrices are the (unused) linear inequality constraints
% A and b; fmincon requires at least these four inputs.
optimSoln = fmincon( optimFunc, [1.5, 1.5], [], [] );

% >> optimSoln
% optimSoln =
%     0.999999990065893   1.999999988824129

% Optimal x = optimSoln(1), optimal y = optimSoln(2);

You can see the calculated optimum isn't exactly [1,2], but it's within the default optimality tolerance. You can change the options for the fmincon solver - read the documentation.
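
For example, a minimal sketch of tightening the tolerance (the OptimalityTolerance option name assumes a reasonably recent Optimization Toolbox release; older versions use TolFun):

% Build an options object with a tighter first-order optimality tolerance
opts = optimoptions( 'fmincon', 'OptimalityTolerance', 1e-12 );
% Options are the 10th input, so pass empties for the unused constraints
optimSoln = fmincon( optimFunc, [1.5, 1.5], [], [], [], [], [], [], [], opts );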


If you wanted to keep y=1 as a constant, you just need to update the function definition:

% We only want solutions with y=1
optimFunc_y1 = @(P) myFunction( P(1), 1 ); % y=1 always
% Find the new optimal solution (again passing empty A and b)
optimSoln_y1 = fmincon( optimFunc_y1, 1.5, [], [] );

% >> optimSoln_y1
% optimSoln_y1 = 
%    0.999999990065893
% Optimal x when y=1 is optimSoln_y1

You can add inequality constraints using the A, b, Aeq and beq inputs to fmincon, but that's too broad to go into here; please refer to the docs.
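
Bounds, however, map directly onto the original question of letting input1 range from 1 to 2: a minimal sketch using the lb and ub inputs (the 7th and 8th arguments to fmincon), reusing optimFunc_y1 from above, would be:

% Restrict the single free variable to the interval [1, 2]
lb = 1;   % lower bound
ub = 2;   % upper bound
% Empty matrices skip the linear inequality and equality constraints
optimSoln_bounded = fmincon( optimFunc_y1, 1.5, [], [], [], [], lb, ub );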


Note that you're using the keyword function in a way which is invalid syntax. I've instead used valid variable names for the functions in my demo.
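
Putting this together for the original problem, something along these lines might work. This is only a sketch: my_function, its fixed inputs input2 to input4 and the target value target_mean are placeholders taken from (or guessed at from) the question, and they must already exist in the workspace when the anonymous function is created.

% Fix input2..input4, vary only input1, and use only the first output of
% my_function. Here we minimise the distance between output1 and some
% target value -- one reading of "the best value mean" in the question.
objective = @(input1) abs( my_function(input1, input2, input3, input4) - target_mean );
% Search for input1 in the range [1, 2], starting from 1.5
best_input1 = fmincon( objective, 1.5, [], [], [], [], 1, 2 );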
