fmincon solution does not differ from initial guess if I provide gradient
Davide Manfredo
on 9 Apr 2024
Commented: Davide Manfredo
on 10 Apr 2024
Hello everyone,
I am currently facing a problem when providing the analytic gradient of the objective function to be minimised.
I need to minimise a functional depending on a state variable vector of length 3*N+4, with N in the order of 10. I also need some linear equality constraints to hold, so I impose them by providing a suitable matrix Aeq and vector beq.
I noticed that the solution is not satisfactory, hence I decided to provide the analytic gradient of the functional. In order to do so, I define my objective function as
function [f,g]=objective_function(x,parameters)
where f is the scalar objective value and g is the gradient vector of size 3*N+4. In my main script, so that the objective depends only on x, I define the function handle
function_to_minimise=@(x) objective_function(x,given_parameters)
where given_parameters is defined in the main script.
Of course, I also set the following option
options=optimoptions('fmincon','GradObj','on');
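For clarity, here is a minimal sketch of the setup described above (the objective body, parameters, Aeq, beq and x0 are placeholders, not my actual problem):
% Placeholder problem data, only to illustrate the call structure
N = 10;
n = 3*N + 4;                                % length of the state vector
given_parameters = struct('c', ones(n,1));
Aeq = ones(1, n);                           % example linear equality constraint
beq = 1;
x0 = zeros(n, 1);                           % initial guess
function_to_minimise = @(x) objective_function(x, given_parameters);
options = optimoptions('fmincon', 'GradObj', 'on');
[x_opt, fval, exitflag] = fmincon(function_to_minimise, x0, ...
    [], [], Aeq, beq, [], [], [], options);
% Objective returning value and analytic gradient (in objective_function.m,
% or as a local function at the end of the script)
function [f, g] = objective_function(x, parameters)
    f = 0.5*sum((x - parameters.c).^2);     % placeholder functional
    if nargout > 1
        g = x - parameters.c;               % analytic gradient, size 3*N+4
    end
end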
If I don't provide the gradient, the solution I obtain is not satisfactory: it is not consistent with the expected results and the theory.
If I do provide the gradient, fmincon stops after 1 or 2 iterations and the result is identical to the initial guess. To check whether the analytic gradient is correct, I compared it with fmincon's numerical gradient for some well-known cases, and the results are the same, so I concluded that the analytic gradient I provide is correct.
Notes:
- Since I only have linear equality constraints, I do not have to provide constraint gradients.
- I tried to optimise the functional with all possible
Is there something that I am missing which might cause this problem?
P.S. I hope I gave enough details and that my explanation is clear enough.
Accepted Answer
Bruno Luong
on 9 Apr 2024
To get more information, set the option 'CheckGradients' to true and check the exitflag (the third output of fmincon).
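Something along these lines (a sketch; function_to_minimise, x0, Aeq and beq are whatever you already have):
options = optimoptions('fmincon', ...
    'SpecifyObjectiveGradient', true, ...
    'CheckGradients', true);      % compares your analytic gradient with finite differences
[x_opt, fval, exitflag, output] = fmincon(function_to_minimise, x0, ...
    [], [], Aeq, beq, [], [], [], options);
disp(exitflag)        % e.g. 2 means the step became smaller than StepTolerance
disp(output.message)  % text explanation of why fmincon stopped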
3 Comments
Bruno Luong
on 10 Apr 2024
Edited: Bruno Luong
on 10 Apr 2024
"Is there a way to solve this problem with fmincon?"
No. fmincon is a solver that assumes a C1 (continuously differentiable) objective function and constraints; you cannot relax that.
However, if you have a term whose derivative jumps, you might tweak it to make it C1. For example, replace abs(x) by
x.^2 ./ sqrt(x.^2 + epsilon) to make the function smooth around x = 0.
Or replace a step function (if/else) by a logistic function ("soft" logical).
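For instance, a sketch of both smoothing ideas (epsilon and k are tuning parameters you would have to choose for your problem):
epsilon = 1e-6;                                   % smaller = closer to abs(x), but less smooth
k = 100;                                          % larger = sharper "soft" step
smooth_abs = @(x) x.^2 ./ sqrt(x.^2 + epsilon);   % C1 replacement for abs(x)
soft_step  = @(x, a) 1 ./ (1 + exp(-k*(x - a)));  % C1 replacement for the step (x > a)
% e.g. (x > 0).*g1(x) + (x <= 0).*g2(x) becomes
%      soft_step(x,0).*g1(x) + (1 - soft_step(x,0)).*g2(x)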
More Answers (1)
Torsten
on 9 Apr 2024
Moved: Torsten
on 9 Apr 2024
Use
SpecifyObjectiveGradient
Gradient for the objective function defined by the user. See the description of fun to see how to define the gradient in fun. The default, false, causes fmincon to estimate gradients using finite differences. Set to true to have fmincon use a user-defined gradient of the objective function. To use the 'trust-region-reflective' algorithm, you must provide the gradient, and set SpecifyObjectiveGradient to true.
For optimset, the name is GradObj and the values are 'on' or 'off'. See Current and Legacy Option Names.
instead of 'GradObj','on' if you use "optimoptions" and not "optimset".
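i.e., with optimoptions the current name is set like this:
options = optimoptions('fmincon', 'SpecifyObjectiveGradient', true);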
1 Comment
Bruno Luong
on 9 Apr 2024
Shouldn't be a problem
"optimoptions accepts both legacy and current names"