When the Solver Might Have Succeeded

Final Point Equals Initial Point

The initial point seems to be a local minimum or solution because the first-order optimality measure is close to 0. You might be unhappy with this result, since the solver did not improve your initial point.

If you are unsure that the initial point is truly a local minimum, try:

  1. Starting from different points — see Change the Initial Point.

  2. Checking that your objective and constraints are defined correctly (for example, do they return the correct values at some points?) — see Check your Objective and Constraint Functions. Check that an infeasible point does not cause an error in your functions; see Iterations Can Violate Constraints.

  3. Changing tolerances, such as OptimalityTolerance, ConstraintTolerance, and StepTolerance — see Use Appropriate Tolerances.

  4. Scaling your problem so each coordinate has about the same effect — see Rescale the Problem.

  5. Providing gradient and Hessian information — see Provide Analytic Gradients or Jacobian and Provide a Hessian.
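Suggestion 1 can be sketched as a loop over several start points. The objective here is the one used in the examples later on this page; the alternative start points are illustrative choices, not recommendations:

```matlab
% Objective from the examples later on this page
funh = @(x)log(1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2);

% Try several initial points (illustrative) and keep the best result
starts = {[-1;2], [3;-3], [0;0]};
bestfval = Inf;
for k = 1:numel(starts)
    [x,fval] = fminunc(funh,starts{k});
    if fval < bestfval
        bestfval = fval;
        xbest = x;
    end
end
```

If the runs converge to different points, compare their objective values and first-order optimality measures before accepting any of them.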

Local Minimum Possible

The solver might have reached a local minimum, but cannot be certain because the first-order optimality measure is not less than the OptimalityTolerance tolerance. (To learn more about the first-order optimality measure, see First-Order Optimality Measure.) To determine whether the reported solution is reliable, consider the following suggestions.

1. Nonsmooth Functions
2. Rerun Starting at Final Point
3. Try a Different Algorithm
4. Change Tolerances
5. Rescale the Problem
6. Check Nearby Points
7. Change Finite Differencing Options
8. Provide Analytic Gradients or Jacobian
9. Provide a Hessian

1. Nonsmooth Functions

If you try to minimize a nonsmooth function, or have nonsmooth constraints, “Local Minimum Possible” can be the best exit message. This is because the first-order optimality conditions do not apply at a nonsmooth point.

To satisfy yourself that the solution is adequate, try to Check Nearby Points.

2. Rerun Starting at Final Point

Restarting an optimization at the final point can lead to a solution with a better first-order optimality measure. A better (lower) first-order optimality measure gives you more reason to believe that the answer is reliable.

For example, consider the following minimization problem, taken from the example Using Symbolic Mathematics with Optimization Toolbox Solvers:

options = optimoptions('fminunc','Algorithm','quasi-newton');
funh = @(x)log(1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2);
[xfinal,fval,exitflag] = fminunc(funh,[-1;2],options)

Local minimum possible.

fminunc stopped because it cannot decrease the 
objective function along the current search direction.

xfinal =
    1.3333
    1.0370

fval =
  8.5265e-014

exitflag =
     5

The exit flag value of 5 indicates that the first-order optimality measure was above the function tolerance. Run the minimization again starting from xfinal:

[xfinal2,fval2,exitflag2] = fminunc(funh,xfinal,options)

Local minimum found.

Optimization completed because the size of the gradient is 
less than the default value of the function tolerance.

xfinal2 =
    1.3333
    1.0370

fval2 =
  6.5281e-014

exitflag2 =
     1

The local minimum is “found,” not “possible,” and the exitflag is 1, not 5. The two solutions are virtually identical. Yet the second run has a more satisfactory exit message, since the first-order optimality measure was low enough: 7.5996e-007, instead of 3.9674e-006.

3. Try a Different Algorithm

Many solvers give you a choice of algorithm. Different algorithms can use different stopping criteria.

For example, Rerun Starting at Final Point returns exitflag 5 from the first run. This run uses the quasi-newton algorithm.

The trust-region algorithm requires a user-supplied gradient. betopt.m contains a calculation of the objective function and gradient:

function [f,gradf] = betopt(x)
% Objective function and its gradient
g = 1 + (x(1) - 4/3)^2 + 3*(x(2) - (x(1)^3 - x(1)))^2;
f = log(g);
gradf(1) = 2*(x(1) - 4/3) + 6*(x(2) - (x(1)^3 - x(1)))*(1 - 3*x(1)^2);
gradf(1) = gradf(1)/g;
gradf(2) = 6*(x(2) - (x(1)^3 - x(1)))/g;

Running the optimization using the trust-region algorithm results in a different exitflag:

options = optimoptions('fminunc','Algorithm','trust-region','SpecifyObjectiveGradient',true);
[xfinal3,fval3,exitflag3] = fminunc(@betopt,[-1;2],options)

Local minimum possible.

fminunc stopped because the final change in function value 
relative to its initial value is less than the default value 
of the function tolerance.

xfinal3 =
    1.3333
    1.0370

fval3 =
  7.6659e-012

exitflag3 =
     3

The exit condition is better than the quasi-newton condition, though it is still not the best. Rerunning the algorithm from the final point produces a better point, with an extremely small first-order optimality measure:

[xfinal4,fval4,eflag4,output4] = fminunc(@betopt,xfinal3,options)

Local minimum found.

Optimization completed because the size of the gradient is 
less than the default value of the function tolerance.

xfinal4 =

    1.3333
    1.0370

fval4 =
     0

eflag4 =
     1

output4 = 
         iterations: 1
          funcCount: 2
       cgiterations: 1
      firstorderopt: 7.5497e-11
          algorithm: 'trust-region'
            message: 'Local minimum found.

Optimization completed because the size o...'
    constrviolation: []

4. Change Tolerances

Sometimes tightening or loosening tolerances leads to a more satisfactory result. For example, choose a smaller value of OptimalityTolerance in the Try a Different Algorithm section:

options = optimoptions('fminunc','Algorithm','trust-region',...
    'OptimalityTolerance',1e-8,'SpecifyObjectiveGradient',true); % default=1e-6
[xfinal3,fval3,eflag3,output3] = fminunc(@betopt,[-1;2],options)

Local minimum found.

Optimization completed because the size of the gradient is 
less than the selected value of the function tolerance.

xfinal3 =
    1.3333
    1.0370

fval3 =
     0

eflag3 =
     1

output3 = 
         iterations: 15
          funcCount: 16
       cgiterations: 12
      firstorderopt: 7.5497e-11
          algorithm: 'trust-region'
            message: 'Local minimum found.

Optimization completed because the size...'
    constrviolation: []

fminunc took one more iteration than before, arriving at a better solution.

5. Rescale the Problem

Try to have each coordinate give about the same effect on the objective and constraint functions by scaling and centering. For examples, see Center and Scale Your Problem.
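As a sketch, suppose one variable is typically of order 1e6 and another of order 1e-3. You can optimize over scaled variables and undo the scaling afterward; the magnitudes here, and the names myfun and x0, are illustrative:

```matlab
c = [1e6; 1e-3];               % typical magnitudes of the variables (illustrative)
scaledfun = @(y) myfun(c.*y);  % myfun is your original objective function
y0 = x0 ./ c;                  % scaled initial point; entries are now of order 1
[yfinal,fval] = fminunc(scaledfun,y0);
xfinal = c .* yfinal;          % map the solution back to the original coordinates
```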

6. Check Nearby Points

Evaluate your objective function and constraints, if they exist, at points near the final point. If the final point is a local minimum, nearby feasible points have larger objective function values. See Check Nearby Points for an example.
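A minimal sketch of such a check, using the funh objective and xfinal point from the examples on this page (the perturbation size and number of samples are illustrative):

```matlab
% At a local minimum, nearby points should not have smaller objective values
delta = 1e-4;                            % perturbation size (illustrative)
fbest = funh(xfinal);
for k = 1:10
    xnear = xfinal + delta*randn(size(xfinal));
    if funh(xnear) < fbest
        fprintf('Nearby point has a lower objective value: %g\n',funh(xnear));
    end
end
```

For constrained problems, discard perturbed points that violate the constraints before comparing objective values.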

If you have a Global Optimization Toolbox license, try running the patternsearch (Global Optimization Toolbox) solver from the final point. patternsearch examines nearby points, and accepts all types of constraints.

7. Change Finite Differencing Options

Central finite differences require roughly twice as many function evaluations as forward differences, but can be much more accurate. Use central differences when your problem can have high curvature.

To choose central differences at the command line, use optimoptions to set 'FiniteDifferenceType' to 'central', instead of the default 'forward'.
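For example, shown here for fmincon (the option applies to other gradient-based solvers as well):

```matlab
options = optimoptions('fmincon','FiniteDifferenceType','central'); % default is 'forward'
```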

8. Provide Analytic Gradients or Jacobian

If you do not provide gradients or Jacobians, solvers estimate gradients and Jacobians by finite differences. Therefore, providing these derivatives can save computational time, and can lead to increased accuracy. The problem-based approach can provide gradients automatically; see Automatic Differentiation in Optimization Toolbox.

For constrained problems, providing a gradient has another advantage. A solver can reach a point x such that x is feasible, but finite differences around x always lead to an infeasible point. In this case, a solver can fail or halt prematurely. Providing a gradient allows a solver to proceed.

Provide gradients or Jacobians in the files for your objective function and nonlinear constraint functions. For details of the syntax, see Writing Scalar Objective Functions, Writing Vector and Matrix Objective Functions, and Nonlinear Constraints.

To check that your gradient or Jacobian function is correct, use the checkGradients function, as described in Checking Validity of Gradients or Jacobians.
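For example, to check the gradient of the betopt function defined earlier against a finite-difference approximation (assuming your release includes checkGradients):

```matlab
% Returns true when the supplied gradient matches finite differences
valid = checkGradients(@betopt,[-1;2]);
```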

If you have a Symbolic Math Toolbox™ license, you can calculate gradients and Hessians programmatically. For an example, see Calculate Gradients and Hessians Using Symbolic Math Toolbox.

For examples using gradients and Jacobians, see Minimization with Gradient and Hessian, Nonlinear Constraints with Gradients, Calculate Gradients and Hessians Using Symbolic Math Toolbox, Solve Nonlinear System Without and Including Jacobian, and Large Sparse System of Nonlinear Equations with Jacobian. For automatic differentiation in the problem-based approach, see Effect of Automatic Differentiation in Problem-Based Optimization.

9. Provide a Hessian

Solvers often run more reliably and with fewer iterations when you supply a Hessian.

Several solvers and algorithms accept Hessians; for the list and syntax details, see Including Gradients and Hessians.

If you have a Symbolic Math Toolbox license, you can calculate gradients and Hessians programmatically. For an example, see Calculate Gradients and Hessians Using Symbolic Math Toolbox. To provide a Hessian in the problem-based approach, see Supply Derivatives in Problem-Based Workflow.

The example in Calculate Gradients and Hessians Using Symbolic Math Toolbox shows fmincon taking 77 iterations without a Hessian, but only 19 iterations with a Hessian.
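As a sketch of supplying a Hessian to fminunc, the following file defines the Rosenbrock function together with its gradient and Hessian (a standard illustration, not part of the example cited above):

```matlab
function [f,g,H] = rosenboth(x)
% Rosenbrock function, its gradient, and its Hessian
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
      200*(x(2) - x(1)^2)];
H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
     -400*x(1),                   200];
```

With the trust-region algorithm, set HessianFcn to 'objective' so that fminunc uses the Hessian returned as the third output of the objective function:

```matlab
options = optimoptions('fminunc','Algorithm','trust-region',...
    'SpecifyObjectiveGradient',true,'HessianFcn','objective');
[x,fval] = fminunc(@rosenboth,[-1;2],options);
```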