Does intlinprog find a local minimum or global minimum?

I have a problem that can be described mathematically as a MILP. However, it is essential that I find the global optimum (if it exists) and not just a feasible point. I understand that the relevant MATLAB function is intlinprog. According to the MATLAB documentation (https://www.mathworks.com/help/optim/ug/local-vs-global-optima.html), linprog guarantees a global optimum and not just a local one. My question is: does intlinprog guarantee the same or not?

Accepted Answer

In ideal math, yes. However, real-world computers can't do ideal math, so you can be significantly off from the global solution depending on how you set your tolerances. This is illustrated in the example below. The ideal global minimum is y=1, but intlinprog finds y=0, because it is within the default numerical tolerances.
x=optimvar('x',1,'type','integer','LowerBound',0);
y=optimvar('y',1,'type','integer','LowerBound',0);
prob=optimproblem('Objective',y, 'Constraints', y>=x+1e-4);
sol=solve(prob)
Solving problem using intlinprog.
LP: Optimal objective value is 1.000000.
Optimal solution found.
Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.RelativeGapTolerance = 0.0001 (the default value). The intcon variables are integer within tolerance, options.IntegerTolerance = 1e-05 (the default value).
sol = struct with fields:
    x: 0
    y: 0

5 Comments

Thanks for the response. Is there a way to adjust the options (tolerances, time, number of iterations, etc.) to guarantee the global optimum, given that my system has around 10-30 decision variables (half of which are integers)?
Not that I know of. The global minimum itself is an uncertain thing that can vary discontinuously due to small uncertainties in the problem data. Consider the two solutions below. If the difference between b1 and b2 is meant to be significant, then the solver has correctly given us different solutions. However, if the difference between b1 and b2 is just the result of floating point calculation errors, and they are really supposed to be the same, then one of the solutions is wrong. We can't know which one, because we don't know which b is the correct one.
x=optimvar('x',1,'type','integer','LowerBound',0);
y=optimvar('y',1,'type','integer','LowerBound',0);
b1=1e-7;
b2=0;
prob1=optimproblem('Objective',y,'Constraints', y>=x+b1);
prob2=optimproblem('Objective',y, 'Constraints', y>=x+b2);
opts=optimoptions('intlinprog','ConstraintTolerance',1e-8,'IntegerTolerance',1e-6);
solve(prob1,'options',opts)
Solving problem using intlinprog.
LP: Optimal objective value is 1.000000.
Optimal solution found.
Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 0 (the default value). The intcon variables are integer within tolerance, options.IntegerTolerance = 1e-06 (the selected value).
ans = struct with fields:
    x: 0
    y: 1
solve(prob2,'options',opts)
Solving problem using intlinprog.
LP: Optimal objective value is 0.000000.
Optimal solution found.
Intlinprog stopped at the root node because the objective value is within a gap tolerance of the optimal value, options.AbsoluteGapTolerance = 0 (the default value). The intcon variables are integer within tolerance, options.IntegerTolerance = 1e-06 (the selected value).
ans = struct with fields:
    x: 0
    y: 0
Since intlinprog still lives in a double-precision world, you can never set tolerances that will ensure the kind of problem @Matt J pointed out never occurs.
My alternative approach is a 'brute-force' method along these lines: I list all possible combinations of the integer variables (each one ranges from -2 to 2 or narrower). There are about 5-10 of these variables, so I end up with somewhere between 5,000 and 500,000 combinations. Then, using a for loop, I fix each combination of the integer variables, solve the remaining problem with linprog, and store the optimum of each one. At the end, the global optimum would be the minimum of these optima. Would this be more reliable than intlinprog for a problem of this scale?
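For concreteness, the enumeration described above might be sketched as follows. This is only a sketch under assumed problem data: the objective vectors fi (integer block) and fc (continuous block) and the constraint matrices Ai, Ac with right-hand side b are hypothetical placeholders, not part of the original question.

```matlab
% Brute-force sketch (hypothetical data fi, fc, Ai, Ac, b assumed defined):
% minimize fi'*xi + fc'*xc  subject to  Ai*xi + Ac*xc <= b,  xi integer.
intRange = -2:2;                          % each integer variable in -2..2
nInt = 5;                                 % example: 5 integer variables
grids = repmat({intRange}, 1, nInt);
[G{1:nInt}] = ndgrid(grids{:});           % all 5^5 combinations
combos = cell2mat(cellfun(@(g) g(:), G, 'UniformOutput', false));

bestVal = inf; bestSol = [];
for k = 1:size(combos, 1)
    xi = combos(k, :)';
    % Fix the integers and solve the remaining LP in the continuous block.
    [xc, fval, exitflag] = linprog(fc, Ac, b - Ai*xi);
    if exitflag == 1 && fi'*xi + fval < bestVal
        bestVal = fi'*xi + fval;          % keep the best optimum so far
        bestSol = [xi; xc];
    end
end
```

Note that each inner linprog call is still subject to the same floating-point feasibility tolerances, which is the crux of the reply below.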
I don't think so, because let's apply your proposal to my earlier example, adding an upper bound of 2 on both x and y.
x=optimvar('x',1,'type','integer','LowerBound',0,'UpperBound',2);
y=optimvar('y',1,'type','integer','LowerBound',0,'UpperBound',2);
prob=optimproblem('Objective',y, 'Constraints', y>=x+1e-8);
In this case, there are no non-integer variables, so your method reduces to simply evaluating all 9 combinations 0<=(x,y)<=2.
But how will you decide whether the combination (x,y)=(0,0) is supposed to be feasible or not, bearing in mind that the 1e-8 might just be floating point noise? The decision you make will change the solution and its optimum value by 1.


More Answers (2)

MILP is an NP-hard problem. All solvers rely on heuristic rules, and the global optimum cannot be guaranteed for larger problems.
Another issue is that it is easy to formulate a problem with multiple solutions, all equally good. intlinprog should find one of them, but any solution is as good as another. These will typically lie along a constraint boundary, or parallel to one. So is that a global solution or a local one? It depends on how you choose to define what a local solution means to you.
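A minimal sketch of such a tie, using a hypothetical toy problem: minimize x+y over binary x, y with x+y >= 1. Both (1,0) and (0,1) achieve the optimal value 1, and which one intlinprog returns is an implementation detail, not a "better" optimum.

```matlab
% Two equally good optima lying on the constraint boundary x + y = 1.
x = optimvar('x', 'Type', 'integer', 'LowerBound', 0, 'UpperBound', 1);
y = optimvar('y', 'Type', 'integer', 'LowerBound', 0, 'UpperBound', 1);
prob = optimproblem('Objective', x + y, 'Constraints', x + y >= 1);
sol = solve(prob);   % returns one of the tied optimal solutions
```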

Asked: 8 Nov 2021

Edited: 8 Nov 2021
