Solving an optimization problem in fmincon with nonlinear constraints

Hi All,
I have the following dynamical systems.
Equation 1:
Equation 2:
Equation 1 represents the exact dynamics of the system, and equation 2 is the approximate dynamics that should reproduce the same time-course profiles as equation 1 after optimization. For the sake of understanding, let us assume equation 1 gives the experimental values and equation 2 gives the predicted values.
The objective function is a cost that minimizes the difference between the state variables ϕ (exact) and ϕ̃ (approximate) by optimizing the parameters D̂, which are the control variables.
I'm trying to solve this as a parameter estimation problem with nonlinear equality constraints. The nonlinear constraints have been written by discretizing equation (2) using trapezoidal collocation.
I have defined a function `[c, ceq] = defects(Dhat)` that accepts the parameter vector being optimized as its input argument; the state values needed to set up the constraints are obtained by calling a nested function defined within `defects`. This nested function integrates the approximate dynamics with a call of the form `ode15s(@(t,phi) fun(t, phi, Dhat), ...)`. A rough sketch of what such defect constraints look like is given below.
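As an illustration only (not the attached code): a minimal sketch of trapezoidal defect constraints, assuming N time nodes t, an N-by-nStates matrix of collocation states phi_tilde, and a right-hand-side function fun(t, phi, Dhat). All names are placeholders; in practice t, phi_tilde, and fun would come from the enclosing workspace or an anonymous function, since fmincon only passes the decision vector to the constraint function.

function [c, ceq] = defects_sketch(Dhat, t, phi_tilde, fun)
% Trapezoidal collocation: on each interval [t(k), t(k+1)] the states must
% satisfy phi(k+1) - phi(k) = (h/2)*(f(k) + f(k+1)), where f = dphi/dt.
N = numel(t);
f = zeros(size(phi_tilde));
for k = 1:N
    f(k,:) = fun(t(k), phi_tilde(k,:).', Dhat).';   % dynamics at each node
end
h   = diff(t(:));                                    % interval lengths, (N-1)-by-1
def = phi_tilde(2:end,:) - phi_tilde(1:end-1,:) ...
      - (h/2).*(f(1:end-1,:) + f(2:end,:));          % defect on each interval
ceq = def(:);                                        % equality constraints for fmincon
c   = [];                                            % no inequality constraints
end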
I've implemented the above; fmincon carries out a few iterations and then a dimension error pops up. I am not sure why this occurs after a few iterations and not at the first iteration itself.
Error using cellfun
All of the input arguments must be of the same size and shape.
Previous inputs had size 1 in dimension 1. Input #4 has size 51
Error in cse_03_19_20/defects (line 117)
f = cellfun(@predicted,num2cell(t),num2cell(phi_tilde,2),num2cell(Dhat_mat,2),'UniformOutput',false);
Error in barrier
Error in fmincon (line 824)
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = barrier(funfcn,X,A,B,Aeq,Beq,l,u,confcn,options.HessFcn, ...
Error in cse_03_19_20 (line 9)
Dhat= fmincon(@objfun,Dhat0,[],[],[],[],[],[],@defects)
Could someone help me find out what leads to the dimension error after a few iterations? Please find the code attached.
EDIT: I've updated the attached file and also used dbstop if error as suggested below.
Now the error that occurs is:
Matrix dimensions must agree.
Error in cse_03_19_20/objfun (line 31)
f = (phi - phi_tilde).*(phi - phi_tilde);
Error in barrier
Error in fmincon (line 824)
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = barrier(funfcn,X,A,B,Aeq,Beq,l,u,confcn,options.HessFcn, ...
Error in cse_03_19_20 (line 10)
Dhat = fmincon(@objfun,Dhat0,[],[],[],[],[],[],[], opts)
But the same code gives an optimal solution when lsqnonlin is used instead of fmincon.
Any suggestions on why fmincon fails to find an optimal solution would be helpful.
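For reference, the two solvers expect the objective in different forms: lsqnonlin wants the vector of residuals and forms the sum of squares internally, while fmincon needs the scalar cost itself. A hedged sketch of the difference (phi_model and phi_exact are placeholder names, not the attached code):

% lsqnonlin minimizes sum(res(Dhat).^2) itself, so return the residual vector
res_fun  = @(Dhat) reshape(phi_model(Dhat) - phi_exact, [], 1);
Dhat_lsq = lsqnonlin(res_fun, Dhat0);

% fmincon minimizes a scalar, so wrap the same residuals in a sum of squares
obj_fun  = @(Dhat) sum(res_fun(Dhat).^2);
Dhat_fmc = fmincon(obj_fun, Dhat0, [],[],[],[],[],[], @defects);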
  20 Comments
Deepa Maheshvare on 29 Mar 2020
@Ameer
Thank you. I checked the default algorithm used by both fmincon and lsqnonlin
fmincon: interior-point (default)
lsqnonlin: trust-region-reflective (default)
I tried to use trust-region-reflective in fmincon (because the trust-region-reflective algorithm used in lsqnonlin gives more accurate predictions), but I couldn't. In the documentation on `Choosing the Algorithm` it is mentioned that "'trust-region-reflective' requires you to provide a gradient".
I am not sure how to provide gradient for my objective function.
For instance, I now have 10 variables. Should I compute the partial derivative of the objective function with respect to each of the 10 variables?
If yes, this will be a vector [∂f/∂x₁, ∂f/∂x₂, …, ∂f/∂x₁₀].
Is there a way to do this in MATLAB automatically? And how should these gradients be evaluated?
Any explanation in this direction will be really helpful.
Ameer Hamza on 29 Mar 2020
Theoretically, you can write down your objective function and analytically find its gradient using a bit of multivariate calculus and linear algebra. However, I don't think it will significantly improve the performance, since fmincon and lsqnonlin already give solutions of the same order of magnitude.
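If you do want to experiment with supplying a gradient, fmincon expects the objective to return it as a second output, together with the SpecifyObjectiveGradient option. A minimal sketch only, not the attached code: residuals and residual_jacobian are hypothetical helpers, and note that 'trust-region-reflective' in fmincon only accepts bound or linear equality constraints, so the nonlinear defects constraints could not be passed with it.

% in the calling script (lb, ub are hypothetical bound vectors):
opts = optimoptions('fmincon', ...
    'Algorithm', 'trust-region-reflective', ...
    'SpecifyObjectiveGradient', true);
Dhat = fmincon(@objfun_with_grad, Dhat0, [],[],[],[], lb, ub, [], opts);

% in its own file, or as a local function at the end of the script:
function [f, g] = objfun_with_grad(D)
    r = residuals(D);           % m-by-1 residual vector (hypothetical helper)
    J = residual_jacobian(D);   % m-by-n Jacobian, dr_i/dD_j (hypothetical helper)
    f = r.' * r;                % scalar sum of squares
    g = 2 * J.' * r;            % gradient of the sum of squares w.r.t. D
end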


Accepted Answer

Walter Roberson on 28 Mar 2020
Warning: Matrix is singular, close to singular or badly scaled. Results may be inaccurate. RCOND = NaN.
> In ode15s (line 589)
In cse_03_19_20/objfun (line 32)
In barrier
In fmincon (line 824)
In cse_03_19_20 (line 14)
(a whole bunch of those, then)
Warning: Failure at t=2.205524e-02. Unable to meet integration tolerances without reducing the step size below the smallest value allowed
(5.551115e-17) at time t.
> In ode15s (line 668)
In cse_03_19_20/objfun (line 32)
In barrier
In fmincon (line 824)
In cse_03_19_20 (line 14)
Matrix dimensions must agree.
Error in cse_03_19_20/objfun (line 37)
f = (phi - phi_tilde).*(phi - phi_tilde);
Error in barrier
Error in fmincon (line 824)
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = barrier(funfcn,X,A,B,Aeq,Beq,l,u,confcn,options.HessFcn, ...
Error in cse_03_19_20 (line 14)
Dhat = fmincon(@objfun,Dhat0,[],[],[],[],[],[],[], opts)
37 f = (phi - phi_tilde).*(phi - phi_tilde);
Notice the failure at t=2.205524e-02. Because of that, the ode15s call is returning early, before it has processed all 51 timestamps that you requested output at. The first call for exact solutions did all 51, but the second call for approximate only gets 3 of them. When you do phi - phi_tilde under that circumstance, you are subtracting arrays of different sizes.
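One way to keep the objective well defined when ode15s stops early is to check how many of the requested time points came back before forming the difference. A rough sketch only, not the attached code (tspan, phi0, and phi are placeholder names for the requested time grid, initial state, and exact 51-point solution):

[t_out, phi_tilde] = ode15s(@(t,x) fun(t, x, Dhat), tspan, phi0);
if size(phi_tilde, 1) < numel(tspan)
    % Integration failed before the final time: return a large finite cost so
    % the solver moves away from this Dhat instead of erroring on a size mismatch.
    f = 1e10;
    return
end
f = sum(sum((phi - phi_tilde).^2));   % usual sum-of-squares objective

Returning a constant penalty is crude (its gradient is zero there), but it avoids the hard error; bounding Dhat or tightening the integration tolerances are alternatives.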
  12 Comments
Deepa Maheshvare on 12 Apr 2020
Thanks a lot for sharing details. Could you please offer some suggestions on how to proceed further?
Walter Roberson on 12 Apr 2020
There isn't any "further" to proceed to. You pass all 5000's into your function and you see that you get exactly 0 back. You examine your function and see that you are effectively calculating a norm, and you observe that norms can never be negative, so you realize that your function can never return a negative value, and therefore 0 is the optimum for it.
If your goal is to optimize this function, then you take the 5000's and you are done with it.
If your goal is to have this function as a sample problem to demonstrate how to get the optimization tools to find global minima, then you also need to observe that the options controlling tolerances make a notable difference in when the solver gives up. If you did not know that a perfect fit was possible, and you were getting results from your function on the order of 1e-21 with all the values near 5000, how would you know whether to give up or not? If you want the numeric optimizers to prove that they have reached a global minimum, then you need to give up on that idea: symbolic calculation is the only hope of proving that you have reached a global minimum.
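For completeness, the stopping tolerances mentioned here are set through optimoptions; the values below are only illustrative, not recommendations:

opts = optimoptions('fmincon', ...
    'OptimalityTolerance',    1e-12, ...  % first-order optimality stopping test
    'StepTolerance',          1e-14, ...  % smallest step before the solver stops
    'ConstraintTolerance',    1e-10, ...  % allowed constraint violation
    'MaxFunctionEvaluations', 1e5);
Dhat = fmincon(@objfun, Dhat0, [],[],[],[],[],[], @defects, opts);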


More Answers (0)
