Hello, I am working on an optimal control problem that involves differential equations as nonlinear constraints. How do I use the fmincon format to apply those constraints?

[Attachment: ProblemStatement.png]
1 Comment
Brian Kaplinger on 28 Mar 2022 (edited 28 Mar 2022)
To my knowledge, there is no built-in way to get fmincon to do what you want. As a driver for parameter optimization, fmincon seeks a vector input that results in a local extremal value for an objective function.
The traditional formulation of an optimal control problem applies differential (or integral, depending on the form) equations as a continuous constraint. To apply fmincon, you need to transcribe your continuous-function problem into a discretized form, where the state values (x, y, z, phi, gamma, V) are listed at individual times. The input vector would become the values of the control functions (nv, nh, T, Pbatt) at those specific times, and the state constraints (Eqns. 12) would then be enforced at the discrete times. So, for example, if you divided the time interval (0, t_f) into 100 time steps, you would have 100 different points at which the state constraints were evaluated, and the input variable into the objective function for fmincon (the vector it's trying to determine) would have dimension 400, assuming the controls are all scalars. This appears to be a UAS problem in scalar form, given the flight equations used in Eqns. 11. This is not the only way to do it: in some problems, you want the optimizer to guess the actual state values and then back the control out of the trajectory. There are also hybrid approaches.
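To make the bookkeeping concrete, here is a minimal sketch of the control-only version of that idea. Everything named here (uasDynamics, runningCost, the initial state x0) is a hypothetical placeholder standing in for your Eqns. 11 and your cost integrand, not anything from your actual problem: the decision vector stacks the four controls at N time nodes, the states are recovered by numerically integrating the flight equations, and the cost integral is approximated with the trapezoid rule.

% Sketch only: z stacks the controls [nv; nh; T; Pbatt] sampled at N nodes,
% so numel(z) = 4*N.  uasDynamics and runningCost are hypothetical
% placeholders for Eqns. 11 and the cost integrand.
function J = uasObjective(z, t, x0)
    N     = numel(t);
    nv    = z(1:N);           % vertical load factor at the nodes
    nh    = z(N+1:2*N);       % horizontal load factor
    T     = z(2*N+1:3*N);     % thrust
    Pbatt = z(3*N+1:4*N);     % battery power

    % Interpolate the discrete controls so the integrator can evaluate
    % them at arbitrary times while propagating the flight equations.
    u = @(tq) [interp1(t, nv, tq); interp1(t, nh, tq); ...
               interp1(t, T,  tq); interp1(t, Pbatt, tq)];

    [~, X] = ode45(@(tt, xx) uasDynamics(tt, xx, u(tt)), t, x0);

    % Approximate the cost integral with the trapezoid rule;
    % runningCost should return a vector of length N.
    J = trapz(t, runningCost(t, X, [nv(:), nh(:), T(:), Pbatt(:)]));
end

You would then hand @(z) uasObjective(z, t, x0) to fmincon with a starting guess of length 4*N.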
How to apply the differential equation constraints now depends on your chosen transcription method (the form in which you convert the optimal control problem into a nonlinear programming/NLP problem, the kind fmincon can solve). One common method is collocation (https://en.wikipedia.org/wiki/Collocation_method), with probably the most common non-trivial variant being Hermite-Simpson collocation. A good review article was written by Matthew Kelly while he was doing his PhD at Cornell (https://epubs.siam.org/doi/pdf/10.1137/16M1062569); it's written as a tutorial and is meant to be easy to follow along with. Note that, in choosing how to discretize the problem, you also need to decide how to discretize the objective function (most objectives compute integrals, and how you do that is different for continuous vs. discrete functions). So a Hermite-Simpson collocation method would use a cubic (Hermite) spline to approximate the trajectory of the state variables, while using Simpson's quadrature to compute the integral needed in the objective function.

With this kind of transcription, there are two general ways to enforce the differential equation constraints. If the optimizer only guesses the control input, the differential equations applied as a numerical integration might produce a trajectory that violates your state constraints, so you either need to enforce those strictly in fmincon or add the state constraints as penalties to the objective function (only enforcing them approximately). Alternatively, if you are also having the routine determine the trajectory, then at a set of "collocation points" you choose (which don't have to be the state history points; often they are midway between them) you calculate the difference between the expected differential equation behavior and the actual behavior. These differences (the so-called "defects") would either be posed as equality constraints or added to the objective function (where they would only be approximately enforced); a sketch of the defect formulation follows below. Rigidly enforcing the differential constraints, either through direct numerical integration or by adding them as equality constraints, would only be recommended in circumstances where your initial guess is a feasible solution.
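As a hedged illustration of the defect approach, here is a sketch of an fmincon nonlcon function using trapezoidal collocation, which is the simplest variant (Hermite-Simpson replaces the trapezoid rule with Simpson quadrature and midpoint collocation points). The names, the 6-state/4-control split, and the node layout are placeholder assumptions, not your actual equations.

% Sketch of a nonlcon routine enforcing trapezoidal collocation defects
% as equality constraints.  z stacks the 6 states and 4 controls at the
% N nodes; uasDynamics is a hypothetical placeholder for Eqns. 11.
function [c, ceq] = collocationDefects(z, t)
    N  = numel(t);
    nx = 6;  nu = 4;
    X  = reshape(z(1:nx*N),     nx, N);   % states  x(:,k) at node k
    U  = reshape(z(nx*N+1:end), nu, N);   % controls u(:,k) at node k

    % State derivatives from the flight equations at every node.
    F = zeros(nx, N);
    for k = 1:N
        F(:, k) = uasDynamics(t(k), X(:, k), U(:, k));
    end

    % Trapezoidal defects: x(k+1) - x(k) - h/2 * (f(k) + f(k+1)) = 0.
    h   = diff(t);                                    % 1 x (N-1) steps
    ceq = X(:, 2:N) - X(:, 1:N-1) ...
          - (h/2) .* (F(:, 1:N-1) + F(:, 2:N));       % 6 x (N-1) defects
    ceq = ceq(:);

    % Path/state inequalities (Eqns. 12) would go here as g(x,u) <= 0
    % evaluated at each node; left empty in this sketch.
    c = [];
end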
So, the short answer is that the process of casting an optimal control problem in the form you give into an NLP problem that fmincon can solve is its own field of study. There are a bunch of different base methods, and a bunch more ways of implementing each of them. All of these require an initial guess at a solution, which is not always an easy problem in its own right. Similar approaches could be applied to the genetic algorithm ('ga'), particle swarm ('particleswarm'), or simulated annealing ('simulannealbnd') functions in Matlab, though some of these are less tolerant of nonlinear constraints. State constraints, especially those that are just bounds on the values of the states, are much easier to enforce (but only if you are directly computing the trajectory as well, which increases the dimensionality of the problem and the necessary computing resources).
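For completeness, here is a sketch of how the pieces above would be handed to fmincon. Again, all names (x_guess, u_guess, the bounds, discreteCost) are placeholder assumptions; discreteCost would be the quadrature-based version of your cost evaluated at the same nodes, and the stacking of z matches the reshape in collocationDefects.

% Sketch: assemble a crude constant initial guess and call fmincon.
% A better guess (e.g. from a simple simulated trajectory) usually
% matters a lot for convergence.
N  = 100;  nx = 6;  nu = 4;
t  = linspace(0, tf, N);

% Stacking: [x(:,1); ...; x(:,N); u(:,1); ...; u(:,N)], matching the
% reshape used in collocationDefects above.
z0 = [repmat(x_guess(:), N, 1); repmat(u_guess(:), N, 1)];

% Simple bounds double as the "easy" state/control box constraints.
lb = [repmat(x_lower(:), N, 1); repmat(u_lower(:), N, 1)];
ub = [repmat(x_upper(:), N, 1); repmat(u_upper(:), N, 1)];

opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
                    'MaxFunctionEvaluations', 1e5, 'Display', 'iter');

zopt = fmincon(@(z) discreteCost(z, t), z0, [], [], [], [], ...
               lb, ub, @(z) collocationDefects(z, t), opts);

The same nonlcon handle can also be passed to ga, which accepts nonlinear constraints; particleswarm and simulannealbnd only take bound constraints, which is the difference in tolerance mentioned above.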
Edit: I suppose I should mention that there are both community resources and commercial plugins for Matlab that can do the conversion you want if you are not trying to learn how to do it yourself. I refuse to endorse any specific product though.


Answers (0)
