Testing different optimization methods on a state space model by solving ODEs

I’m trying to test different optimization methods (including my own) by estimating the parameters of a state-space model. My initial idea was to include both the update rules and the optimization method in the ODE function, like below:
function dpdt = MyODE(t, p, u, y, Ts)
% p = [w11, w12, w21, w22, theta11, theta12, theta21, theta22]
% Map continuous time t to the index of the current sample (1-based).
adj_t = floor(t/Ts) + 1;   % also yields 1 at t == 0

pn1 = 1;                   % model constants
pn2 = 2;
lambda = 0.1;              % learning rate

dpdt = zeros(8, 1);
% State dynamics for the two filters driven by the sampled input u.
dpdt(1) = p(2);
dpdt(3) = p(4);
dpdt(2) = u(adj_t) - (p(2)*(pn1 + pn2) + p(1)*pn1*pn2);
dpdt(4) = u(adj_t) - (p(4)*(pn1 + pn2) + p(3)*pn1*pn2);
% Parameter update: gradient descent on the squared prediction error.
dpdt(5:end) = -lambda * (dot(p(5:end), p(1:4)) - y(adj_t)) .* p(1:4);
end
In this function, u, y, and Ts are passed in as shared parameters. The model is continuous, but the input signal u and output signal y are sampled, so I need to handle time explicitly. I call it with
[t, p] = ode23s(@(t, p) MyODE(t, p, u, y, Ts), tspan, p0);
But the problem is that the parameters I’m estimating are not converging. I’m sure the input and output data are fine, because I have used the functions in the System Identification Toolbox (idss, ssest, etc.) to successfully identify the parameters. I suspect that the way I handle time interferes with the ODE solver’s own adaptive step size. It would be great if anyone could give me some advice on this.
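One way to decouple the sampled signals from the solver’s internal time steps is to look them up by time inside the ODE function with a zero-order hold via interp1, instead of indexing with floor(t/Ts). A minimal sketch (u, y, Ts are the signals from the question; the ZOH lookup itself is my suggestion, not part of the original code):

```matlab
% Zero-order-hold lookup of the sampled signals.
% 'previous' holds each sample until the next sample instant, so the value
% seen by MyODE is piecewise constant no matter where the solver steps.
tu   = (0:numel(u)-1) * Ts;                        % sample instants
uOfT = @(t) interp1(tu, u, t, 'previous', u(end)); % hold last value beyond tu
yOfT = @(t) interp1(tu, y, t, 'previous', y(end));
% Inside MyODE, replace u(adj_t) with uOfT(t) and y(adj_t) with yOfT(t).
```

This keeps the right-hand side well defined at every t the solver visits, without the index arithmetic.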
I am also aware that this way of testing optimization methods is not ideal; several functions in the System Identification Toolbox (pem, idgrey, etc.) do this much better than what I have done. The problem is that I want to plot how the parameters change during the optimization process, yet these functions do not expose the parameter trajectory (at least not in an easy way). If I wanted to extend the optimization routines used by pem or ssest, where should I get started?

Accepted Answer

Rajiv Singh on 11 Aug 2020
The closest infrastructure to this style of numerical optimization is "grey-box identification"; see idgrey, idnlgrey, greyest, and nlgreyest for more information. The grey-box estimation approach essentially facilitates a numeric optimization scheme for ODE parameter estimation. You can set the estimation option "Display" to "full" to view the parameter values over the iterations. You can also try various solvers, including lsqnonlin and fmincon from Optimization Toolbox. Perhaps a way of studying the performance of your own algorithm would be to set up the grey-box estimation problem (choice of initial parameter values, their bounds, handling of initial conditions, choice of solver, etc.) in a similar way and compare cost values over iterations.
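The grey-box setup described above might look like the following sketch (myOde, par0, and the data variables are illustrative names, not from the thread):

```matlab
% Sketch of a grey-box estimation setup with iteration-by-iteration display.
% myOde is a user-written function returning the state-space matrices for a
% given parameter vector: [A, B, C, D] = myOde(par, Ts).
init_sys = idgrey(@myOde, par0, 'c');      % 'c' = continuous-time model
opt = greyestOptions('Display', 'full');   % print parameter values per iteration
opt.SearchMethod = 'lsqnonlin';            % swap in different solvers here
data = iddata(y, u, Ts);                   % measured output/input, sample time Ts
sys  = greyest(data, init_sys, opt);       % estimated model
```

Comparing the displayed cost values against those of a custom algorithm, with the same initial parameters and bounds, gives a like-for-like benchmark.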
  1 Comment
Yingnan Cui on 11 Aug 2020
I realize that what I am doing is actually online learning. The last four parameters (theta) are what I want to estimate, given the input u and output y. The input needs to be carefully constructed to satisfy the persistent-excitation condition, and the estimated parameters need to be available in a single run, so greyest probably doesn't help here.
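For reference, the online update in the question (dpdt(5:end)) applied discretely, sample by sample, looks like this sketch. It assumes a regressor matrix phi whose k-th row corresponds to p(1:4) at sample k; phi and the variable names are my assumptions:

```matlab
% Minimal online gradient-descent sketch, one update per sample, recording
% the parameter trajectory so convergence can be plotted.
lambda    = 0.1;                 % learning rate
theta     = zeros(4, 1);         % parameters to estimate
N         = size(phi, 1);        % number of samples
thetaHist = zeros(N, 4);         % parameter trajectory
for k = 1:N
    e     = phi(k, :) * theta - y(k);          % prediction error at sample k
    theta = theta - lambda * e * phi(k, :).';  % one gradient step
    thetaHist(k, :) = theta.';
end
plot(thetaHist)                  % how the parameters evolve over the run
```

This makes the parameter trajectory directly available in one pass, which is the requirement that rules out greyest above.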


More Answers (0)
