Trouble adding input signals in Neural ODE training
Hello everyone,
The example deals with autonomous systems, and my goal is simply to move from dx/dt = A*x to a system with an input signal, dx/dt = A*x + B*u.
The data generation is no problem, as I simply replace:
trueModel = @(t,y) A*y;
[~, xTrain] = ode45(trueModel, t, x0, odeOptions);
with the following lines:
trueModel = @(t,y,u) A*y+B*u;
[~, xTrain] = ode45(@(t,y) trueModel(t,y,u), t, x0, odeOptions);
The example then uses a mini-batch size of 200, which gives a set of initial conditions x0 of size [2 200] and targets of size [2 200 40], since neuralOdeTimesteps = 40.
I have updated the code to split the inputs used for data generation, creating a variable inputs of size [1 200 40].
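For illustration, the splitting looks roughly like this (a simplified sketch; the names uTrain for the input samples and s for the batch start indices are placeholders, not the exact variables from my code):
% uTrain: input samples aligned with xTrain, size [1 numTimeSteps]
% s: randomly chosen starting index of each mini-batch trajectory
inputs = zeros(1, miniBatchSize, neuralOdeTimesteps);
for k = 1:miniBatchSize
    inputs(1, k, :) = uTrain(1, s(k) + (1:neuralOdeTimesteps));
end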
My problem comes in the modelLoss function, when calling the neural ODE model. During training, the state for each mini-batch is obtained with:
X = dlode45(@odeModel,tspan,X0,neuralOdeParameters,DataFormat="CB");
I was thinking of updating this call to pass the corresponding input:
X = dlode45(@(t,x,p) odeModel(t,x,p,u), tspan,X0,neuralOdeParameters,DataFormat="CB");
But I still don't know how to update the odeModel function itself to take the input sequence into account.
I hope this makes sense, and thank you in advance!
Answers (1)
Ben
on 17 Mar 2023
Hi,
What data do you have for your input signal?
If you can write a function for u(t), e.g. as a function_handle, then the @(t,x,p) odeModel(t,x,p,u) you use in dlode45 can pass u as a function_handle to odeModel, and you evaluate u(t) inside odeModel.
For example:
p.A = dlarray(randn(2));
p.B = dlarray(randn(2));
u = @(t) [sin(t);cos(t)];
odeModel = @(t,y,p,u) p.A*y + p.B*u(t); % note that u(t) is evaluated here
y0 = dlarray([1;-1]);
tspan = [0,1];
y = dlode45(@(t,y,p) odeModel(t,y,p,u), tspan, y0, p, DataFormat="CB");
If you only have samples of u, e.g. u(t_i) for some sample times t_i, then you need to define how odeModel should evaluate u(t) at an arbitrary time t in your tspan. A nice way to do this is with an interpolator, e.g. griddedInterpolant. The above example might look like this in that case:
p.A = dlarray(randn(2));
p.B = dlarray(randn(2));
% Suppose we only have samples of u(t) at 10 times t in [0,1]
sampleTimes = linspace(0,1,10);
sampleU = [sin(sampleTimes);cos(sampleTimes)];
% Since odeModel needs to evaluate u(t) at arbitrary t, create an interpolation of u over the samples
% Note that griddedInterpolant needs the first dimension of the sample values to have size equal to
% the number of sample times - so we have to do some transposes here.
uinterp = griddedInterpolant(sampleTimes,sampleU.');
u = @(t) uinterp(t).';
% Now you can evaluate u(t) at arbitrary t and follow the above example
test = u(0.123)
% Copying the above example with interpolated u
odeModel = @(t,y,p,u) p.A*y + p.B*u(t); % note that u(t) is evaluated here
y0 = dlarray([1;-1]);
tspan = [0,1];
y = dlode45(@(t,y,p) odeModel(t,y,p,u), tspan, y0, p, DataFormat="CB");
To use this in the example you need to pass the function_handle or interpolator for u down to the model function so that it can be used to parameterize the odeModel function as above. It's up to you to decide how u(t) should be used in the odeModel function: you could compute something like B*u(t) as above, pass u(t) through a separate neural network, or concatenate x and u(t) together and pass them through a neural network.
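As one possibility, a minimal sketch of the concatenation approach could look like the following (the field names fc1/fc2, the layer sizes, and the tanh activation are assumptions, not taken verbatim from the example):
function y = odeModel(t,y,theta,u)
    % Evaluate the input signal at the current solver time and replicate it
    % across the mini-batch so it can be concatenated with the state.
    ut = repmat(u(t), 1, size(y,2));
    z = cat(1, y, ut);                     % size [numStates+numInputs batch]
    % Two fully connected layers; fc1.Weights needs numStates+numInputs columns.
    z = tanh(theta.fc1.Weights*z + theta.fc1.Bias);
    y = theta.fc2.Weights*z + theta.fc2.Bias;
end
You would then call dlode45 exactly as in your question, i.e. X = dlode45(@(t,x,p) odeModel(t,x,p,u), tspan, X0, neuralOdeParameters, DataFormat="CB").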
Let me know if something here doesn't make sense.