AI with Model-Based Design: Reduced Order Modeling
Overview
High-fidelity models, such as those based on finite element analysis (FEA), computational fluid dynamics (CFD), and other computer-aided engineering (CAE) tools, can take hours or even days to simulate and are not suitable for all stages of development. For example, a finite element analysis model that is useful for detailed component design will be too slow to include in system-level simulations for verifying your control system or to perform system analyses that require many simulation runs. A high-fidelity model for analyzing NOx emissions will be too slow to run in real time in your embedded system. Does this mean you have to start from scratch to create faster approximations of your high-fidelity models? This is where reduced-order modeling (ROM) comes to the rescue. ROM is a set of computational techniques that helps you reuse your high-fidelity models to create faster-running, lower-fidelity approximations.
In this session, you will learn how to create AI-based reduced order models to replace a complex high-fidelity model of a jet engine blade. Using the new Simulink add-on for Reduced Order Modeling, see how you can perform a thorough design of experiments and use the resulting input-output data to train AI models using pre-configured templates for LSTMs, neural ODEs, and nonlinear ARX models. Learn how to integrate these AI models into your Simulink simulations for control design, Hardware-in-the-Loop (HIL) testing, or deployment to embedded systems for virtual sensor applications.
Highlights
- Creating AI-based reduced order models using the Reduced Order Modeler App
- Integrating trained AI models into Simulink for system-level simulation
- Generating optimized C code and performing HIL tests
About the Presenter
Kishen Mahadevan is a senior product manager in the controls marketing group at MathWorks, where his focus is on driving the promotion and adoption of controls products, and strategically partnering with development teams to steer the product roadmap. Prior to moving into product marketing, Kishen worked for two years as an application support engineer supporting customers with their workflows and questions related to Simulink products, with a focus on physical modeling, controls, and deep learning applications. Kishen holds an M.S. in electrical engineering with a specialization in control systems from Arizona State University and a B.E. in electrical and electronics engineering from Visvesvaraya Technological University in India.
Recorded: 17 Jul 2024
Hello, everyone. Welcome to the session on AI with Model-Based Design: Reduced Order Modeling. My name is Kishen Mahadevan. I'm a product manager within the controls marketing group at MathWorks.
I've been with MathWorks for close to six years now, and I work closely with the development team to help guide the roadmap of our controls and modeling tools. I have a background in electrical engineering with a specialization in controls and dynamics. In this session, we'll explore the benefits of replacing a complex, high-fidelity model with an AI-based reduced order model. Here, you can see a closed-loop model involving a jet engine blade, and we'll get to explore this in further detail in a few minutes.
This model includes components that represent the ambient conditions, such as the temperature and pressure surrounding the blade, a controller that controls the cooling temperature through the blade, and the jet engine blade component itself. The jet engine blade subsystem here solves a finite element analysis model to compute the displacement at the tip of the blade, due to a combination of thermal expansion and pressure, at every time step. This FEA model is slow to simulate and is not suitable for control design and hardware-in-the-loop testing.
So the goal here is to replace this model with an AI-based reduced order model, which can then be used for designing the controller that controls the cooling temperature through the blade and hence keeps the displacement of the tip within a specific threshold. Throughout this presentation, we'll show you how we got to the results you're seeing on your screen: how you can create these AI-based reduced order models and how you can leverage them for control design and hardware-in-the-loop testing.
At the end of the presentation, you should walk away knowing that, by creating these reduced order models, you can enable the reuse of full-order high-fidelity models for system-level simulation, hardware-in-the-loop testing, nonlinear control design, and virtual sensor modeling. And while doing so, you can also explore various techniques in MATLAB to find the method that best suits your specific application.
So let's talk about some of the common challenges that engineers face related to this topic. One common challenge is this: suppose you're given a high-fidelity model from the component modeling group within your organization, and you are tasked with performing hardware-in-the-loop testing with that model. Because the high-fidelity model takes a long time to run, it is not suitable for real-time hardware-in-the-loop testing. So how would you then go about running it in real time?
Next, you realize that you need to make this model run faster, but there is a wide range of methods available. How do you create a reduced order model that balances accuracy with training speed, inference speed, and interpretability, among other considerations? These are some of the common challenges that we've heard come up in this context.
Before diving deeper into the problem: we have mentioned words like models, components, system-level modeling, hardware-in-the-loop, and so on. So let's briefly recap what model-based design is. Broadly speaking, model-based design can be described as the systematic use of models throughout the development process.
At a high level, models are used to develop the system architecture, consisting of behavioral models and functional specifications that are associated with specific design components. Those components and systems can represent either the physical system, which can be a physics-based mechanical, electrical, hydraulic, thermal system, or an AI-based data-driven system or the algorithms and software used to control or manage that system, which may also be AI-enabled. As you build the components, you continuously test them, first through simulation and then adding more rigorous techniques and bringing in environment models as you go along and integrate those components into larger systems.
For today, we're going to be specifically addressing this red piece on the slide; more concretely, we'll learn how to leverage the AI workflow to integrate AI into model-based design. If we think of the elements you can have in a Simulink model, AI can play a relevant role. We can use AI for component modeling, especially for those very high-fidelity models that take a long time to simulate.
We can then use those data-driven components for hardware-in-the-loop testing and for system-level simulation and analysis. We might also use this approach when first-principles models cannot be obtained. Beyond that, AI can be used within the algorithms under development themselves, where some functionality might be difficult to implement with other methods. And eventually, we might want to deploy these algorithms.
The focus today is on AI for component modeling, specifically reduced order modeling. Let's first start with what reduced order modeling is. Reduced order modeling is a set of techniques for simplifying full-order high-fidelity models by reducing their computational complexity while preserving the dominant behavior of the model.
Now, why are these techniques important, and what are some of the major use cases? There may be several reasons why you would want to create reduced order models. One could be to enable system-level simulation by bringing reduced versions of third-party FEA or CFD models into Simulink.
Another use case is hardware-in-the-loop testing: by reducing a high-fidelity model, you may now enable it to run in real time for hardware-in-the-loop testing, which was not possible before. Yet another is control design: take the example of a nonlinear model predictive controller (MPC), where there is a need for a fast nonlinear prediction model that can be used in real time to solve an optimization problem, and where the controller must run in real time on a resource-constrained embedded system.
Other use cases involve developing virtual sensors or digital twins. For a virtual sensor, it could be that the signal you want to measure is not directly measurable, or that the sensor is too expensive. For the digital twin use case, you may want a reduced order model that is better suited to periodic updates, so that it keeps representing the operational asset. And finally, reduced order models help when you need to perform analysis or design studies on your model at a larger time scale than the one the model was originally implemented for.
So these are some of the use cases and motivations behind creating reduced order models. Next, let's talk a little about the how. One approach is to use AI-based, data-driven techniques, where you collect input/output data from a high-fidelity first-principles model and then use that data to train machine learning, deep learning, or system identification models. Now, are AI-based techniques the only options for reduced order modeling?
The answer is no. There are other techniques that you can leverage and that we offer as well. They include linearization-based methods: suppose you're working with a nonlinear Simulink model. You can linearize it at an operating point, reduce the number of states, and use the resulting linear model in the vicinity of that operating point to speed up simulation. Or, if you need to operate at multiple operating points, you can linearize at each of them to obtain what we call a linear parameter-varying (LPV) model that is valid across those operating conditions.
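To make the linear parameter-varying idea concrete, here is a minimal sketch in Python (not the MATLAB tooling this session uses, and with made-up matrices): two state matrices, obtained by linearizing at two operating points, are blended by a scheduling parameter.

```python
import numpy as np

# Two linear models obtained by linearizing a nonlinear plant at two
# operating points p = 0.0 and p = 1.0 (matrices are invented purely
# for illustration; in practice they come from linearization).
A0 = np.array([[-1.0, 0.0], [0.0, -2.0]])
A1 = np.array([[-3.0, 0.0], [0.0, -4.0]])

def lpv_A(p):
    """Interpolate the state matrix between the two operating points."""
    w = np.clip(p, 0.0, 1.0)          # scheduling weight in [0, 1]
    return (1.0 - w) * A0 + w * A1

# Halfway between the operating points, the model blends both linearizations.
A_mid = lpv_A(0.5)
```

Real LPV tooling handles many operating points and also interpolates input, output, and feedthrough matrices, but the scheduling idea is the same.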
There are also what we term model-based techniques, which rely on meshed FEA models to extract state-space representations that can then be used for system-level simulation and control design. While today's session focuses mainly on the AI-based reduced order modeling techniques, please be aware that we offer these other techniques too; if you need more information on them, feel free to reach out to us. So, focusing specifically on AI-based data-driven techniques, this is the example that we will be using today: we'll see how we can replace the high-fidelity finite element analysis model, the jet engine blade here, with an AI-based reduced order model.
So here is what the top-level Simulink model looks like. As you can see, there are components that represent the ambient conditions, such as temperature and pressure, along with the controller, the jet engine blade component, and the visualization. The objective in this model is to control the cooling through the engine blade such that the displacement at the tip of the blade stays within a specific threshold. So this is a closed-loop system.
Now, specifically talking about the jet engine blade: this component computes the displacement at the tip of the blade by solving transient heat equations to compute the temperature distribution on the blade. It also solves a structural equation to compute the deformation, that is, the maximum displacement, due to a combination of this thermal expansion and pressure. This system of equations is solved at every time step.
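For intuition about why this is expensive, here is a deliberately simplified Python sketch of a transient heat solve: a 1-D rod with explicit finite differences rather than the 3-D FEA blade model, with all values invented for illustration. Even this toy model must march thousands of small time steps, and the real blade model additionally performs a structural solve at each step.

```python
import numpy as np

# Illustrative 1-D transient heat conduction via explicit finite
# differences: dT/dt = alpha * d2T/dx2. A real FEA blade model solves
# a far larger 3-D system at every time step, which is what makes it slow.
alpha, L, n = 1e-4, 0.1, 51            # diffusivity, rod length, grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha               # stable explicit time step
T = np.full(n, 300.0)                  # initial temperature field (K)
T[0], T[-1] = 400.0, 350.0             # fixed boundary temperatures

for _ in range(2000):                  # march the transient solution in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 400.0, 350.0         # re-apply boundary conditions

# Interior temperatures relax toward a profile between the two boundaries.
```

The step size is limited by stability of the explicit scheme, so refining the mesh sharply increases the number of steps; implicit FEA solvers relax that limit but pay for it with a large linear solve per step.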
While in this case the finite element analysis model was developed using Partial Differential Equation Toolbox, you could also import components designed in third-party FEA/CFD tools into Simulink through FMUs or S-functions. Coming back to this jet engine blade component: in these situations, we see a high level of fidelity and detail. In this specific case, the FEA model in question takes on the order of seconds to solve both the transient thermal and the structural equations at every time step.
For example, the model takes approximately eight hours to simulate 5,000 seconds of simulation time. That just serves as a benchmark of what we're dealing with here. Hence, clearly, this model is not suitable for control design and hardware-in-the-loop testing, and that serves as our motivation for creating a reduced order model.
So in this example, we'll see how we can replace the complex jet engine blade component with a data-driven model trained using MATLAB. In this particular case, the inputs to the AI model will be the ambient temperature, ambient pressure, and cooling temperature values, and the output will be the maximum displacement. While here we're replacing the entire jet engine blade component with AI, in problems where you want to replace only some portion of a complex model, be aware that the trained AI model can coexist in Simulink with first-principles models that represent the other components of your aircraft.
Now, to come up with this AI model, we'll go through the steps of the AI-driven system design workflow. This workflow involves a data preparation step, where you design experiments and collect data from your model. Following that, you use the data to train algorithms and models; this is the modeling step.
Once the models are trained, you then simulate and test them, validating that the models perform as expected. And finally, you deploy those models. So this is the standard AI-based workflow.
In this context, we are thrilled to announce the launch of our new Simulink add-on for reduced order modeling, which facilitates this data-driven ROM workflow and addresses multiple challenges you may have faced in the past, such as having to set up design of experiments, data generation, and model training manually. Using this newly launched add-on, you can create AI-based reduced order models: you can interactively define signals and parameters from your model, set up design of experiments, and generate input/output data for training reduced order models.
You can train models such as neural state space, LSTM, and nonlinear ARX models using pre-configured templates. Once trained, you can bring those reduced order models into Simulink for control design, hardware-in-the-loop testing, system-level simulation, and so on. Or you can export the model for use outside of Simulink through Functional Mock-up Units (FMUs).
So let's start with the data preparation step. In terms of data preparation for training the model, there are various ways to get your hands on the data that you can use for training. You could be using a public data set, or you could be generating data, experimental data, from an actual physical system. Or you could also generate synthetic data from simulation models.
MATLAB and Simulink offer capabilities that allow you to do so in various ways. Today, we'll focus on data generated directly from a Simulink or Simscape model. For this particular example, we are going to use the Simulink model that contains the FEA component to generate synthetic data for training.
So the first step here is to design a set of experiments that we want to generate data for. We then run those experiments, or simulations, in our Simulink model and collect the data that we need. Let's see how we can perform this using the Reduced Order Modeler app that ships with the add-on.
You can install the add-on from within the Add-On Manager once you're in MATLAB. Once installed, you can open up this jet engine blade model, which is one of the examples included with the support package. The model that you saw is the high-fidelity model that computes the maximum displacement of the blade.
Once you have that model open, the next step is to open the Reduced Order Modeler app with the model. The first step once the app is open is to create a new session and specify the signals in your model that you want to vary. To do so, we click on Select Signals.
This opens the model that we associated with the session. We select signals such as ambient temperature, cooling, and pressure, and we can configure these as simulation inputs. You can think of these as the signals that you want to perturb or replace in the model so that, based on these perturbations, you can collect the corresponding ROM input and output data.
Next, we can go in and select these signals as ROM inputs. These are filtered versions of the initial signals, so that they represent physically realistic signals entering the blade. Once selected, they show up back in this configuration window, where you can configure them as your reduced order model inputs. And then finally, you can select the maximum displacement signal and specify that as your ROM output.
So here, you can select that as the ROM output. And this is how you go about configuring the signals from your model as simulation inputs, ROM inputs, and the ROM output. Once the signals are set up, the next step is to design our experiments, that is, to tell the tool how you want to vary the simulation inputs to gather data from the model.
So here, let's click on New Experiment. Once it's open, we'll leave the signal injection mode on the right set to Replace, and we'll keep the default setting that generates random pulses. Here, you have options to specify the pulse width and the number of pulses, which together determine the length of the randomly generated signal.
As you can see in this configuration window on the right, you can also enter the minimum and maximum values for each of these signal types, that is, the ranges for ambient temperature, cooling, and pressure. On clicking Apply, the scatter plots in between get updated, letting you visualize the input design space that we're exciting the model with. This visualization helps you get a sense of whether you're generating enough data to cover the design space and capture the essential dynamics.
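As a rough illustration of what such a random-pulse experiment produces, here is a small Python sketch. The actual signals come from the app; the function name, pulse counts, and ranges below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pulses(n_pulses, pulse_width, lo, hi):
    """Piecewise-constant excitation: n_pulses random levels drawn from
    [lo, hi], each held for pulse_width samples, mimicking the app's
    random-pulse signal injection."""
    levels = rng.uniform(lo, hi, size=n_pulses)
    return np.repeat(levels, pulse_width)

# Hypothetical ranges for the three simulation inputs of the blade model.
ambient = random_pulses(n_pulses=20, pulse_width=50, lo=270.0, hi=320.0)
cooling = random_pulses(n_pulses=20, pulse_width=50, lo=150.0, hi=250.0)
pressure = random_pulses(n_pulses=20, pulse_width=50, lo=0.5e5, hi=1.5e5)
```

Drawing the levels independently for each input is what spreads the samples over the design-space scatter plot the app shows; widening the ranges or adding pulses covers more of that space.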
Now that this experiment is set up, we'll repeat the same process to create another experiment, just to make sure we capture enough data. Once the experiments are loaded, the next step is to select the simulation options, where you can enable running these simulations in parallel to speed up the data generation process.
Then you can click on Run Simulations, which runs each of these created signals in your Simulink model and collects the corresponding ROM input and output data. In this case, I've skipped the data generation because it's a time-intensive process. Instead, I'm showing you the collected data directly, and this is what the tool looks like after the data is collected.
You now see that the top three plots correspond to the ROM input data we have collected: the ambient temperature, the cooling input, and the pressure input. We've also collected the corresponding maximum tip displacement that our FEA model produced for these input values. So we now have synthetically generated data that we can use for training these reduced order models.
On this note, I also want to make another point. While in this model we didn't have parameters we wanted to vary, you also have the option to vary parameters in your model. That lets you collect data across different operating and initial conditions, so that you end up with a dataset that is valid across multiple operating conditions and can use it for training your model.
So please be aware that we do have capabilities for varying parameters to cover data across operating conditions. This covers the data generation step. The next step is the training.
Going back to the data-driven approaches, we briefly mentioned AI-based, data-driven approaches. Within those, there are two categories you can leverage: static methods and dynamic methods.
Static methods include lookup tables, curve fitting, and related techniques. Dynamic methods include deep learning methods such as long short-term memory (LSTM) networks and neural state space, or neural ODE, models, as well as system identification techniques such as nonlinear ARX models. Today, we will focus on some of the dynamic methods listed here, but be aware that we have tools for static methods too; please feel free to reach out if you need more information.
Focusing on the dynamic methods, for the first example we are going to use an LSTM network to capture the time dependencies in our data. As a quick primer, a long short-term memory network is a type of recurrent neural network that can learn long-term dependencies between time steps of data. As previously mentioned, the inputs here are the ambient temperature, cooling temperature, and pressure, and the output of the network is the maximum displacement.
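To show what "learning long-term dependencies" means mechanically, here is a minimal Python sketch of a single LSTM cell stepped over a sequence. The weights are random rather than trained, and the layer sizes are illustrative; the point is that the cell state c carries information forward across time steps through the gates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates decide what to forget, what to write,
    and what to expose, letting the network carry long-term context."""
    n = h.size
    z = W @ x + U @ h + b              # pre-activations for all four gates
    i = sigmoid(z[0:n])                # input gate
    f = sigmoid(z[n:2*n])              # forget gate
    g = np.tanh(z[2*n:3*n])            # candidate cell update
    o = sigmoid(z[3*n:4*n])            # output gate
    c = f * c + i * g                  # new cell (long-term) state
    h = o * np.tanh(c)                 # new hidden state
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 3, 8                     # 3 inputs: temperature, cooling, pressure
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(100):                   # run a 100-step input sequence
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, U, b)
```

In a trained ROM, a final regression layer would map the hidden state h to the predicted maximum displacement at each time step.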
Let's now see how to train this model in the Reduced Order Modeler app. Back in the app, now that we have the data collected, we see that the model training templates are active. By templates, I mean the ones at the top: LSTM network, nonlinear ARX, and neural state space.
For this first case, let's click on the LSTM network template. Clicking on it opens a dialog asking which data you want to use for training the model. In this case, I'm selecting the data from both experiments to be carried over to the Experiment Manager.
Here, all the data is imported automatically behind the scenes, so you don't have to worry about including or pre-processing the data. The Experiment Manager lets us train different AI models by sweeping through a range of hyperparameter values, and it also allows us to evaluate and compare the performance of each of those trained models.
So here, let's specify the hyperparameters specific to our LSTM model: the required sample rate is 0.2. We can then specify the number of hidden units in the LSTM layer, somewhere between 20 and 80, and set the initial learn rate to a couple of values to see which of them gives us good results. Finally, we can leave the number of LSTM layers in the network at the default value. Once these hyperparameters are selected, you can set the training mode to Simultaneous, which means model training happens in parallel on all the workers in your system. So in this case, we select the Simultaneous mode, and we click on Run.
Running this, we see during the training process that four models are being trained in parallel; you see the running symbol for four of them. That's because I ran it in parallel with four workers active on my local machine. After the training process is complete, you will see a final table. I'm going to speed up this process here.
In the final table, each row indicates the hyperparameter values that were selected for that particular model training, and the table also provides the model performance metrics. You can then sort the trained models by test loss in ascending order, pick the model that has low values of both the test loss and the training loss, and export that model to the MATLAB workspace. Here, I'm saving it as trainingoutput_lstm.
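Conceptually, the sweep-and-select step that the Experiment Manager automates looks like the following Python sketch. The train_and_score function here is a made-up stand-in that returns an analytic score instead of running real training, purely to illustrate sorting a hyperparameter grid by test loss.

```python
import itertools
import numpy as np

def train_and_score(hidden_units, learn_rate):
    """Stand-in for a train-then-evaluate run. This hypothetical loss
    surface has its minimum near (60 hidden units, learn rate 1e-2)."""
    return (hidden_units - 60) ** 2 * 1e-4 + (np.log10(learn_rate) + 2) ** 2

# Sweep the grid of candidate hyperparameters, one trial per combination.
grid = itertools.product([20, 40, 60, 80], [1e-3, 1e-2])
results = [(h, lr, train_and_score(h, lr)) for h, lr in grid]

# Sort trials by test loss in ascending order and keep the best model.
results.sort(key=lambda r: r[2])
best_hidden, best_lr, best_loss = results[0]
```

The real tool additionally records training loss alongside test loss, runs the trials on parallel workers, and lets you export the winning model to the workspace.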
All right. As you can see, through these preconfigured templates (in this case, the LSTM template), we've made it easy for customers who are not deep learning experts to train LSTM networks for their applications without having to write complicated training loops. For those of you who are deep learning experts and would like additional flexibility, you can go back into the Experiment Manager and click on Edit Training Script, which gives you access and flexibility to make additional changes and customizations as you like. So this covers the LSTM training side of things.
An alternative approach to the LSTM model is to create a neural state space model. Following the trend, a quick primer on this model type: a neural state space model has the classical state-space structure, except that two feedforward networks learn the nonlinear state function F and the nonlinear output function G from data. This technique is also popularly known as a neural ODE in the deep learning community.
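As an illustration of that structure, here is a small Python sketch that rolls out x' = F(x, u), y = G(x) with tiny random-weight networks and forward Euler integration. The network sizes, step size, and weights are all made up; in the real workflow, F and G are trained from the collected data and integrated with a proper ODE solver.

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp(params, v):
    """Tiny feedforward network with one hidden tanh layer."""
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ v + b1) + b2

def make_params(n_in, n_hid, n_out):
    return (rng.standard_normal((n_hid, n_in)) * 0.1, np.zeros(n_hid),
            rng.standard_normal((n_out, n_hid)) * 0.1, np.zeros(n_out))

nx, nu, ny = 2, 3, 1                  # states, inputs (temp/cooling/pressure), output
F = make_params(nx + nu, 16, nx)      # state function network: dx/dt = F(x, u)
G = make_params(nx, 16, ny)           # output function network: y = G(x)

# Roll the continuous-time model forward with Euler steps
# (weights here are random, not trained).
dt, x = 0.1, np.zeros(nx)
ys = []
for _ in range(200):
    u = rng.standard_normal(nu)                   # exogenous inputs
    x = x + dt * mlp(F, np.concatenate([x, u]))   # integrate the state ODE
    ys.append(mlp(G, x)[0])                       # read out the output
ys = np.array(ys)
```

Because the dynamics are an explicit function F, this model form is also convenient for controllers that need Jacobians of the dynamics, which is relevant later when it serves as the prediction model for nonlinear MPC.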
We're going to follow much the same steps as with the LSTM, so let's get right into it. Here, with the templates active as in the previous case, let's click on the neural state space model and select the data from both experiments.
In this case, the only thing that changes is that I'm selecting Existing Project, so that I can bring up the template and perform the training in the same project where I performed my LSTM training. You can see on the left that the LSTM is already in place. As with the LSTM case, you specify hyperparameters specific to the neural state space model, such as the number of layers to use in the feedforward networks and the number of units in each of those layers.
In this case, we can list a few values to see whether a larger or smaller number of units gives us the right results. Finally, you can also specify the sample rate. Once the hyperparameters are set up, you can set the mode to Simultaneous to run this in parallel and start the training.
As with the previous case, I'm going to fast-forward this training process. Again, you see a results table where each row indicates the hyperparameter values that were selected for each model, along with the model performance metrics at the far right of the table. You can again sort the trained models by test MSE in ascending order and pick the model with low mean squared error on both the test and training data.
Once you've identified that model, you can export it to your MATLAB workspace; here, I'm going to name it NSS. And again, as previously mentioned, for those looking for additional customization, you can access the preconfigured template script, where you can adjust additional settings such as the maximum number of epochs during training, network initialization options, or the network architecture itself. We offer this additional customization for all of our templates.
In the interest of time, I won't be training the third model type here, but I want to briefly mention what it is. The third template that we offer is for a nonlinear ARX model. A nonlinear ARX model has a specific structure that lets you include insight and knowledge of your system while training, which makes the model more interpretable in comparison with the other techniques.
So please feel free to try this model out using the preconfigured template for your application. Specifically, the nonlinear function supported in this template's nonlinear ARX model is a sigmoid network, which captures the nonlinearities of your system. So this is the third template that we offer.
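To give a feel for the nonlinear ARX structure, here is a simplified Python sketch: lagged outputs and inputs form the regressors, which feed a bank of sigmoid units plus linear terms. Unlike the toolbox, which trains the sigmoid network jointly, this sketch fixes the sigmoid centers at random and fits only the output weights by least squares; the data-generating system is also invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generate data from a simple made-up nonlinear dynamic system.
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.5 * y[k-1] - 0.1 * y[k-2] + np.tanh(u[k-1])

# Nonlinear ARX structure: regressors are lagged outputs and inputs.
R = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # [y(k-1), y(k-2), u(k-1)]
target = y[2:]

# Sigmoid-network nonlinearity over the regressors. Centers/scales are
# fixed at random here; only the output weights are fit by least squares,
# a simplification of how the toolbox trains the network.
n_units = 30
A = rng.standard_normal((R.shape[1], n_units))
b = rng.standard_normal(n_units)
H = 1.0 / (1.0 + np.exp(-(R @ A + b)))            # sigmoid features
H = np.column_stack([H, R, np.ones(len(R))])      # plus linear + bias terms
w, *_ = np.linalg.lstsq(H, target, rcond=None)

pred = H @ w                                      # one-step-ahead prediction
mse = np.mean((pred - target) ** 2)
```

The interpretability mentioned above comes from the regressor side: you choose which lags and physical signals enter the model, so each regressor has a clear meaning even though the sigmoid network is a black box.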
So far, we've seen three different types of dynamic model templates available through the Reduced Order Modeler app that you can leverage for AI modeling. But we also know that the broader deep learning community is incredibly active, and new models are coming out all the time. Please be aware that we support importing models from TensorFlow, PyTorch, and other frameworks, so you have access to new models that you create outside of our framework too.
But you can bring them into our platform and work with them in MATLAB and Simulink. So we do have this interoperability framework that lets you import these models. All right.
So now, with a few AI models trained, the LSTM and the neural state space model, let's see how we can integrate them into Simulink and then use them for system-level simulation and testing.
We start by opening a Simulink model here; opening this model automatically runs a callback script that loads the required data. The data includes validation, or test, data, including the responses of the high-fidelity model; we load those responses directly because we don't want to run the high-fidelity model. The data also includes the normalization and denormalization constants that were used as part of the template.
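As a reminder of what those constants do, here is a tiny Python sketch of the usual pattern: signals are scaled with statistics from the training data, and the model's normalized prediction is mapped back to physical units. The variable names and values below are illustrative, not the ones the template actually uses.

```python
import numpy as np

# Example training-set displacement values (made up for illustration).
train_disp = np.array([1.2e-4, 1.5e-4, 1.1e-4, 1.8e-4])
mu, sigma = train_disp.mean(), train_disp.std()

def normalize(x):
    """Z-score a signal with statistics from the training data."""
    return (x - mu) / sigma

def denormalize(z):
    """Map a normalized model output back to physical units."""
    return z * sigma + mu

z = normalize(train_disp)          # what the AI model sees during training
roundtrip = denormalize(z)         # what the Simulink block reports back
```

Keeping the same constants at training and inference time matters: if the Simulink wrapper normalized with different statistics than the training script, the model would see out-of-distribution inputs and its predictions would be silently wrong.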
Here, we start with the LSTM block, where we simply provide the trained model exported from the Experiment Manager, if you remember. We do the same for the neural state space model: we just point to the exported model in the base workspace.
Once these are loaded, we simulate the model and compare it with the responses loaded from the high-fidelity model. We see that both the LSTM and the neural state space models are pretty close to the ground-truth data, with the LSTM model performing slightly better than the neural state space model in terms of accuracy. But there might be other attributes worth considering, and we'll take a look at that in just a few minutes.
Now that the models are integrated into Simulink, I ran the Simulink Profiler to get an idea of how fast they run on my desktop computer. The neural state-space model took approximately 0.03 seconds to simulate 5,000 seconds of operation.
Compare that with the benchmark from the problem statement: the high-fidelity model took close to eight hours to run the same 5,000 seconds of simulation. The trained neural state-space model, our reduced order model, therefore runs roughly a million times faster, completing the same 5,000-second simulation in about 0.03 seconds.
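The speedup claim is easy to verify with back-of-the-envelope arithmetic:

```python
# High-fidelity FEA model: ~8 hours of wall-clock time for 5,000 s of simulation.
hifi_seconds = 8 * 3600    # 28,800 s
rom_seconds = 0.03         # neural state-space ROM, measured with the Simulink Profiler

speedup = hifi_seconds / rom_seconds
print(f"speedup = {speedup:,.0f}x")  # 960,000x, i.e. roughly a million times faster
```

So "approximately a million times faster" is the 28,800 s / 0.03 s ratio, rounded up from 960,000.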
With the reduced order model available, we can use it for control design. The goal of the controller is to adjust the cooling temperature through the blade so that the blade's maximum displacement does not exceed a specific threshold; above that threshold, the blade might touch the engine casing, which could be catastrophic. In this example, the neural state-space model plays two roles: it serves as the reduced order model of the jet engine blade, and it also serves as the internal prediction model for the nonlinear model predictive controller, with the required Jacobian and state functions auto-generated for the controller design.
In the interest of time, I'm not going to cover that workflow here; for more information, I'm linking a video that explains in detail how to use a neural state-space model as the internal prediction model with nonlinear MPC. Finally, this is what the closed-loop simulation model with the controller looks like.
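To make the idea concrete, here is a heavily simplified Python sketch of predictive control with a surrogate plant model. A toy linear state-space model stands in for the trained neural state-space model, and the threshold, dynamics, and candidate inputs are all invented; the real workflow uses a nonlinear MPC with auto-generated Jacobians rather than this one-step search:

```python
import numpy as np

# Toy surrogate: x[k+1] = A x[k] + B u[k]; displacement y = C x.
# More cooling input u pushes displacement down (the sign of B is an assumption).
A = np.array([[0.9]])
B = np.array([[-0.5]])
C = np.array([[1.0]])
THRESHOLD = 1.0                          # max allowed blade displacement (invented)
candidates = np.linspace(0.0, 2.0, 21)   # admissible cooling commands

def step(x, u):
    return A @ x + B * u

def choose_u(x):
    """Pick the smallest cooling effort whose one-step prediction stays safe."""
    for u in candidates:
        if (C @ step(x, u))[0, 0] <= THRESHOLD:
            return u
    return candidates[-1]                # saturate if no candidate is feasible

x = np.array([[1.5]])                    # start above the threshold
for _ in range(20):
    x = step(x, choose_u(x))

assert (C @ x)[0, 0] <= THRESHOLD        # controller holds displacement in bounds
```

The structural point carries over to the real example: because the surrogate is cheap to evaluate, the controller can afford to query it repeatedly inside each control step, which is exactly why a fast ROM enables MPC where the FEA model could not.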
If we look inside the jet engine blade component, we see two implementations: the one on top is the high-fidelity model, which runs the script that solves the FEA model, and the one on the bottom is the neural state-space model, our reduced order model.
Going back to the root-level model, we first make sure the selected variant is the reduced order model. On running the simulation, the results show that the controller is active and working to reduce the cooling temperature so that the displacement doesn't cross the threshold. In this way, we were able to take a reduced order model, design a controller for it, and end up with a closed-loop, system-level simulation model.
Following the system-level model, I want to point out that we have a unique code generation framework that allows models developed in MATLAB or Simulink to be deployed anywhere without rewriting the original model. Automatic code generation also eliminates hand-coding errors and is a great value driver for any organization adopting it, saving time along the way.
With some deep learning models, especially very large networks, there is specialized hardware you might want to use, such as NVIDIA GPUs or specialized processors alongside CPUs. For those types of hardware, you'll want to use an optimization library to get the best performance.
GPU Coder and related products connect your models to those optimization libraries and leverage them to generate optimized code. But listening to your requests, we learned that using those libraries is not always what you want; sometimes you want to target a more generic CPU, including ARM Cortex-M devices. So we developed what we call library-free code generation for these deep learning models.
You can use this without an optimization library and get very portable code with no library dependencies. Let's take a look at how to generate this library-free code for the neural state-space model we trained. Here, we see the Simulink model with the trained neural state-space model, and we open the Simulink Coder app, which will help us generate the code we want.
The app offers a few options; we click Next to generate C code. For optimizations, our primary objective here is debugging, so I select Debugging and click Next. Clicking Next once more generates the code.
This produces a code generation report that we can use to take a deeper look at the generated C code. I'll fold all the generated functions so the code is easier to inspect. Expanding the model step function, you can see that part of it performs steps like normalization and other subsequent operations.
We also see a for loop that carries out the matrix operations taking place within the network architecture. That's how you generate library-free C code from the model in Simulink.
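Although the generated code is C, the computation inside that step function is conceptually just normalization followed by small matrix loops. Here is a minimal Python sketch of a one-hidden-layer tanh network evaluated the way loop-based generated code would evaluate it; all weights and statistics below are made up, and the real network architecture and values come from the trained neural state-space model:

```python
import numpy as np

# Hypothetical weights and input statistics, standing in for the trained network.
W1 = np.array([[0.5, -0.2], [0.1, 0.3]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.2])
mu, sigma = np.array([10.0, 20.0]), np.array([2.0, 4.0])

def model_step(u):
    z = (u - mu) / sigma                 # normalization, as seen in the report
    h = np.empty(2)
    for i in range(2):                   # the "for loop of matrix operations"
        acc = b1[i]
        for j in range(2):
            acc += W1[i, j] * z[j]
        h[i] = np.tanh(acc)
    return W2 @ h + b2                   # output layer (denormalization omitted)

y = model_step(np.array([10.0, 20.0]))   # this input normalizes to all zeros
```

Because the whole computation is explicit loops over fixed-size arrays, it maps directly onto portable C with no external library calls, which is what makes the library-free code deployable on generic CPUs.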
Next, we'll see how to deploy this deep learning model, the neural state-space model, to a Speedgoat machine for hardware-in-the-loop (HIL) testing. As a quick primer: HIL testing serves as a final functional test of the algorithm under design before moving into final system integration.
In HIL testing, we generate code for both the algorithm under design, in our case the controller, and the plant model. The plant code here includes the code of the trained neural network, and it runs on a real-time computer such as a Speedgoat machine to mimic the actual jet engine blade in operation. The controller code, in turn, is deployed to run on the target platform.
Simulink can then be used to monitor signals and adjust parameters on the deployed models. To briefly show what running hardware-in-the-loop testing looks like and how Simulink is used for monitoring, here is the same example: we see our Simulink model again, and we connect it to a Speedgoat baseline machine.
For that, I use the Simulink Real-Time app, which sets up the corresponding coder and compiler required to run this. As you can see, I'm connecting to the Speedgoat machine attached to my computer. On clicking Run on Target, code is generated for the reduced order model and runs on the Speedgoat machine.
Here, I'm also loading the responses from the high-fidelity model directly, because running the high-fidelity model in real time is not possible. The loaded response serves as a reference to compare against the running neural state-space model. That's why you see two curves in this graph: the two are quite close in accuracy, and the generated C code is running in real time on the Speedgoat machine. This is how you perform hardware-in-the-loop testing, with Simulink helping you monitor the signals.
Finally, beyond code generation and hardware-in-the-loop deployment, we can also use ROMs outside of Simulink in the development and operations stages. We've seen deployment to Speedgoat hardware in the previous video. Through Simulink Compiler, we can also export the plant model, in our case the trained neural state-space model, as an FMU for integration with third-party tools.
Additionally, trained deep learning models can be exported in the ONNX format for interoperability with other deep learning frameworks. As mentioned previously, you can leverage our code generation tools to generate code for target-specific hardware. And lastly, you can use MATLAB Compiler and Simulink Compiler to create standalone applications, perhaps to serve your digital twin needs.
The last thing I want to touch on before we move on is a look back at the two AI models we trained, the LSTM and the neural state-space model. We can evaluate them and make design trade-offs based on the specific requirements we may have. Bear in mind that these are just a representative sample of the available deep learning and system identification methods, the ones we used in today's example.
What I'd like to point out in this table is that choosing a model is not always about a single attribute such as accuracy. Other factors may matter too: training speed, that is, how long the model takes to train; interpretability; and deployment efficiency, which can include inference speed on a real-time machine and model size when deploying to a resource-constrained device.
The results listed here are specific to the jet engine blade example we ran; each problem will have its own trade-off table. So please keep these trade-offs in mind and pick a model based on the attributes that matter for your application.
Here is a user story from Subaru, who used AI-based surrogate models to reduce transmission control system analysis time. They created a deep learning surrogate model to replicate a third-party high-fidelity model, achieved a 99% reduction in calculation time, and used the surrogate to perform control design in Simulink. For more information, I've added a link to the user story; please feel free to take a look.
To wrap up: we have seen how reduced order modeling enables the reuse of full-order, high-fidelity models for control design and hardware-in-the-loop testing, as in today's example, and it can also be used for virtual sensor modeling and other applications. Along the way, we saw the different AI-based ROM techniques in MATLAB, so you can find the method that best suits your application. While we only covered AI-based techniques today, as pointed out earlier, we also offer non-AI techniques, such as linearization-based methods. For more information, please feel free to reach out.
Concretely, we addressed these key takeaways by looking at the recently shipped add-on, the Reduced Order Modeler app, which facilitates the creation of reduced order models using preconfigured templates. We walked through the workflow all the way from generating synthetic data, to model training, to importing the models back into Simulink for a variety of applications. Finally, we also saw how to generate C code and perform hardware-in-the-loop testing. So with that, thank you, everyone, for attending today.