      AI with Model-Based Design

      Model-Based Design, the systematic use of models throughout the development process, is a proven way of developing complex systems with efficiency and reduced risks. It has become an integral part of system development in almost all industries and is reaching new levels with the incorporation of AI models. However, designing systems with AI components has its own set of challenges. How can you combine the power of Model-Based Design with the benefits of AI? In this session, you will learn how you can incorporate AI in Model-Based Design and how MATLAB® and Simulink® help address the challenges of AI development for engineered systems.

      Published: 29 Apr 2024

      My name is Tianyi. I am a product marketing manager here at MathWorks, and I work on Simulink and model-based design. Today I have the pleasure of inviting my colleague Arkadiy to join us on today's livestream. Arkadiy, do you want to give a brief intro to the audience?

      Thanks, Tianyi. Yes. I'm the product manager at MathWorks. I work on controls products and on deep learning. And it's my pleasure to talk to you about using AI with model-based design. Excited.

      We're really happy about you joining us today, Arkadiy. And we're also really excited about you watching the stream right now. So today we're talking about AI. AI is a really hot topic nowadays. And we know that AI promises good results when it comes to solving tasks that are otherwise difficult or impractical using traditional techniques. And AI can be used to model complex algorithms. It can be used to capture complex non-linear relationships between different inputs and outputs.

      I don't know about you, Arkadiy. But to me, the amazing thing about AI is that you don't need to explicitly encode the algorithm using logic statements or describe the detailed system behavior using first principles or physical equations.

      Yeah, it holds a lot of promise. We'll hear about it.

      In a sense, AI does all of that for us through its learning process, as long as you make good design decisions about the model and feed it training data of sufficiently high quality and quantity. And given the advances in AI and the better results that we're seeing from AI models, there is an increasing need for engineers to integrate AI functionality into the systems that they're building, to either enhance or complement those systems.

      Yeah, absolutely.

      And these systems often include high-integrity systems that have to work under safety-critical scenarios. And some examples of these systems include cars, airplanes, medical devices, railway systems, renewable energy systems, among many others.

      And there is a set of unique challenges that comes with implementing AI in these systems. For example, how do you make sure that the AI component interacts and integrates well with the rest of the system components? How do you make sure that you can deploy the AI algorithm onto the embedded, resource-constrained processors that are often found in these systems? And how do you make sure that the AI functionality will work as intended when you move from the design environment to the operational environment?

      So these are some of the high-level questions that we will explore further in today's livestream. And by the end of today's session, we hope that you will have a better understanding of the different use cases of AI in the context of system design and be more familiar with the toolkits that MATLAB and Simulink offer for you to integrate AI into the systems that you build.

      Yeah, that all sounds good to me. Let's jump into it.

      So in terms of today's agenda, we'll first cover the three broad trends that are happening in the space of AI in systems. We'll then walk you through three practical AI techniques that you can start implementing today with model-based design. And then finally, we'll talk about why MATLAB and Simulink are such powerful tools for you to design AI functionality into systems. And without further ado, I'll hand it over to Arkadiy to talk about the landscape of AI in the context of systems.

      Yeah, that sounds good to me. Thanks. I'll borrow your mouse so I can advance the presentation. OK, so I'm a controls engineer, so I chose this schematic of a control system that might be familiar to those of you who are control engineers as well. And I'll use it to talk about how AI is used to enhance engineered systems.

      The complex engineered systems that MathWorks customers are creating drive, fly, sail, and provide medical services to humans, as Tianyi said. And if you think about them, a lot of them have many different components: mechanical components, electrical components, hydraulic components.

      And typically, you need embedded software that coordinates how all these different components work. So the software provides the control algorithms, and it also provides the higher-level trajectories, the reference commands, that our controllers need to follow. And if you think about the typical control system diagram that you see here, when we're developing such a system, we have to worry about the device that we're controlling, whether it's an airplane or a car.

      And typically, we need to have a good model of this device so that we can design our control system in simulation and test it in simulation to be sure that when we get to the actual device it's going to work as intended. It's not going to break. It's not going to do something unexpected. And it's not going to endanger humans.

      So there is this component that we need to worry about: an accurate model of the device that we're working with. And for that, AI can help us. If the data is coming from the actual hardware, we can use this data to create an accurate AI model that describes the dynamics of the system we're trying to control.

      Sometimes you would be using full-order, high-fidelity models that you develop in other tools, CFD or FEA tools. These models are very accurate; they're just too slow to be used in the system-level simulations that you need to run for overall system design and control system development. So you can take data from those full-order models and create fast-running approximations, reduced-order models. We'll talk about that. So that's plant modeling with AI.

      If you think about perception and sensing, of course, we need to know where our car is or what's the attitude of our airplane. And so for that, we use sensors, obviously. Sometimes we use cameras to get the visual indication of where we are. And AI can help us with that.

      So, for example, there is a lot of interest in using AI to supplement or replace hardware sensors that might be too expensive or unreliable. Instead, engineers want to use algorithms that can predict or estimate the values of signals of interest-- for example, NOx emissions or battery state of charge, quantities that are hard or expensive to measure with regular sensors.

      So we'll talk about this virtual sensor modeling use case. And finally, when we're thinking about our control algorithms or planning algorithms, that's where AI can help as well. We'll talk about a technique that's relevant here, an AI technique called reinforcement learning, which we support as well.

      So the second trend that I want to talk about is the interest that we see in deploying AI to embedded devices. I think we are all familiar with ChatGPT and large language models. And if you think about what it takes to train those huge models, that's shown in the upper row of this picture.

      You need big compute farms, maybe with a lot of GPUs. The performance needs to be very high, and you're learning billions of parameters. And it takes a small or medium power plant to provide power to these compute farms.

      For vision-based applications, people have been using GPUs, graphics processing units, and these are pretty powerful computing devices. But what we are seeing is increasing interest in edge AI, where AI models that are oftentimes smaller, with fewer parameters, need to run efficiently on resource-constrained devices such as microcontrollers, automotive ECUs, even FPGAs. We have to work with limited computational and memory budgets, and we also have to be power efficient.

      OK, the third trend that Tianyi already referred to is the increased focus on regulation and certification. A lot of our customers are developing automotive, aerospace, medical, industrial machinery, and other systems where it's important to make sure that the systems are safe, that they're not going to endanger humans. And in general, there are processes and procedures that engineers follow to make sure that the systems are certified for safety-critical use.

      And using AI in the design of those systems, or in the operation of the systems, introduces new challenges. How do we make sure that the algorithms are going to do what we expect them to do? They're black-box algorithms. So we are seeing an increasing number of government regulations and standards around AI.

      This is still an emerging research field that is developing fast. But we are keeping track of those developments, and we are providing capabilities to help our customers verify and even certify algorithms.

      So when it comes to high-integrity systems, model-based design has been a powerful and reliable framework for engineering complex systems for almost two decades. And we're seeing the application of model-based design across, as Arkadiy mentioned, many different industries-- for example, aerospace, automotive, energy production, and industrial machinery and automation, among many others that you're seeing here.

      And at the center of model-based design is a product called Simulink. So to help you understand what Simulink does, I can quickly take you to the Simulink product page to take a look. So Simulink is a simulation environment for modeling and simulating multi-domain dynamic systems.

      And Simulink supports model-based design through two main pillars: modeling and automation. With modeling, you build a virtual representation of your system without relying on costly and time-consuming physical prototypes. And by performing simulations and analyses on these models early and rapidly in your design process, you can quickly try out new design ideas and use simulations as fast, repeatable tests to guide your design process-- to fine-tune, optimize, and get your design ready for production.

      And when you're thinking about production-- that's where the automation part of model-based design comes in-- with production code generation, you can translate the model representation of your system into a software implementation. And you can use that software to target different devices. For example, you can generate C or C++ code to target microcontrollers, CUDA code to target GPUs, or structured text to target industrial controllers like PLCs.

      And you can also generate Verilog or VHDL code to target FPGA or ASIC devices. So you model your design once, and you have the flexibility to deploy the software everywhere. We also have a set of verification and validation tools within model-based design to help you detect and fix issues early in the process. So with automation of coding and verification, you can eliminate the manual errors usually associated with hand coding and shift your verification activities earlier, running them continuously throughout your design process.

      So since Simulink and model-based design are such powerful tools, they have seen wide adoption across many different applications. For example, you can use Simulink to design wireless communication systems, develop electrical technologies, design control systems, and design autonomous and robotic systems. And with the rise of AI, you can integrate AI algorithms such as machine learning and deep learning into system-level simulations.

      And here we can see an overview of the solutions that we have for AI with model-based design. We have categorized the use cases of AI in model-based design into three buckets: virtual sensor modeling; system identification and reduced-order modeling; and reinforcement learning.

      So Arkadiy touched upon all three of these use cases broadly in the introduction. And now I'm going to get deeper into the first use case, which is virtual sensor modeling. Virtual sensor modeling is using AI models to estimate signals of interest that would otherwise be challenging to obtain using physical sensors, or where a physical sensor would add too much cost or complexity to the design.

      So a good example is what we call battery state of charge estimation using virtual sensors. So the state of charge of batteries is a quantity that is very hard to directly measure.

      That's right.

      It requires elaborate lab instrumentation and setup. So what we can do is obtain the characteristics of the battery from lab data and then use that data to train an AI model so that it can make predictions when the algorithm is running on your battery-powered device.
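
      To make that training step concrete, here's a minimal sketch of what it could look like in MATLAB with Deep Learning Toolbox. The variable names, network size, and training settings are just illustrative, not from an actual battery dataset:

      % Lab data (illustrative): voltage, current, and temperature as
      % predictors; the measured state of charge as the target.
      predictors = [voltage, current, temperature];   % N-by-3 matrix
      targets    = socMeasured;                       % N-by-1 vector

      layers = [
          featureInputLayer(3, Normalization="zscore")
          fullyConnectedLayer(64)
          reluLayer
          fullyConnectedLayer(64)
          reluLayer
          fullyConnectedLayer(1)];

      options = trainingOptions("adam", MaxEpochs=50, Shuffle="every-epoch");

      % Train with a mean-squared-error regression loss.
      net = trainnet(predictors, targets, layers, "mse", options);

      % Estimate state of charge for one new sample ("BC" = batch-by-channel).
      socPredicted = predict(net, dlarray([3.9 1.2 25], "BC"));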

      So if I take my phone out of my pocket-- I'm sure a lot of you have smartphones as well. And you can take a look at the top right corner. There is a battery charge indicator. So right now it's saying I have 60% of charge left on my phone battery.

      And this quantity is most likely estimated by using a virtual sensor rather than directly measured using a physical sensor. And we're seeing this application not only with smartphones, smartwatches, and in a lot of other consumer electronic devices but also with larger-scale devices, like battery packs in electric vehicles. In order to produce an accurate estimate of the range of the vehicle, you need to have reliable readings of the battery state of charge information.

      It's not just an estimate. It's also something that the battery management system uses to make adjustments to how the battery operates. So you can sort of close your control loop.

      Yes. So speaking of battery management systems, the AI model does not exist alone. It has to be integrated within the larger context of other software components that are running inside of the system. With MATLAB and Simulink, you can integrate virtual sensor models with the rest of the system model for running simulations.

      So to give you a more concrete idea of what these software systems look like, let me get back to the presentation mode and show you what we have here, which is a closed-loop, system-level simulation model of the battery management system and the battery plant model. We can see on the right-hand side, we have the plant model that describes the detailed physical properties of the battery, including electrical and thermal properties, as well as the load conditions on the battery.

      And on the left-hand side, we can see the battery management software that has both supervisory control logic and closed-loop control logic. And we can see that the virtual sensor model, which is implemented using AI here, produces the state of charge estimate, which is critical information that is fed into the supervisory control logic for monitoring and switching between operating states of the battery--

      That's right.

      --to prevent the battery from overheating, overcharging, or overdischarging. And virtual sensors are only one application of the broader workflow that we call embedded AI: embedding AI into your system functionality with the ultimate goal of having the AI algorithm running, executing as code, on embedded devices.

      So in terms of this overall workflow, we first start with data preparation. With MATLAB, you have domain-specific toolboxes for data cleaning, data analysis, and data preprocessing, whether it's audio data, time series signal data, or image and video data. And after you have the data prepared, we move to the next stage, which is AI modeling. And we offer flexibility of choice for you to build your AI models, for example through the low-code AI modeling and training options that we provide through MATLAB apps and toolboxes.

      Or if you or others build or train the model with a third-party framework, like TensorFlow or PyTorch, you can easily import those models into the MATLAB and Simulink environment and then use them in the next stage, which is simulation and testing. So in Simulink, we provide blocks for you to integrate your AI models and easily set up your simulations to integrate the AI within the larger system model. And you can also perform validation and verification activities by running simulation-based testing, which we will get into a little bit later.

      And to get ready for deployment, you want to make sure that your model has the appropriate memory footprint and can run fast on your embedded processor. This is where you apply different compression techniques to get the model ready to run on your device. And you can generate code from your deep learning and machine learning models and deploy the code onto embedded hardware. The orange arrow indicates the different phases where model-based design comes in.

      So now I'm going to show you a quick demo of how you can import a third-party network into Simulink and set up and run your simulations. In the MATLAB environment, we can quickly launch the Deep Network Designer app by doing a quick search in the Apps tab, and it brings up the interface of the Deep Network Designer.

      So the Deep Network Designer is an environment where you can interactively assemble the network using layers from the layer library. Or you can see from here that there is an option for you to import networks built in other frameworks, for example, TensorFlow in this case. And we can hit the Import button. And then we can pick a file directory that contains the TensorFlow model files.

      And after clicking on the Import button, it's going to run some analyses on the third-party model, and it does the conversion process automatically. After you have gotten your model into the Deep Network Designer, you can easily visualize the structure of the network and inspect the architecture that way.
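
      If you prefer scripting, the import that the app performs can also be done programmatically. Here's a minimal sketch, assuming a hypothetical SavedModel folder name; this requires the Deep Learning Toolbox Converter for TensorFlow Models support package:

      % Import a TensorFlow SavedModel folder as a MATLAB network object.
      net = importNetworkFromTensorFlow("socModelFolder");

      analyzeNetwork(net)          % inspect the layers and catch conversion warnings
      save("socNet.mat", "net")    % keep it around, e.g., for Simulink's Predict block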

      Yeah, so a feature input layer, a couple of fully connected layers, ReLU. Looks like a fully connected network.

      Yeah. So this is what we call a multilayer perceptron model. And if you click on the individual layers, you can get into the details of the specifications of each layer. So this provides a quick way for you to do some sanity checks on the properties of the model. And you can also easily export the model to Simulink.

      Yes. And here, you're showing how you're importing the model, bringing it into Simulink. But if needed, we could also change the model, add some layers, do transfer learning with that.

      Yeah, absolutely. So this provides a very interactive environment for you to quickly visualize, edit, and manage all of your models in one place. So you can save the network into a MAT file so that you can access the model later.

      And here we can see that it has automatically launched Simulink with the Predict block created. We can see the Predict block in the Simulink canvas. And then we can add some simple blocks, like From Workspace blocks, to bring in the data inputs that we're going to feed into the AI model.

      Now it's just like any other Simulink block, and we can interconnect it with our mechanical and electrical components, control system, and so on.

      And we've also connected a scope to the output of the AI model so that we can easily visualize the outputs. And then we can run some sections of our live script to prepare the input data for the simulation so that the data is fully loaded in the workspace, and then we can use the From Workspace block to access that data.

      You can see here it's providing automatic suggestions based on what's already in the workspace. And here we're also visualizing the ground truth data that represent the accurate measurements that we obtained from the lab for state of charge. And here, I also want to mention that we have developed some amazing technologies for accelerating the execution of AI models in desktop simulations.

      And that requires an additional set of configuration steps. And that's what we're going through here as you're seeing in the video. And after we're done with the configurations, we can specify the data format to help Simulink understand what kind of data we're feeding into the model.

      And now, finally, we're running the simulation.

      And the model compiles, and the simulation runs. After that, we can double-click on the scope block to see what we have. We can also turn on the legend to help us distinguish between the estimates provided by the model and the ground truth data that we obtained from the lab. Here we can see that the blue line represents the ground truth data, and the yellow line represents the prediction values that we have obtained from the AI model running in simulation.
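
      For those who like to script their workflows, the same run-and-compare step can be done from MATLAB code. A minimal sketch, with a hypothetical model name and logged signal name:

      % Run the model and pull out the logged AI estimate.
      simOut = sim("BatterySOCEstimation", StopTime="3600");
      socEst = simOut.logsout.get("soc_estimated").Values;

      plot(socEst.Time, socEst.Data)
      hold on
      plot(socTruth.Time, socTruth.Data)     % lab ground truth, loaded earlier
      legend("AI estimate", "Ground truth")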

      So overall it looks like it's doing pretty well with capturing the trends. Maybe there are some oscillations that we might want to ask our colleagues that train this TensorFlow model to try to improve upon.

      Yes, absolutely. And here we can see that it's actually showing the discharging and recharging cycles of the battery. So after you have integrated your AI model in Simulink, what are the different validation and verification activities that you can perform on your AI model? So here we have a verification construct that's called baseline testing, which is performing unit testing on the neural network using data-driven tests and simulation to help us assert the accuracy of the model.

      So as we showed before, we have the baseline data which represent the ground truth data that we would collect from lab measurements. And we also have the actual simulation outputs. And here you can see some results from running a baseline test that you can create with a product called Simulink Test.

      And here we see that the blue line represents the ground truth data and the orange line represents the predictions. What you can do with the test case is specify a tolerance level, and if your simulation outputs fall outside of that tolerance, the particular points in time where that happens are failing. On the bottom plot, you can see it's also plotting the differences between the baseline data and the simulation outputs. And we can see that everything is within the tolerance range, which means the test case has passed.

      Yeah, of course, this particular band will depend on your application and what your requirements are. But that's all user configurable.

      Yes.

      And this way you can automate the testing, as you said, and quickly see if the tests pass or not.
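
      The core of such a baseline check is easy to sketch interactively, too. Here's a minimal version using the matlab.unittest framework, with an illustrative tolerance value and hypothetical variable names:

      % Compare simulated predictions against the lab baseline within an
      % absolute tolerance -- the same idea Simulink Test automates.
      tc  = matlab.unittest.TestCase.forInteractiveUse;
      tol = 0.02;    % allowed absolute deviation in state of charge
      tc.verifyEqual(socPredicted, socBaseline, "AbsTol", tol, ...
          "SOC estimate drifted outside the baseline tolerance band")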

      Yeah. And then we also have a notion called back-to-back testing. So you would start with running your models in desktop simulations. As you move further into your workflow, you will generate code from your models and run the code on actual embedded hardware. And then you might even want to incorporate hardware-in-the-loop testing with real-time machines.

      So back-to-back testing makes sure that your model behaves consistently across different development stages in your workflow. And Simulink Test provides a convenient setup for you to configure the simulations to run in different modes.

      So on the right-hand side, you can see that we have configured the back-to-back testing to run in normal mode, which is the desktop simulation mode, and also in software-in-the-loop mode, which means you run the simulation with the code generated from the model. In the bottom right corner, we can see that the test results show the model behaving essentially the same across desktop simulation and software-in-the-loop simulation, so we have more confidence that, as we get further down the workflow, the model still behaves correctly and consistently.

      Because our eventual goal is embedded deployment. So we want to check that the code that we will be running in the embedded system is doing what the model was doing and meeting our requirements.

      And so we talked about the elements and workflows that we can borrow from the high-integrity model-based design workflow. There is another realm of verification activities for AI, which leverage what we call neural network formal verification methods: using mathematical methods to assert certain properties of the network. And one property of the network that we can test is called robustness.

      So what if you're running the AI model in operation and there's noise that causes a disturbance or perturbation to the inputs? Will the AI model handle the disturbance in a robust way, so that there is minimal deviation in the output the AI model produces? We have an add-on called the Deep Learning Toolbox Verification Library that has functionality for you to test the effect of different levels of input perturbation on the output.

      So we have a function called estimateNetworkOutputBounds that can help you assert the maximum amount of deviation that you will see in the AI model's outputs.
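
      Here's a minimal sketch of what calling it could look like; the perturbation size and variable names are illustrative:

      % Given interval bounds on the inputs, estimate bounds on the outputs.
      perturbation = 0.01;
      XLower = dlarray(X - perturbation, "BC");
      XUpper = dlarray(X + perturbation, "BC");
      [YLower, YUpper] = estimateNetworkOutputBounds(net, XLower, XUpper);

      % Worst-case spread of the output over all inputs in the interval.
      maxDeviation = max(abs(extractdata(YUpper - YLower)))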

      Another formal verification idea that we can run on the AI model is what we call out-of-distribution detection. The name is a little bit complicated, but the idea is very simple. Say you train an AI model to classify pictures of cats and dogs, and through the training process the AI has learned that very well. What if, in the operational environment, a picture of a bear slips into the mix? How do we build a mechanism that can flag this kind of input as being significantly different from the inputs that the AI model has seen in its training process?

      It's important for safe operation.

      And there is functionality that we can leverage, again from the Verification Library, for you to build safety guardrails around the AI model in case it is exposed to data that it has not seen in its training process, so that we can either have humans intervene or have another system cross-check the results produced by the AI model.
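
      As a sketch of what such a guardrail could look like with the Verification Library's distribution discriminator-- the network name, method choice, and fallback behavior here are assumptions for the cats-and-dogs example:

      % Fit a discriminator on the classifier's training data, then gate new inputs.
      discriminator = networkDistributionDiscriminator(catDogNet, XTrain, [], "energy");

      if ~isInDistribution(discriminator, XNew)
          % The input looks unlike the training data: fall back to a
          % conventional path or flag the sample for human review.
          warning("Out-of-distribution input detected; using fallback path.")
      end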

      So after you've performed verification on the AI model, it's time to think about deploying the model onto real hardware. And there are some practical challenges when it comes to deploying AI onto edge devices. For example, as Arkadiy mentioned at the beginning, we have limitations on the memory, such as RAM and ROM, of these devices. We have limited raw clock speed, typically in the megahertz to low-gigahertz range. And these devices often have strict limits on power consumption.

      But there's also a strong need to keep the accuracy of the AI models high when we run inference. We want to maintain good throughput of the AI model while keeping its memory footprint small. So here is where compression comes into play, to bridge the gap when we go from simulating and testing the model to deploying the model onto hardware devices.

      And we have a set of techniques to reduce the model size, speed up inference, and maintain the accuracy level of the AI model. We have two broad categories of compression techniques. One is what we call structural compression. This looks at the structure of the neural network and analyzes it to take out the parts-- the weights and connections-- that contribute least to the predictive power of the AI model. That helps reduce the complexity of the neural network.

      And the other type is what we call data type compression. This does not change the structure of the model but looks at the data representation of the weights and parameters in the network. We can use smaller, more efficient data representations for storing the weights of the network. So we can apply quantization techniques and fixed-point conversion to the models.
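
      To give a flavor of both families in MATLAB, here's a minimal sketch. It uses projection-based compression as one structural technique and int8 quantization for data types; the calibration data names are illustrative, and the quantization step assumes the Deep Learning Toolbox Model Quantization Library is installed:

      % Structural compression: project layers onto the principal components
      % of their activations, computed over calibration data.
      mbq = minibatchqueue(arrayDatastore(XCalibration), MiniBatchFormat="BC");
      netSmall = compressNetworkUsingProjection(net, mbq);

      % Data type compression: quantize weights and activations to int8.
      quantObj = dlquantizer(netSmall, ExecutionEnvironment="MATLAB");
      calibrate(quantObj, calibrationData);   % collect dynamic ranges
      netInt8 = quantize(quantObj);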

      Actually, Tianyi, I'll just add that there are also different types of models, and different architectures will produce models that differ in how computationally intensive they are. With these tools, it's easy to try and compare different models. So you showed importing a network, but if you're training in our tools, you could train an LSTM, you could train a fully connected network, you could train maybe a support vector machine.

      And also, sometimes you need to know if AI is the best fit. For this virtual sensor application, Kalman filters are very popular. So you might want to compare what benefits AI brings in comparison to a Kalman filter, and how much more computationally intensive it is. It's easy to run these comparisons with our tools as well.

      So there are a lot of factors that come into play when it comes to model selection, when it comes to applying different compression techniques. And the design space is huge. And MATLAB and Simulink provide tools for you to fully explore all the different choices within that design space and help you gain the most performance out of the models that you build, whether it's AI or non-AI models.

      Right.

      And back to data type compression: we have a data type called bfloat16. What we have come to realize is that we don't need the full 32-bit floating-point representation for storing the weights for the models to be effective and accurate. In fact, for deep learning models that are resistant to precision loss, we can use smaller data types. For example, we can truncate away 16 bits from the full 32-bit floating-point representation and have the model size almost instantaneously cut in half.

      And with large language models that have a lot of parameters, the industry is starting to adopt even smaller data types, like 6-bit or even 4-bit floating-point representations. So this is an example of the effects that you can see on an AI model after we have applied the compression techniques. This is an LSTM model.

      And we can see that by progressively applying structural compression as well as data type compression, we still maintain the same level of accuracy in the model. But we have significantly reduced the size of the model, and we've also reduced the inference latency-- in other words, we've significantly sped up the model when we run the code on embedded devices.

      So after you've compressed the model, it's time to generate code to target hardware. With MATLAB and Simulink, you can generate plain C, C++, or CUDA code to target any CPU, MCU, or NVIDIA GPU device. The big advantage is that you have the flexibility to deploy the code onto any device family or type. And in case you have specific targets in mind, we also offer integration with optimization libraries for you to gain even more performance from the code running on those specific targets.
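
      As a minimal sketch of what that looks like with MATLAB Coder-- the file names and input size are hypothetical:

      % socEstimate.m -- entry-point function for code generation.
      function soc = socEstimate(x)  %#codegen
      % Load the trained network once; the generated code caches it.
      net = coder.loadDeepLearningNetwork("socNet.mat");
      soc = predict(net, x);
      end

      % Then, from the MATLAB command line:
      %   cfg = coder.config("lib");
      %   cfg.DeepLearningConfig = coder.DeepLearningConfig("none");  % plain C, no vendor libraries
      %   codegen socEstimate -args {single(zeros(1,3))} -config cfg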

      I don't know what happened there, Tianyi. But you were not showing FPGAs. We can also generate HDL code to target FPGAs, including--

      Yes.

      --for AI models.

      Yes, that's another option, if you're looking at FPGA or ASIC devices. And we also want to show you what it looks like when you run the AI model as code executing on the actual embedded processor. So this is what we call processor-in-the-loop simulation.

      And I have a quick demo video to show you how it works. So we have functionality for you to manage your simulations. So you've deployed your code. And it's running in real time on the actual embedded processor. And on the top right hand corner, you can see that the model is producing estimations. And it closely tracks the reference results.

      And what is happening is that there is a connection being established between the embedded processor and your desktop MATLAB and Simulink environment. So you can use this connection to monitor the execution status of the code on the processor. And you can also obtain artifacts, like profiling reports, from executing the code so that you can have more insights into the performance of the code.

      OK, thanks, Tianyi. So we'll talk about some other use cases here, so probably not in as much detail. But if you remember, when I started by showing this picture of a control diagram, we talked about virtual sensor modeling. That's what Tianyi focused on now.

      There is also the use case where we might want to develop an accurate model of the system that we are controlling, using data that we collected from hardware or data that we have from a slow but very accurate full-order model. So that's what we're talking about here now.

      So the second use case is AI-based system identification and reduced-order modeling. And the difference is only where the data is coming from. If it's coming from the actual hardware, we refer to it as system identification. If the data is coming from a full-order model, then we refer to it as reduced-order modeling.

      Of course, system identification is an area of engineering that has been around for a long time; MathWorks has had System Identification Toolbox for ages. What's new now is AI. And so we can use AI to develop better nonlinear models.

      OK, so where can you use these models that you develop? What can you use them for? You can use them to run system-level simulations. You can take slow full-order components and create models that can run in real time for hardware-in-the-loop testing. You can use these models as internal prediction models for Kalman filters, non-linear Kalman filters, or non-linear controllers, such as non-linear model-predictive control.

      And, of course, for virtual sensor modeling, the data that you use to train the model could be coming from a full-order model. And then you're into the embedded AI workflows that Tianyi talked about. Here's an example from our customer Subaru. They were using Simulink to design a control system for an automotive transmission.

      They had a detailed full-order 1-D model that they built in another tool, not a MathWorks tool. And this model, while it was great and accurate, just wasn't running fast enough for the system-level simulation and control design they needed. So they used MathWorks AI tools to create an AI-based reduced-order model. And this model ran 100 times faster than the original full-order model while accurately reproducing the signals of interest. So that was great.

      So I'd like to quickly walk you through the demo here. OK, in this case, imagine that we are developing a closed-loop temperature control system for a jet engine. This temperature control system needs to maintain the optimal turbine blade clearance between the tip of the blade and the turbine case. If the clearance is too big, the turbine is not operating efficiently. If the clearance is too small, the blades are potentially rubbing into the case. And that's not good.

      So we're using the cool ambient air to control the amount of cooling that we provide to the blade. And so we need to have a good model of how the maximum displacement of the blade tip will change depending on different conditions. And we'll use this information to run system simulation and design an advanced non-linear model-predictive controller.

      So in the Simulink model that we are using, we are specifying the ambient conditions, we will be designing our controller, and we have this full-order, high-fidelity FEA model from a third-party tool that computes maximum tip displacement.

      Now, if you think about this full-order model, it's doing the computations that are based on partial differential equations to compute the temperature profile at each point in the mesh. And this is a very intensive set of computations that we sped up here just for illustration. But it's actually not anywhere close to running at the speeds that we need for system-level simulation.

      So this model, again, could be coming from a third-party FEA or CFD tool that you bring into Simulink through a functional mock-up unit, using the FMU/FMI standard. In this particular case, the full-order model takes 30 seconds to compute just one time step. So it's definitely not suitable for control design or hardware-in-the-loop testing.

      Now, we are not interested in this full 3D temperature profile. We only want to know how the output, the maximum displacement, changes as a function of the input signals: ambient temperature, ambient pressure, and cooling temperature. So what we will do is use a tool called the Reduced-Order Modeler app to do design of experiments, collect the data from the full-order model, and train an AI-based reduced-order model that will run fast enough for system-level simulation and control design.

      So the workflow includes doing design of experiments and collecting data, training AI models, and bringing those AI models into our simulation environment, as we've seen here-- and potentially also using these models for embedded deployment, hardware-in-the-loop testing, or even exporting them for use outside of Simulink using FMU export.

      OK, so we'll do that using a pretty new app that we have called Reduced-Order Modeler, which is available to download on the page that you see here. This app helps you go through the stages of the design workflow: run design of experiments, generate data to train the models, train and compare different types of AI models, and export trained models for use in Simulink or even outside of Simulink.

      So let's take a look at a demo very quickly. We'll start the app and open a new session. In the session, we will specify the signals in the Simulink model that will be the input and output signals for our reduced-order model.

      So in this case, the ambient temperature, cooling temperature, and pressure are signals where we will inject the excitation signals to generate data for training an AI model. The signals that will be inputs to a reduced-order model are filtered versions of the signals. And the output for the reduced-order model is going to be the maximum displacement.

      So we'll go and select this maximum displacement signal now. And we'll say that that's our output signal of interest. So now that we specify which signals we are going to be working with, next we will specify the excitation signals to inject into the model. So we will go and create a new experiment where we will specify the excitation signals. So let's go ahead and do that.

      The tab in the app is opening up. Here we are specifying whether we want to replace or add an excitation signal, and the signal profile-- in this case, random pulses-- how many pulses, and the amplitudes of the signal.

      We can visualize the coverage of our design space. And once we're happy with that, we can run simulations-- in parallel, to speed up collecting the training data. So we're going to fast-forward through the part where we're running simulations and just show you the results.

      And so here is the data that we collected. And we are now ready to move on to the next stage, which is AI modeling. Now that we have this data, we can try estimating different types of AI models that you see here on the top.

      And so in this case, we'll try to fit a neural state-space model. You might also be familiar with this as a neural ODE model. I'm familiar with state-space models; here it's a variation of those models where, instead of the fixed A, B, C, D matrices, we're using neural networks to compute the state derivative and the output. And we'll train these neural networks from data.
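
      For reference, here's roughly what setting this up in code could look like with System Identification Toolbox; the state, input, and output counts and the data variable are illustrative:

      % A neural state-space structure: neural networks stand in for the
      % usual fixed A, B, C, D matrices.
      nss = idNeuralStateSpace(4, NumInputs=3, NumOutputs=1);

      opt = nssTrainingOptions("adam");
      opt.MaxEpochs = 300;

      % trainingData holds the logged excitation and response signals.
      nss = nlssest(trainingData, nss, opt);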

      OK, so here's a quick demo of how that works. We're going to select neural state space as the model type. We specify which of the data that we collected we will use for training and click OK here. That opens the part of the app that's called Experiment Manager. And it comes with preconfigured templates for different types of models that we might want to train, including this neural state-space model.

      What we're doing now is specifying the different hyperparameter values that we want to try: the number of epochs, the number of layers, the number of neurons in the layers-- because we really don't know which combination of hyperparameters will give the best results. With this tool, we can easily and interactively specify these different combinations to try. Then we run all these different training combinations simultaneously, using parallel computing again.

      And a while later, we get our training results. We can view the training loss and validation loss, select the model with good loss numbers, and bring it into Simulink for simulation and for control design. So that's the quick demo of the Reduced-Order Modeler app.

      Let me move on and, I think, in the interest of time just briefly mention reinforcement learning, which is an exciting capability that a lot of our customers are interested in. Reinforcement learning is a different type of AI. So the previous examples of AI that we talked about, those can be classified as supervised learning. We have the training data. And we train an AI model to provide accurate regression or classification decisions based on the training data.

      Reinforcement learning is different. It's not a supervised learning approach. Instead, we're trying to train an AI model to achieve a desired outcome, and it does that through repeated trial-and-error interactions with an actual device. But that might be dangerous. So practically, we recommend starting with a simulation model. And what better way to create a simulation model than Simulink?

      So here you can imagine that you can model a robot-- the mechanical part of the robot, the motors that are driving the joints. And you want to train a complex control system, which you could of course go and design using traditional techniques. But you could try to use the power of AI to train an AI-based control system that will make this robot walk. And it will just learn, like small kids do, by trial and error.

      So for that, we have a product called Reinforcement Learning Toolbox. This product provides built-in and custom reinforcement learning algorithms. It lets you interface these reinforcement learning agents with environment models that you can build in MATLAB and Simulink.

      You represent the policies that make up the agent using neural networks from Deep Learning Toolbox. You can accelerate the training by parallelizing it with Parallel Computing Toolbox and MATLAB Parallel Server. And we provide a nice app that helps you go through the reinforcement learning workflow.
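
      To give a flavor of this workflow in code, here's a minimal sketch for a Simulink environment; the model name, agent block path, signal dimensions, and agent type are assumptions for the walking-robot example:

      % Observation and action specifications for the walking robot.
      obsInfo = rlNumericSpec([8 1]);    % e.g., joint angles and rates
      actInfo = rlNumericSpec([2 1], LowerLimit=-1, UpperLimit=1);

      % Wrap the Simulink model as an environment around an RL Agent block.
      env = rlSimulinkEnv("walkingRobot", "walkingRobot/RL Agent", ...
          obsInfo, actInfo);

      % A DDPG agent with default actor and critic networks.
      agent = rlDDPGAgent(obsInfo, actInfo);

      opts  = rlTrainingOptions(MaxEpisodes=2000, UseParallel=true);
      stats = train(agent, env, opts);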

      Once the policy is trained and is meeting your requirements, you can deploy it to embedded production systems using the capabilities Tianyi walked you through. And as some of the questions were asking, there are also multiple examples that show you how to use reinforcement learning for control, automated driving, robotics, and decision-making tasks. So I encourage you to take a look.

      The workflow for reinforcement learning includes modeling the environment, formulating what the reward is, that is, how well our agent is doing, what we want it to do, specifying the agent, training it, and then testing and deploying it. And we provide tools that help you go through all the stages that we've seen here.

      We have several examples of customers that have successfully used our reinforcement learning capabilities to solve exciting engineering problems. Krones is using reinforcement learning to develop a molding machine that creates bottles for beverages. And my personal favorite is the Max Planck story, where reinforcement learning was used to design a complex controller in a system that detects gravitational waves.

      Now, briefly I'll mention that there are many other use cases for AI. We just touched the tip of the iceberg. So AI can be used for anomaly detection in your control system. You can detect that something is off. Maybe you need to reconfigure and use different sensors because the sensors that you have are failing. Or you can use anomaly detection for predictive maintenance to maybe schedule a visit with a technician to service your machine.

      We already talked about vision-based applications, where AI can be used for object detection and object recognition. And we talked about reinforcement learning as just one technique for AI-based control. It doesn't have to be reinforcement learning; we have features, capabilities, and examples that show you how to train a neural state-space model from data and then use it as the internal prediction model for model-predictive control.

      And for all of these use cases, we've put the links in the chat so that you can explore further on your own.

      That's right. Thanks, Tianyi. And you're probably thinking, why should I use MATLAB and Simulink for this? As we tried to make clear, you can train AI models with our tools, or you can get AI models from colleagues who train them in TensorFlow or PyTorch. It's all good.

      The value we are providing is that you can integrate the AI models into the complex, multi-domain engineering designs that you would typically use model-based design to develop. This way, you can understand how these AI components will work in the larger, more complex system that you are designing.

      We're also thinking about specific engineering applications of AI, such as predictive maintenance, reduced-order modeling, and visual inspection-- thinking through all the different steps that you have to go through and packaging them in easy-to-use apps that walk you through all the stages of the design. I quickly showed the Reduced-Order Modeler app that we have. We have similar capabilities for visual inspection and predictive maintenance, and we are happy to tell you more about those offline; we don't have time today.

      We help you take these AI models to deployment, with model compression, verification, and cross-platform code generation capabilities. And I think it's pretty powerful that you can take the same model and deploy it to a microcontroller-- or, if you change your mind, have it running on an FPGA in no time. And as we said, you can train these AI models with our tools, or if you prefer, you can bring models from TensorFlow and PyTorch and use them with our tools as well.

      All right, so finally, where can you learn more? Tianyi showed you the page that summarizes these use cases and provides useful links; they should also be available to you now. You can go and look at the detailed seminars or webinars, the video recordings, for all these different applications: virtual sensors, reduced-order modeling, reinforcement learning, data-driven control, visual inspection, and others. We just don't have time to go into all of these applications in detail, but you can on your own after this livestream.

      So hopefully this was useful. Thank you very much for watching. Thank you very much for asking questions. And there are tons of resources that we can provide to you to learn more. We are happy to talk to you. Our engineers are happy to talk to you. So please reach out. And we are happy to work with you to help you use AI to create better engineered systems.
