Accelerate GT-Power to Simulink Model Order Reduction on the Cloud - MATLAB & Simulink

    Accelerate GT-Power to Simulink Model Order Reduction on the Cloud

    As an integration platform, Simulink® can cosimulate with many third-party tools, including GT-Power. Learn how you can generate a reduced-order model from GT-Power in Simulink. Furthermore, with preconfigured cloud resources, you can accelerate the process significantly by executing the simulation on multiple cores in parallel.

    Published: 10 Jun 2022

    Hello, everyone, and welcome to our session about accelerating a model order reduction process using cloud-based parallel computing. My name is Brad Hieb and I'm an Application Engineer out of our Detroit-area office. And joining me on this presentation today is my colleague, Winston Yu.

    So here are some key points to keep in mind during our talk. First, you can use Simulink as an integration platform to make it easy to reuse existing modeling assets from a variety of different sources to simulate complex systems. Second, you can use MATLAB's capabilities to easily scale up to running multiple Simulink simulations in parallel on the cloud. And third, the combination of Simulink's integration platform capabilities, along with the ease of scaling to large parallel runs on the cloud, makes it easy to manage the trade-off between simulation speed and fidelity.

    So over the next 20 minutes or so, Winston and I are going to cover using Simulink as a simulation integration platform, so that you can simulate with other tools. We're going to talk about how to perform a model order reduction process to convert a GT-POWER engine model into a Simulink mapped engine model, how to use parallel computing on the cloud to accelerate that model order reduction process and, finally, how to use MATLAB to make it easy to scale up to cloud-based parallel simulations.

    So let's get started with our first topic, co-simulating Simulink and third-party tools. Simulink provides a variety of techniques to integrate or co-simulate with third-party tools and models. It offers standards-based interfaces, like the FMI, or Functional Mockup Interface. It has an S-Function interface that's been around since the inception of Simulink. And it has a variety of techniques to make it easy to integrate with custom C or C++ code.

    So our vision is that, with Simulink, you can bring together all of your design components, no matter where they originated from. And then when you do that, everything should just work. So for example, I can co-simulate a Simulink virtual vehicle model with a high fidelity GT-POWER engine model, to do things like virtual calibration studies or maybe even engine controls validation.

    Now, for those of you who haven't heard of GT-POWER, it's a very detailed, high-fidelity engine design tool used by virtually all companies that design and build internal combustion engines. Suppose I need to use this virtual vehicle model to run hundreds or maybe thousands of drive cycle simulations. And if I use the detailed, high-fidelity engine model, even if the model runs at real-time speed, each of these simulations could take 30 minutes or an hour to run. So how do I avoid waiting days or weeks to get my results?

    Well, if I don't require the fidelity of the detailed engine model, I can use model order reduction to convert that detailed engine model into a lower fidelity but faster running model, to improve my simulation speed overall and then, therefore, get my results faster.

    So let me show you, now, how we generated a reduced-order engine model from that GT-POWER detailed engine model. So what we did was set up a co-simulation between an engine dynamometer model, which we ship with Powertrain Blockset, and a GT-POWER model. And we used one of the built-in experiments from that engine dynamometer model to perform the model order reduction process, which then converts the detailed GT-POWER model into a fast-running, mapped engine model.

    So what is a mapped engine model? Basically, it's a representation of the as-calibrated, steady-state engine behavior, in terms of response surfaces, like torque production, fuel flow, exhaust temperature, engine-out emissions, things like that. Now, these responses are all functions of the engine operating point, which is defined as the commanded torque at a given engine speed, which is why I call them response surfaces. And you can see a picture of that, here on the right-hand side of the slide, where we've got these 2D tables here, all indexed by, in this case, torque command and engine speed, and then providing the outputs, like I was mentioning-- torque, airflow, exhaust temp, things like that.

    Now, this type of model is relatively easy to parameterize. It's very accurate and it's very fast. And all of this makes it very useful for system-level simulation activities, like, maybe, fuel economy and performance studies, powertrain sizing, or maybe even shift schedule optimizations. Now to convert a high-fidelity GT-POWER model into a mapped engine model, what I do is, first, exercise that GT-POWER model at a bunch of different engine operating points.

    All right, and then at each of those operating points-- record the steady-state engine responses. And then format that steady-state engine response data, at each of the operating points, into these 2D response surface maps, or basically lookup tables, which are then plugged into the mapped engine model. Now, if I ran this model order reduction process in a serial fashion on my laptop-- which I have-- it takes about three hours to complete. But if I run the same set of simulations in parallel on the cloud, I can do that in about 10 to 12 minutes.
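    To make the idea of those response surface maps concrete, here is a minimal MATLAB sketch, with made-up breakpoints and placeholder data, of how steady-state results swept over speed and torque commands could be stored as 2D tables and then interpolated at run time:

    % Hypothetical operating-point grid (values for illustration only)
    spdBkpts = 1000:500:5000;        % engine speed breakpoints [rpm]
    trqBkpts = 20:20:300;            % torque command breakpoints [Nm]

    % Steady-state responses recorded at each operating point are stored as
    % 2D tables, one per response (torque, fuel flow, exhaust temp, ...)
    [SPD, TRQ] = meshgrid(spdBkpts, trqBkpts);
    trqMap  = TRQ .* (0.9 + 0.05*rand(size(TRQ)));   % placeholder data
    fuelMap = 1e-4 * SPD .* TRQ;                     % placeholder data

    % The mapped engine model then interpolates these tables at run time
    spdCmd = 2750;  trqCmd = 130;                    % current operating point
    trqOut  = interp2(SPD, TRQ, trqMap,  spdCmd, trqCmd);
    fuelOut = interp2(SPD, TRQ, fuelMap, spdCmd, trqCmd);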

    So let's look at how we sped up this model order reduction process, using parallel computing on the cloud. Now, there could be a lot of reasons for using cloud computing. But some typical ones are things like, I just don't want to tie up my local computer, or I need some special hardware and I only need to use it once in a while, so maybe it's cheaper to rent it versus buy it, kind of a thing. And sometimes, the data you need is already on the cloud. So it's more efficient to move the computations, or in this case simulations, to where the data is, rather than shuffle the data back and forth.

    Now, one of the key values MathWorks offers is the ease of transitioning from a desktop execution context to cloud-based execution contexts. In that case, you don't have to rewrite your algorithms or your scripts or anything: if it works on the desktop, it's going to work on the cloud. And you don't have to worry about whether your OS is covered, because we support both Windows and Linux.

    So let's take a look at how we use these capabilities for our model order reduction process. So what we did is used what's called a MATLAB reference architecture to set up and instantiate the cloud computing resources that we needed to run that model order reduction process, in parallel, on the cloud. Now you can think of these reference architectures as essentially a recipe that defines and ensures that all the software you need to run your tests will be installed on these virtual machines. And once you do that, you can then connect to these cloud-based machines via something as simple as just a remote desktop.

    And then, what this allows me to do is take my model order reduction simulation setup from my local desktop machine and transfer it to the machine on the cloud, which I can then use to speed up my simulation runs. I chose a machine with 32 physical cores, which, in my case, worked really well. There's a lot-- there's a ton of different machine types available. So you can select the one that's best suited for your particular needs. This approach supports all the major cloud service providers, like AWS and Azure.

    So let's take a look at what this actually looks like. OK, so here's my remote connection-- a remote desktop type connection to my Amazon Web Services cloud machine that I've got set up. So I'm using basically a web app from Amazon-- AWS-- called NICE DCV. But it's basically a way to log in to the remote machine on the cloud.

    So let me do that. I'll hit the Login button, here. And that's going to take me to the machine that I've set up. This one happens to have 32 cores, right? And I'm going to put this in full screen mode, now, so it makes it a little bit easier to navigate.

    So let me bring up-- I have the Simulink model already opened and configured. So this is the same Powertrain Blockset engine dynamometer reference application that I was mentioning in the slides a few minutes ago. And so let me show you where we connected to the GT-POWER model. I'm going to take you inside of the engine subsystem here.

    So I have a tab for that. So now I've drilled in a few layers in. And you can see that what I'm doing is, essentially, I'm collecting up all the input signals that I need, routing them into this subsystem here. If I go inside of that, you'll see this block. This is a block from GT-POWER, which is what sets up the co-simulation between Simulink and the GT-POWER model itself. And so that's the piece-- that's the key piece that makes this work.

    All right, now, having said that, what we're going to do is actually run the experiment that's going to automate the model order reduction process. And I do that just by double clicking on this block, here. And it's going to run a callback script. So I'll do that now. So that's getting that started.

    And while that's happening, let me bring MATLAB up. OK, so here's MATLAB. And I'll just slide the Simulation Manager aside. We'll get to it in a second. OK, so you can see that there's some initialization happening. There's some output being fed back to the MATLAB command window.

    But while that's going, I wanted to take you through the code that is actually being executed when I double-clicked on that green box on the model. This is the code that's being called to perform the model order reduction process. So effectively, what I'm doing is, I get my torque and speed commands that I want. So I have a speed command vector and a torque command vector. Bring those in.

    I figure out how many sims that's going to be. And then I pre-- I use that to pre-allocate the number of simulation input objects that I'm going to need to run my experiments. And these simulation input objects are used with parsim. So let me go down here and explain a little bit more about those, before we get to that.

    So what I do then is, I have a nested loop here-- a pair of loops. The outer loop being, I think it's the number of simulation speeds, and the inner loop being the commanded torques. And then I use some properties of these simulation input objects, in this case setBlockParameter, to set the individual values of a constant block within the model to the value of the individual speed that I want.

    And that's what this one does. And I do the same thing for the commanded torque that I want. And that information is stored inside the simulation input object and sets up a particular run. I also use the same kind of approach to turn off some performance monitors that run at the end of each simulation, just to save time. I'm using this in a headless mode, so I don't really need to watch those after the end of each run. So I turn those off.

    And I also have a post-sim callback function that you can-- it's an optional thing that I added, just so we could retrieve the GDX files from the GT-POWER model. So if we want to do some data post-processing from that afterwards, we could. So once that is all set up, then, effectively, I just call parsim and pass in the input object array.

    There's going to be-- what did we have? 225 of these that are going to be in here. And then there's a series of options that I provide to attach files to each worker, show the progress on the command window, and then turn on the simulation manager, which I alluded to earlier. So let's bring the simulation manager back to the foreground here. And here we are.
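    Putting those pieces together, here is a minimal sketch of the kind of driver script described above. The model name, block paths, attached file name, and sweep values are hypothetical placeholders, not the actual reference application setup:

    mdl = 'EngineDynoGTCosim';              % hypothetical model name
    spdCmds = linspace(1000, 5000, 15);     % speed command vector [rpm]
    trqCmds = linspace(20, 300, 15);        % torque command vector [Nm]
    numSims = numel(spdCmds) * numel(trqCmds);   % 15 x 15 = 225 operating points

    % Preallocate one SimulationInput object per operating point
    in(1:numSims) = Simulink.SimulationInput(mdl);

    k = 1;
    for i = 1:numel(spdCmds)                % outer loop: commanded speeds
        for j = 1:numel(trqCmds)            % inner loop: commanded torques
            % Set the Constant blocks that hold the commanded speed and torque
            in(k) = in(k).setBlockParameter([mdl '/Speed Command'],  'Value', num2str(spdCmds(i)));
            in(k) = in(k).setBlockParameter([mdl '/Torque Command'], 'Value', num2str(trqCmds(j)));
            % Optional post-sim callback, e.g. to retrieve GT-POWER result files
            in(k) = in(k).setPostSimFcn(@(simOut) struct('opPoint', [spdCmds(i) trqCmds(j)]));
            k = k + 1;
        end
    end

    % Run all operating points in parallel on the pool of workers
    out = parsim(in, ...
        'AttachedFiles', {'engine.gtm'}, ...     % hypothetical GT-POWER file
        'ShowProgress', 'on', ...
        'ShowSimulationManager', 'on');

    Each element of the returned out array is a Simulink.SimulationOutput object, one per operating point, which is what the Simulation Manager tracks below.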

    All right, so go to this tab. So this is super handy for monitoring and troubleshooting different aspects of the parallel simulation while it's running. So what I'm showing here, on this tab, is all the queued simulations. There's a box for each one of the model operating points that I need to run. And the blue boxes are queued simulations.

    And you probably saw here, just real quick, these black windows. So these windows are popping up. I've got a pair of monitors-- monitoring, I should say, two signals in the GT-POWER model-- and I put them up on these displays. I just did that so I could see that the GT-POWER model was running correctly and I didn't mess something up. So those will start going here, in just a little bit.

    And one other thing I want to call out as I go down here: if I hover over this little icon in the lower left of the MATLAB command window, you can see that I'm using a parallel pool that has 32 workers in it, which corresponds to the machine I have. And that's why, if you counted them, there'd be 32 of these queued blocks going at one shot. And that's how many sims will be running at the same time.
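    For reference, here is a minimal sketch of how a pool like that might be started before calling parsim; the worker count of 32 simply mirrors the physical cores on this particular machine:

    % Start (or reuse) a local parallel pool sized to the machine's 32 cores
    pool = gcp('nocreate');
    if isempty(pool)
        pool = parpool('local', 32);
    end
    fprintf('Parallel pool running with %d workers\n', pool.NumWorkers);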

    OK, so we have just a little bit more time-- a few more seconds, or maybe 20, 30 seconds more or so for this to finish initializing and getting all the simulations running. Oh, in fact, one of these has actually completed now. So, good, I can actually now show you another thing that's kind of handy about this.

    I can add another figure window here, if I click this button. And what this will allow me to do is monitor the progress of signals that are being pulled back from the individual workers as the simulations progress. So here, I've got a surface plot where I've got the commanded engine speed, here on this axis, and the commanded torque vector, here.

    And then I'm going to put, on the vertical axis, the actual measured engine torque that's being generated by the GT-POWER model. So let me go down here and find that signal. Then, let's see, there it is, this guy right here. And I can give it a label. Let's see, this is the measured engine torque. Ah, M-E-A-- something like this.

    OK, so what's handy here is, I can look at the individual signals-- in this case, the engine output torque-- and what I'm looking for is any kind of anomalous behavior. If I see big spikes or something crazy, that might be a prompt for me to halt the simulations and do some troubleshooting, before finding out at the very end that something blew up in the middle, and having to repeat the whole thing again.

    So at this point I'm going to stop talking. It takes about 10, 12 minutes for this whole job to complete. So I'll stop speaking and then I'll just accelerate the playback a little bit, so we can get through that in a few seconds, rather than waiting 9 or 10 minutes. And then I'll rejoin after it's completed.

    OK, I'm back. You can see the simulation job has finished. And I'm getting some default plots that I've set up as part of the model order reduction process automation. So I've got-- all of the data has been retrieved from each of the workers. So all of the individual runs are compiled in this one big composite data set. And I can look at the various-- in this case, a bunch of signal data, in the Simulation Data Inspector. So if I wanted to go in and look at individual runs and the details of all of the signals, I can do that.

    And then I've got this summary plot that shows-- you can see the little blue squares, here. Those are the individual simulation results that I got back, and each of those little squares represents a data point at a particular engine torque/speed operating point. So also-- I can get this out of the way.
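    Here is a hedged sketch of how a summary surface like that could be assembled from the parsim outputs after the job completes. It assumes the measured engine torque was logged as a signal named 'EngTrqMeas' and that the run used the same 15 x 15 sweep as the earlier sketch; both are assumptions, not the reference application's actual names:

    % 'out' is the SimulationOutput array returned by parsim
    spdCmds = linspace(1000, 5000, 15);     % assumed sweep grids
    trqCmds = linspace(20, 300, 15);
    trqMeas = zeros(numel(trqCmds), numel(spdCmds));

    k = 1;
    for i = 1:numel(spdCmds)
        for j = 1:numel(trqCmds)
            sig = out(k).logsout.getElement('EngTrqMeas');   % hypothetical signal name
            trqMeas(j, i) = sig.Values.Data(end);            % last sample ~ steady state
            k = k + 1;
        end
    end

    % Summary response surface over the swept operating points
    figure;
    surf(spdCmds, trqCmds, trqMeas);
    xlabel('Engine speed command [rpm]');
    ylabel('Torque command [Nm]');
    zlabel('Measured engine torque [Nm]');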

    If I go back here to my configuration-- I'm sorry, my Simulation Manager-- and look down at the bottom, here, you can see it completed 225 sims, in this case in, I think, 12 minutes 41 seconds. So that gives you a summary result. So at this point, I'd like to transition back to the slides.

    And what I'd like to do is transition to my colleague, Winston, who's going to take you through some of the details of how we set up these cloud computing resources on AWS and give you some options for doing that. So, Winston, take it away.

    Thank you, Brad. I'm Winston Yu. I'm going to introduce the IT infrastructure for running this kind of simulation on the cloud with MATLAB and Simulink, and there are multiple approaches. For example, a very popular one is called MATLAB Online. It is hosted by MathWorks on cloud infrastructure and allows users to run MATLAB through a web browser.

    A similar approach is available through a product called MATLAB Online Server, which runs MATLAB inside the web browser on a private cloud. So instead of running everything on mathworks.com, it runs under your own domain name and network, using your Azure or AWS account, so all your data stays privately owned. It can also work in an on-premise environment, through a Kubernetes cluster.

    For running this kind of co-simulation, since we need third-party software, the easiest way is just to run a multi-core VM on the AWS or Azure cloud. And how do we do that? Usually we go to something MathWorks provides called the MATLAB reference architectures. A MATLAB reference architecture is a GitHub repository hosted by MathWorks.

    To access the MATLAB reference architectures, you just need to do a simple Google search for MATLAB reference architecture in the browser. Then you can access the repositories hosted by MathWorks. We will choose matlab-on-aws in this case. And this is the reference architecture landing page.

    Scrolling down, you will find the different types of MATLAB VM images hosted by MathWorks. Here, we will just select the Windows 2020a reference architecture. When we go inside of this, there are a bunch of links for different AWS regions. When you click this button, it will launch a CloudFormation template, so you can fill in all the details for this future VM.

    The first field we need to fill in is the Stack name. I would suggest using something unique. The same applies to the Instance name. Then you need to choose the machine type. This is an m5.xlarge machine; you will see it is a four-core machine with 16 GB of memory. Then you need to set up remote access.

    A simple "what is my IP" search on Google will find your IP address. So let me just put my IP address in there. Then you need to select a key, which is your own identification of who owns this VM. So let me just select a key which I have.

    Then you need to set up a password for remote login. After you type the password, you need to select the VPC and the subnet, which most likely have already been set up by your administrator, and you can select the right one that has a public access interface. Then you just scroll down, acknowledge the notice about the resources that will be created, and click Create stack. Now you need to wait about 5 to 10 minutes until the machine is ready.

    After a few minutes, the CloudFormation template finishes the deployment. At that point, you can go to the EC2 console and find the machine. Here is the machine name we have. And from the networking tab, you will find its public IP address. Now you can remote desktop to that machine, and you get access to a MATLAB desktop with the latest MATLAB R2020a installed.

    The next step is to install GT-POWER and copy all the data to the machine. Once you finish the installation, you have everything you need on that remote machine and you can run the co-simulations. Once the simulations finish, you can go back to the EC2 instance and stop the instance. By doing so, you save on cost.

    Once you are ready for the next simulations, you can restart this machine and follow the same workflow as before. This is a very straightforward workflow. You can also take the AWS reference architecture provided by MathWorks, customize it, and run your own reference architecture by building your own image. So you can produce a customized reference architecture that has GT-POWER and MATLAB installed, and then provide this image to your coworkers. If you want to do something like that, you can contact us, and we can guide you on how to produce this type of customized reference architecture.

    OK, thanks, Winston. All right, from here, what I want to do is wrap things up and remind you of some of the key points that we hope you remember from this talk. So first is that you can use Simulink as an integration platform to make it easy to reuse existing simulation assets from a variety of different sources-- for example, co-simulating with the GT-POWER engine model, like I just showed a few minutes ago.

    Second, you can use MATLAB's capabilities to easily scale up to running multiple Simulink simulations, in parallel, on the cloud. And that's just like what we showed, when I was generating that mapped engine model from the detailed GT-POWER model, where we were running it up on the 32-core AWS machine.

    And finally, the combination of Simulink's integration platform capabilities, along with the ease of scaling to large parallel runs in the cloud, makes it easy to manage that trade-off between simulation speed and fidelity. And that means if I don't need the really detailed model, I can very quickly get a mapped engine model. And I can do that over and over again, because it's automated.

    So that concludes our presentation. We hope you found it useful and informative. Feel free to reach out to either Winston or myself if you have questions, or would like to learn more about anything we covered today. Thank you very much for your attention and have a good day.
