
    Advancing Energy Analysis at GM with the New VERDE Toolchain

    Anamitra Bhattacharyya, GM
    Nathan Wilmot, GM

    General Motors has a long partnership with MathWorks, utilizing MATLAB® and Simulink® to develop proprietary tools for vehicle efficiency and longitudinal performance analysis. Ever-increasing reliance on virtual methods in all phases of product development required improvements to simulation tool efficiency, speed, and fidelity. We also acknowledged the benefits of increased collaboration through modular tool design, democratization, and co-simulation. Recognition of these needs drove a complete redesign of core tool structure and interfaces. The newly introduced VERDE (Vehicle Energy and Range Development Environment) toolchain builds on MATLAB and Simulink with App Designer and Simscape™ to address these needs. In this presentation, hear about VERDE core capabilities, new interface, unique features, and integrated pre- and post-processing that enable General Motors to develop conventional, BEV, and hybrid architectures for leading-edge fuel economy, range, performance, and global CO2/GHG strategy.

    Published: 14 May 2024

    All right. Good morning, everyone. As mentioned, my name is Nate Wilmot. I'm going to be here with Anamitra talking about our new VERDE toolchain that we've deployed at General Motors. We're going to lead you through a brief introduction and then get into some of the tool details and execution, some of the VERDE features and use cases, and then where we're headed next.

    So many of you, I believe, are familiar with GM's vision of zero crashes, zero emissions, and zero congestion. So in the Energy Model and Toolchain Development team, we really focus on the zero emissions part of this vision. We've got a strong team, and our main goals are to provide the analytical tools, virtual infrastructure, and user support to engineer efficient, capable, and exciting vehicles.

    We do this by managing the suite of tools used to predict, analyze, and validate core performance attributes, mainly aimed at the things that I used to characterize as the stuff that happens when you push the pedal on the right. When we move into BEVs and electrification, it also involves the pedal on the left quite a bit. But we're mainly aimed at the longitudinal motion of the vehicle, its efficiency, and everything related to that.

    So before we get into the details of the tool, I want to lead you a little bit through General Motors' journey over the last 20-25 years in the energy modeling space. It incidentally coincides with my time at GM. Everybody has their own lens on the world.

    So when I came in, we were 100% in-house tools and codes. We had some really talented people develop some very cool tools that were able to do base predictions. And then, in the late 1990s--in 1999, 2000, 2001--we switched from the in-house tools to using the MATLAB and Simulink environment. At the time, we were much more focused on single users, so everybody ran their own thing on their own laptop or desktop, and they ran essentially one run at a time.

    As we got into the later 2000s and 2010s, we expanded to be able to do multiple runs or DOEs, built the capability, enabled more people to run more simulations faster, and we started dabbling in the world of co-simulation. How do we start to connect some of these tools, whether it's thermal or noise and vibration? How do we bring our ecosystem together a little bit?

    All right. So this is all good stuff, right? Our capability is going up. We're building user efficiency. This is exactly the path we want to be on.

    Now, I will say, at the same time, we also saw our global users start to decline. So there is some natural tendency here. Efficiency increases. You can start to do more with less. And GM was also on a little bit of a trajectory to right size.

    So there were multiple factors here. But again, the users were declining. We also saw that this space was rapidly evolving. We had industry standards coming into play. Some of the other toolchains out there had more modern interfaces than what we had. And then tool connections and co-simulation were becoming increasingly important.

    So we kind of hit this inflection point. Do we maintain the status quo, just kind of maintain our toolchain, and hope for the best? Do we look at winding it down, retiring this particular in-house toolchain and moving on to something else? Or do we look for growth opportunities?

    We obviously chose the last option, because we're here talking today. So we chose to grow. We took a step back and said, OK, what are the biggest areas where we can grow, the biggest challenges that we need to address? And we broke those down into speed, scale, data management, accuracy and fidelity, democratization, and co-simulation. So all of these were things we wanted to address--we wanted to make the tool faster, and we wanted to increase the number of simulations: go from single user runs to tens to hundreds to thousands of runs by a user.

    Manage our input and output data--data management is a struggle in any large organization, as is how you communicate across teams, so we realized we needed to do better there. Accuracy and, just as importantly, closing the loop on that accuracy to understand how good the tools are. Look at not just aligning our metrics--most tools can predict a 0 to 60 time or a range estimate with pretty good accuracy--but do you actually show the correct behaviors throughout that entire simulation?

    Fidelity--how do you go from early architecture definition in a very coarse environment, where you're downselecting technologies, all the way to pre-production or production and beyond? And as John was talking about the SDV, how do you validate your software or your calibration changes before you push those out to the customer? You need additional levels of fidelity to be able to do that.

    And democratization: getting more team members to engage with the toolchain, be able to use it, and get reasonable and meaningful results. And then, you'll hear this come up quite a few times throughout the presentation: co-simulation. Again, not just focused specifically on our domain or the energy space, but also working with the other domains, co-simulating with their tools, building the best models in the best tool for the job, and then connecting those. So all of these led us to VERDE. So what is VERDE?

    VERDE stands for the Vehicle Energy and Range Development Environment. So it's used throughout GM. We've got close to 400 global users, again, across multiple domains. It is a forward-driven model.

    So when I say forward driven, many of you are probably familiar with that. It essentially means that we give the model an objective, and an integrated driver pushes on the virtual pedals to achieve the desired results. It's capable of handling pretty much any vehicle architecture in any scenario and any propulsion configuration.
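
    To make "forward driven" concrete, here is a minimal MATLAB sketch of the idea, purely illustrative and not GM's actual driver or vehicle model: a simple PI "driver" watches the speed error against a target trace and pushes a virtual pedal, and a basic longitudinal model integrates the result. The mass, road-load coefficients, gains, and target profile below are all assumed values.

```matlab
% Minimal sketch of a forward-driven simulation loop (illustrative only).
dt      = 0.1;                          % time step [s]
t       = 0:dt:60;
vTarget = min(20, 0.5*t);               % hypothetical speed target [m/s]

m = 1800; Fmax = 4000;                  % assumed mass [kg], max tractive force [N]
A = 120; B = 1.5; C = 0.4;              % assumed road-load coefficients
Kp = 0.8; Ki = 0.2;                     % driver gains (hand-tuned for this sketch)

v = 0; errInt = 0; vLog = zeros(size(t));
for k = 1:numel(t)
    err    = vTarget(k) - v;                          % driver sees speed error
    errInt = errInt + err*dt;
    pedal  = max(-1, min(1, Kp*err + Ki*errInt));     % virtual pedal, -1..1
    Ftrac  = pedal*Fmax;                              % tractive (or brake) force
    Froad  = A + B*v + C*v^2;                         % road load
    v      = max(0, v + (Ftrac - Froad)/m*dt);        % integrate vehicle speed
    vLog(k) = v;
end
plot(t, vTarget, t, vLog); legend('target','achieved'); xlabel('s'); ylabel('m/s');
```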

    Again, the main focus is on customer-facing efficiency and acceleration response. Again, pedal on the right, pedal on the left. The results of this are also the core inputs to some of our global greenhouse gas and CO2 strategy tools. I already mentioned the global users. And then, just to give an idea of the scope and scale of the tool, it's somewhere around 200,000 unique Simulink blocks and 200 or more subsystems.

    And then, for a typical simulation--so this is like an FTP or an MCT cycle--there are somewhere around 500 discrete inputs, either tables or values, that the user has to provide or that we have to provide for the user. We get about 250 output signals and 270 calculated metrics out of the tool for a typical run. So one of the things that we get asked on a regular basis is, OK, so why an in-house tool?

    You also saw some examples earlier of Stellantis and Rivian doing in-house development as well. So why don't we use one of the prepackaged, commercial, off-the-shelf tools? There are companies that do this. They do it really well. They've got very capable tools.

    We stepped back and again looked at the strategic advantages. So we think there's obviously some benefit to having control over your own destiny. We can shape and build the tool as we see fit to meet our business needs. Speed--if there's a new technology that somebody says, hey, we want to take a look at this, we can sit one of our developers down with the technology initiator and really work up a model in the next hours, or days, or weeks. So we've got a very fast response time--flexibility to build what is most appropriate for us at any given time to meet our business needs.

    There is definitely an aspect of confidentiality. By keeping a tool in house, you have a little bit more--again, it goes to control, but you can control what goes out into the public domain and what you keep a little bit closer to the vest. History--so I started off talking about the past 25 years. We've had very talented people working on this tool, and the behavioral models, and how we manage things for decades. All of that history is now embedded into this VERDE toolchain, and stepping away from that would be a concern.

    And then, to me, maybe one of the most important aspects about doing this in house is the learning aspect. One of the best ways to really understand the physics and the controls that go into a given system is to develop the tool. We see this over and over in our development communities. People love developing the tools because that's how they learn. If we pushed this outside of the company and had somebody else do it, we'd run the risk of missing out on that key learning that by being the tool developers we get to take advantage of.

    OK. So that's my part with the introduction. Now, I'm going to hand it over to Anamitra, and he's going to go through some of the tool details and execution.

    VERDE has been around for a while, and for most of that time we have been using a legacy GUI. If you are wondering what it looks like, it's the blue screenshot shown on the screen. It's a MATLAB-based toolbox, but we have used it for a while, so it's an old legacy tool. It has a fairly complex navigation system, and it is a little bit difficult for users to onboard because of those intricacies.

    And then we found it a little bit difficult to add advanced features. For all these reasons and a few more, we moved over to a new MATLAB App Designer-based GUI. It's a modern GUI with many advanced features. It's easy to navigate, and we can very easily integrate additional features, like the database integration, which I'm going to show in later slides.

    So what makes this new GUI great? So we have several user-friendly features. First, we have a ribbon with buttons that helps users to go to one place and access all the features from that location. We have intuitive navigation. How do we do that? We have multilayer tabs. We have dynamic tree options.

    We have interactive system graphics, so we can click on the icons and go to different parts of the model. We also have tree nodes that we can right-click to select different fidelity levels from the available options. Then we have, of course, parameter selection, which can come from the user or from the database.
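
    As a rough illustration of the right-click fidelity selection described here, the sketch below builds a small programmatic uifigure with a tree and a context menu. It is not the VERDE GUI; the subsystem names and fidelity options are hypothetical, and the callbacks just echo the selection.

```matlab
% Minimal programmatic sketch of a tree with right-click fidelity options
% (illustrative only; App Designer would generate equivalent components).
fig  = uifigure('Name', 'Fidelity selection sketch');
tree = uitree(fig, 'Position', [20 20 220 200]);

duNode   = uitreenode(tree, 'Text', 'Drive Unit');            % hypothetical nodes
battNode = uitreenode(tree, 'Text', 'High-Voltage Battery');

% Context menu offering hypothetical fidelity choices
cm = uicontextmenu(fig);
uimenu(cm, 'Text', 'Map-based (low fidelity)', ...
    'MenuSelectedFcn', @(src, evt) fprintf('Selected fidelity: map-based\n'));
uimenu(cm, 'Text', 'Physical (high fidelity)', ...
    'MenuSelectedFcn', @(src, evt) fprintf('Selected fidelity: physical\n'));

duNode.ContextMenu   = cm;   % right-clicking a node shows the menu
battNode.ContextMenu = cm;
```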

    Next is the model. The GUI interacts with the model and autoconfigures it for running simulations and providing results. We have several key features. We have Simulink variants. We have referenced subsystems and models. The example here is a referenced subsystem.

    And then we have a bus architecture; there are many examples throughout the model. We also have the Simscape toolbox integrated throughout the model.
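
    For readers less familiar with these Simulink mechanisms, here is a hedged sketch of how variant controls and a bus object are typically defined in MATLAB. The architecture coding, variable names, and bus elements are assumptions for illustration, not VERDE's actual definitions.

```matlab
% Variant controls: a workspace variable picks the architecture, and each
% Variant Subsystem choice is tied to one of these conditions.
PROP_ARCH = 2;                                   % 1 = ICE, 2 = BEV, 3 = hybrid (assumed coding)
V_ICE    = Simulink.Variant('PROP_ARCH == 1');
V_BEV    = Simulink.Variant('PROP_ARCH == 2');
V_HYBRID = Simulink.Variant('PROP_ARCH == 3');

% Bus object: a shared interface definition used on bus ports between subsystems.
elems(1) = Simulink.BusElement;  elems(1).Name = 'MotorTorque_Nm';
elems(2) = Simulink.BusElement;  elems(2).Name = 'MotorSpeed_radps';
elems(3) = Simulink.BusElement;  elems(3).Name = 'BatteryPower_kW';
PropulsionBus = Simulink.Bus;
PropulsionBus.Elements = elems;
```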

    It's a fairly complex model, with about 2,000 parameters and 100 variants throughout the model. Alongside this main GUI and model, we have some supporting tools. These are the pre-processing tools I'm talking about here.

    So first we have a tool to compare and edit different models. We have a tool to select and create DOEs--some predefined DOE options are in the tool, and the user can select them and configure DOEs. We have a tool to manage input data from the database; the database is another topic I'm going to cover in the next few slides. And then we have a tool to submit jobs to and retrieve them from the high-performance computing cluster.
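
    As a hedged sketch of what submitting runs to an HPC cluster can look like with MathWorks tooling, the snippet below uses batchsim with a Parallel Computing Toolbox cluster profile. The profile name, model name, and parameter are hypothetical; the actual VERDE submission tool wraps this kind of workflow in its own GUI.

```matlab
% Submit a simulation to a cluster and retrieve the results later
% (cluster profile 'gmHPC' and model 'verde_model' are assumed names).
clust = parcluster('gmHPC');

in = Simulink.SimulationInput('verde_model');
in = in.setVariable('VehicleMass_kg', 2200);   % example parameter override

job = batchsim(clust, in, 'Pool', 4);          % submit the job to the cluster
wait(job);                                     % ...come back when it is done...
out = fetchOutputs(job);                       % retrieve Simulink.SimulationOutput
```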

    Next is the post-processing. Once the model has run and results are available, we can use these tools to post-process the results and generate reporting. The tools listed here are part of another tool called the plotter. The plotter itself can plot time history data. It can also import data from other sources and export MATLAB-native data into other formats.
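
    A minimal sketch of that kind of post-processing, assuming a Simulink.SimulationOutput variable named out with a logged signal called VehSpd (a hypothetical name), might look like this: pull the time history, plot it, and export it to a format other tools can read.

```matlab
% Extract a logged signal, plot the time history, and export it.
ts = out.logsout.get('VehSpd').Values;          % timeseries of vehicle speed

figure;
plot(ts.Time, ts.Data);
xlabel('Time (s)'); ylabel('Vehicle speed');

% Export to CSV so non-MATLAB tools can consume the trace
tt = timetable(seconds(ts.Time), ts.Data, 'VariableNames', {'VehSpd'});
writetimetable(tt, 'vehspd_trace.csv');
```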

    As an example from that list of tools, we have the energy balance analysis tool, which can be used from the plotter. Then, once we have model results available, we can also use an automated model correlation tool to compare model results with test data. With some predefined metrics, we can perform analysis and see how well the model has predicted compared to test data.
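
    The snippet below is a generic stand-in for that kind of correlation scoring, not GM's predefined metrics: it resamples a simulated speed trace onto the test time base and computes RMSE, signed bias, and R-squared. The trace variables are assumed inputs.

```matlab
% tTest/vTest and tModel/vModel are assumed time and speed vectors from a
% test and the matching simulation.
vModelOnTest = interp1(tModel, vModel, tTest, 'linear', 'extrap');  % common time base

err   = vModelOnTest - vTest;
rmse  = sqrt(mean(err.^2));                       % overall trace error
bias  = mean(err);                                % signed error (optimistic vs. pessimistic)
ssRes = sum(err.^2);
ssTot = sum((vTest - mean(vTest)).^2);
r2    = 1 - ssRes/ssTot;                          % coefficient of determination

fprintf('RMSE = %.3f, bias = %.3f, R^2 = %.4f\n', rmse, bias, r2);
```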

    Next is executing a VERDE model. With all these tools, you may be wondering how a typical user interacts with this model. The steps are pretty much the same for EV, ICE, hybrid, fuel cell--pretty much all the General Motors platforms. The process starts with a model definition file. We can update the architecture and import data from the database. We can have updated controls.

    We can update simulation settings, meaning which profiles you are running. If you have co-simulation, you can integrate it at that point. With all that, we can run the model. That can be done in a PC or HPC environment. And then we can use the post-processing tools that I just showed to perform analytics and reporting.

    And the last step is the closed-loop learning. Here we can get learning from the users and improve the model for future users. For all these steps, we have integrated learning and training. This is facilitated by our extended team. Some of the members are sitting here in the audience.

    Then we have the model predictions. So how do we use the model predictions? Here we are talking about standalone uses for energy users--things like ICE fuel economy prediction and our newer EV range prediction. The other part, alongside range and fuel economy, is performance: things like IVM-to-VMAX, max vehicle speed, max grade capability, et cetera.

    Then we have a drive quality tool integrated with the model and our toolchain as well. And we can also generate profiles from VERDE that can be used for other subsystem analyses, meaning durability assessment, NVH assessment, et cetera. Then we have extended tool use with tool connections. Here we are talking about things like thermal models integrated with VERDE, mostly GT thermal models for ICE or EV applications.

    And we can also run the model in a driver-in-the-loop scenario, where we can use the model for NVH assessment. The next two are driveline dynamics and lap time simulation. Here we are integrating with other toolchains already available in GM. Lap time simulation helps us find out how much time the vehicle takes to go around a lap, with or without thermal models integrated.

    Now we are in the next section, which is about VERDE features and use cases. I'm going to talk about a few useful features of VERDE, and then show a few use cases. As a recap: we have a semicoupled tool and GUI, it's a flexible tool, and we integrate co-simulation into the environment.

    We have a highly modular tool that can support different architectures. We can run many iterations using HPC. We also have a database integrated into the tool.

    So with that, I'll go to the first section, which is the semicoupled tool and GUI. We've already seen that VERDE can generate a model setup file. You can send it to the VERDE model, which you can run on a PC or on the HPC. But then we have these other tools, the DOE setup tools and batch processing tools. These are used to generate multiple model files, maybe with one or two clicks, and then we can send those to the model, and it can be run on the HPC using the tools that I discussed before. We can post-process, generate results, and provide them to users.
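
    As an illustration of what a programmatic DOE sweep can look like in this environment, the sketch below builds one Simulink.SimulationInput per design point and runs them with parsim. The model name, swept variable, and logged metric are hypothetical; the actual DOE setup tool generates these configurations from the GUI.

```matlab
% DOE sweep: one SimulationInput per design point, run in parallel.
ratios = 8.5:0.5:11.0;                               % hypothetical design points
in(1:numel(ratios)) = Simulink.SimulationInput('verde_model');
for k = 1:numel(ratios)
    in(k) = in(k).setVariable('FinalDriveRatio', ratios(k));
end

out = parsim(in, 'ShowProgress', 'on');              % one run per design point

% Collect a hypothetical logged energy metric from each run
energy = arrayfun(@(o) o.logsout.get('EnergyUsed_kWh').Values.Data(end), out);
table(ratios(:), energy(:), 'VariableNames', {'FinalDriveRatio', 'EnergyUsed_kWh'})
```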

    Next is the co-simulation. Here we are talking about commercial off-the-shelf tools. VERDE is at the center, and around it are all the tools that we are integrating with VERDE. The idea is not to recreate all of this in VERDE but to use whatever is already available in the GM toolchain and then integrate it with VERDE.

    So it supports direct co-simulation for thermal systems for EV and ICE, and for emissions and a few other things for ICE. And then we have lap time simulation and vehicle dynamics simulation also integrated with VERDE. We support virtual HIL. This is for production controls integration with VERDE, using a user-defined interface rather than the VERDE interface.

    We also support FMI 2.0--for example, GT thermal models, AMESIM models, and high-voltage battery models. On the other side, we are also working with MathWorks to convert some of our models into the FMI framework. And then we have the standalone use, where we take some parts of VERDE and convert them for use in other systems at GM. Examples are vehicle dynamics simulation, lap time simulation, et cetera. Then we have the ability to run the model in real time, which we use for driver-in-the-loop NVH simulation.

    Next is the database. Traditionally, VERDE has been used with an individual user-managed data file that is locally stored. The database fixes a lot of the issues with that. It's called the INSPIRE database, which stands for Innovative Simulation Parameter Inputs and Results Environment. It's a relational database.

    We can store all the data that's used to parameterize the model in this database, and we can retrieve data in sets or groups to load into the VERDE GUI. It allows us to automate, and then we can do different data analytics on the data stored in the database, as shown in this picture.
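
    A hedged sketch of that retrieval step, using Database Toolbox against a generic relational source, is shown below. The data source name, table, and column names are assumptions; the actual INSPIRE schema and access layer are GM-internal.

```matlab
% Pull a parameter set from a relational database and load it for the model.
conn = database('inspireDSN', 'username', 'password');   % assumed ODBC/JDBC data source

sqlquery = ['SELECT ParamName, ParamValue FROM vehicle_parameters ' ...
            'WHERE VehicleID = ''EV_PROGRAM_X'''];        % hypothetical table and key
params = fetch(conn, sqlquery);                           % returns a table

% Push the retrieved parameter set into the base workspace for the model
for k = 1:height(params)
    assignin('base', params.ParamName{k}, params.ParamValue(k));
end
close(conn);
```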

    And then flexibility. Hopefully this is a very simple slide. We have the different architectures on the left-hand side. And then, out of the three options, we can select different fidelity models, like different models for the drive unit and different models for the high-voltage battery. And then we can select different drive scenarios; listed here are fuel economy-related drive cycles, performance, and generic drive cycles.

    The same works for the controls and calibration. And then we have different options available for co-simulation as well. These could be one or multiple simulations that we can call from VERDE.

    So with all that, how do you use all this flexible architecture in a typical scenario? This is an example of how we use different fidelity models. On the x-axis, we have program timing. On the y-axis, we have the different fidelity models. This is an EV example. In the program framing stage, we have simple R constants or time histories as inputs. And as we go through refinement, we use co-simulation.

    This is an example of thermal co-simulation predicting some thermal profiles. And then, closer to production, we can go to even higher fidelity models, integrating the production controls with the model and producing results that are closer to test data. The next example is AMESIM. This is, again, the use of a higher-fidelity model co-simulating with VERDE.

    At the center, again, is VERDE. We have AMESIM models that house higher-fidelity vehicle models and higher-fidelity drivetrain models. These interact with the VERDE model using these signals. We also have a GT thermal model integrated with the toolchain. These signals are exchanged between the three models, completing a loop that sends signals back to VERDE. And we can use this setup to run any drive scenario, provide results, and do analytics using all the tools that I have shown before.

    And the last example is the NVH simulator. We are still talking about VERDE, not a gaming console, even though the video on the left-hand side looks like one. We are still running the VERDE model, using NVH simulator software.

    In this case, an actual pedal and brakes are used to control the VERDE model. On the right side is a similar, slightly simpler video, where software sliders are used to control the VERDE model. Also in the loop, we have a connection to the hardware using a PEAK CAN bus, which connects to both pieces of hardware to produce noise synthesis analysis.
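
    For orientation, a minimal Vehicle Network Toolbox sketch of writing a value onto a PEAK CAN channel is shown below. The channel name, message ID, and scaling are assumptions; the real driver-in-the-loop setup uses GM's own signal definitions and runs against the real-time model.

```matlab
% Stream a single value onto a PEAK CAN channel (illustrative only).
ch = canChannel('PEAK-System', 'PCAN_USBBUS1');   % assumed device/channel name
start(ch);

msg = canMessage(500, false, 2);                  % hypothetical ID, standard frame, 2 bytes
motorSpeedRPM = 3500;                             % value coming from the running model
msg.Data = typecast(uint16(motorSpeedRPM), 'uint8');
transmit(ch, msg);

stop(ch);
```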

    With that, I'm going to hand it back to Nate, who is going to cover the rest of the presentation.

    So, closed-loop learning. How do we know that the VERDE model is doing what we want it to do and what we expect it to do? We want to look at both metrics and behaviors, as I mentioned before. We want to make sure that generating these metrics and behaviors is easy and repeatable for the users, and then use that feedback to highlight where we need to change and where we need to grow to make our toolchain better.

    So, relative to how good we are from a metrics standpoint, this is just one example of BEV range analysis, looking at the vehicles that we've put out over the last few years and comparing our final model to the average of the tests that go into dictating our final label. And what we found is that the R-squared on that is 0.998, and we're about 0.4% pessimistic. Honestly, if you have the choice between your model being a little bit pessimistic versus a little bit optimistic on range, and you're not right on zero, you want to lean a little bit pessimistic.

    I also talked about the behaviors. So we've developed a system where we take the time history traces and we look at what the model says, what the test says, how close we expect those to be, and objectively score those alignments. And we use those to generate a report that tells us, OK, how good is the model relative to aligning with the test?

    And what we've seen is actually pretty insightful: if we take a given test and put the driver trace from that test into our model, along with a couple of other test-specific items, we actually get better alignment between the model and the test, in this quantitative method, than one test compares to another test of that same vehicle. So with the right inputs, we would like to claim we're better than test-to-test variation.

    OK. So did we make the impact with VERDE that we set out to make? Anamitra led you through a lot of the specifics on the details and the features. And as we cycle back and look at this intro slide of what we set out to do--the things we've done for speed, for democratization, for co-simulation, for data management--we think we're checking a lot of the boxes.

    The other thing that I mentioned all the way at the beginning was our user community. So have we seen any change there? We actually did.

    And again, I said we were running somewhere around 400 users, and that number had been declining for the past eight to ten years. We did see a nice spike when we launched VERDE, got this new toolchain out, and started embracing a lot of the co-simulation activity. And it wasn't a small change--this was a 40% increase in the users utilizing this toolchain.

    So, some of the lessons learned for other teams--OK, what have we learned? Why does this work so well for us? The first is to have a core team of highly capable tool developers who thrive in this space. They don't just like energy modeling, they thrive on it. They love it. This is what they want to do.

    We've aligned on clearly defined and consistent tool releases. Every March 1 and every September 1, the whole user community and the whole developer team know we're going to be putting out a beta-tested suite of tools, and it's going to be the same time every year. We follow that up with what we call a what's-new event that rolls out to the users what's changed. We support the core tool with a whole bunch of other features: pre- and post-processing, automated report generation, and then, again, a bunch of user support tools that help build efficiency.

    Welcome and embrace cross-domain collaboration--this is sometimes harder than it should be, right? Reaching across and saying, hey, we want to work with the other domains and make sure that we're all in an integrated toolchain. It's a lot easier to put on the blinders and focus on your own space. We're making a very concerted effort to not do that and to make sure that we're connecting with others.

    Provide the training, documentation, resources, and support, including a help desk. So anybody, anywhere in the world, who's running one of these models can IM or pick up a phone, talk to one of our team, and get pretty immediate help. And then the other thing to mention is reaching out to the MathWorks. So when we started doing--

    [APPLAUSE, LAUGHTER]

    Thanks, Cody. So when we started doing more with variants and App Designer and building up this large model--or evolving this large model--we reached out to Cody, and Brad, and the MathWorks technical team. Brad's going to be talking about large models in a couple of presentations; he's been a big help with our journey here. OK. So where are we headed next?

    Really, to some extent, it's more of the same, right? Continue to get the varying fidelity levels and controls integration. Take the database that we're launching and have connected to our toolchain and start more aggressively pulling information out of it--mining the data and understanding where our strengths and weaknesses are and how different components relate to each other. Continue to improve the model accuracy and precision. Can we take that 0.4% pessimistic and make it 0.04% pessimistic?

    Keep going with co-simulation. Do more in that space. That's where we see a lot of the future growth, and that's probably where a lot of that spike in VERDE users is coming from. And with co-simulation and those more complex models, how do we make them run faster, and how do we build them faster? How do we set up our teams to operate more efficiently together?

    Drive cross-enterprise alignment. With co-simulation, you need to have all your teams operating on the same cadence, working in a similar or the same ecosystem, so we're doing a lot of work there. And then, integrate new functionality into the VERDE ecosystem. We're looking at how to do more in the durability modeling space. And then, VERDE is a forward-driven model--how do we accelerate some of our architectural studies by incorporating some backward-facing modeling so that we can run even more studies faster?

    All right. With that, I want to say thank you to the GM team that helped with developing the model and with putting this presentation together, and also to the MathWorks team for the support and the feedback on the presentation as we've prepped it.

    Thank you.

    [APPLAUSE]
