Online Panel Discussion: Agile Vehicle Software Development and Effective Integration of Models
Overview
Join engineering leaders and experts from Ford, Continental, Volvo Cars, and MathWorks for an online discussion on the use of Agile, continuous integration, virtualization, and models for accelerating the development and integration of complex software functions.
About the Presenters
Nate Rolfes, Ford Motor Company
Nate Rolfes is the Modeling, Software, and Hardware in-the-loop (MiL, SiL, & HiL) simulation technical expert for the Driver Assistance Technology Vehicle Systems Integration department (DAT VSI) at Ford Motor Company. He leads the model-based systems design and development of DAT features and provides training and technical guidance on usage and deployment of model-based tools and methods within Ford. He has over 15 years of experience using model-based methods to develop and test distributed software control systems to bring features such as Active Park Assist and Trailer Backup Assist to production. He graduated from Carnegie Mellon University with a B.S. in mechanical engineering and is currently pursuing a master's degree in Product Development from the University of Detroit Mercy.
Martin Römpert, Continental Automotive
Martin Römpert holds the title of “Expert Model-Based Development” in the Autonomous Mobility and Safety division at Continental Automotive in Frankfurt, Germany. In this role, Martin advises various Continental teams on best practices for software development using Model-Based Development. Prior to working at Continental, Martin was a Team Manager at ITK Engineering AG. Martin has an Electrical Engineering degree from TU Karlsruhe.
Jonn Lanz, Volvo Cars
Jonn Lanz is a Technical Leader in Agile Software Development at Volvo Cars R&D. He works on the new generation of electric cars, introducing much more software, connectivity, and automation into this robotic industry. His work focuses on transforming the architecture to support over-the-air software updates and continuous improvement of customers' vehicles, along with side projects such as experimental A/B testing pilots and engagement in Volvo Cars' external collaborations. Earlier engagements at Volvo Cars include development of automated integration of Simulink-generated code in AUTOSAR ECUs, as well as testing and test automation, especially for the electric propulsion systems developed over the past 10 years.
Jim Ross, MathWorks
Jim Ross is a Senior Principal Technical Consultant at MathWorks. Jim works with customers to address embedded controls development challenges and implement Model-Based Design. With over 25 years of industry experience, Jim designed air system and emission controls for John Deere diesel engines before leading the adoption of Model-Based Design, first for the Engine Controls group and then across the entire Enterprise. Prior to retiring from John Deere in 2020, Jim led a Virtual Design Verification team responsible for supporting process and tools related to Model-Based Design, SIL testing and HIL testing across the Enterprise. Jim received his B.S. and M.S. in electrical engineering as well as an M.S. in aerospace engineering, all from the University of Illinois at Urbana-Champaign.
Vinod Reddy, MathWorks (Moderator)
Vinod Reddy is a consulting services manager specializing in enterprise deployment and optimization of Model-Based Design and principal author of the Model-Based Design Maturity Framework™. He works with global companies in a wide range of industries, including aerospace, defense, and automotive, to implement Model-Based Design and improve their development processes. Vinod has over 25 years of experience in designing embedded products in the telecommunications and automotive industries. His experience includes extensive work in control system design, embedded systems, and code generation tools.
Recorded: 19 Oct 2021
I'm honored to be running this panel today. Our panelists are visionaries and thought leaders in their respective organizations. They have practical experience implementing these methods from end to end, including tools, and deploying these solutions across the enterprise globally. In this session, our panelists will share what they've learned, give you the best practices, and answer your questions.
So now we will move on to meeting our esteemed panel members, starting with Nate. Nate Rolfes is the modeling, software, and hardware-in-the-loop simulation technical expert for the driver assistance technology vehicle systems integration group at Ford Motor Company. He leads the model-based systems design and development of driver assistance technology features and provides training and technical guidance on usage and deployment of tools and methods within Ford. He has over 15 years of experience using model-based methods to develop and test distributed software control systems such as active park assist and trailer backup assist. He graduated from Carnegie Mellon University with a bachelor's in mechanical engineering, and is currently pursuing a master's degree in product development from the University of Detroit Mercy.
Nate, welcome to the panel.
Thank you. Honored to be here.
Great. So Nate, a quick question for you. You've been using agile methods for a while now. You gave a presentation at MathWorks Automotive Conference in 2020 on model based agility. Can you briefly mention what you've been up to since then?
Yeah. I think some of the things I want to talk about are how we take agility and use it for simulation purposes. And though with agility we often instantly think of CI and things like that, I think there are a lot of tactics we can use for modeling and simulation with various fidelities and models that I'd like to talk about a little more.
All right. Thank you, Nate. Welcome to the panel, again.
Next, I'd like to introduce Jonn Lanz. Jonn Lanz is the technical leader of agile software development at Volvo Cars R&D. He's working on new generation electric cars, introducing a lot more software, connectivity, and automation. His work is focused on transforming the architecture to support over-the-air software updates and continuous improvement of vehicles. He also works on many other side projects internally and externally with Volvo's partners. His prior work at Volvo Cars includes development of automated integration of Simulink generated code in AUTOSAR ECUs, and testing and test automation for electric propulsion systems, over the past 10 years.
Jonn, welcome, and glad to have you on the panel.
Thank you. Also honored to be here.
OK, Jonn, I have a quick question for you. So with the car industry shifting to electrification and autonomy, it is rapidly transforming from a mechatronic towards a digitally defined industry. Jonn, how is this going at Volvo Cars?
I think the trick there is to remember that the car is still mechatronic. And we have to treat these as layers. We cannot reinvent the whole world. We need to navigate here, based on knowledge. And we have to be smart. So this is a new domain. Not many companies have done this before. We learn a lot now in a very short time, but it is hard.
Thank you, Jonn. We're looking forward to learning more when you present.
Next, I'd like you to meet Martin Römpert. Martin holds the title of expert in model based development in the vehicle dynamics division at Continental Automotive in Frankfurt, Germany. In this role, Martin advises various Continental teams on best practices for software development using model based design. Before working at Continental, Martin was a team manager at ITK Engineering AG. Martin has an electrical engineering degree from TU Karlsruhe.
Martin, pleased to have you on the panel.
Yeah. Thank you for the introduction. I'm happy to be here.
Excellent. So I also have a question for you. Continuous integration and continuous delivery is commonly used to test models or code versus requirements. However, you think it shouldn't be limited to just that. Can you briefly tell us why?
Yeah. From my point of view, CI/CD should be extended to SIL, PIL, and HIL tests. ISO 26262 strongly recommends these tests, as does the reference workflow of the IEC Certification Kit. At Continental, we use an RX PIL target developed together with MathWorks to improve the code quality of our brake systems. The PIL tests are handled the same way in Jenkins and CI/CD as the MIL and SIL tests. I'll provide more details about our CI/CD pipeline in my presentation.
OK. Thank you, Martin. Welcome to the panel.
Next, we'll meet Jim Ross. Jim Ross is a senior principal technical consultant at MathWorks. Jim works with customers to address embedded controls development challenges and implement model based design. With over 25 years of industry experience, Jim designed air system and emission controls for John Deere diesel engines before leading the adoption of model based design, first for the engine controls group, and then across the entire enterprise. Prior to retiring from John Deere, Jim led a virtual design verification team responsible for supporting processes and tools related to model based design, SIL testing, and HIL testing across the enterprise. Jim received a bachelor's and master's in electrical engineering, as well as a master's in aerospace engineering, from the University of Illinois at Urbana-Champaign.
Jim, welcome to the panel.
Thanks for having me, Vinod.
All right, Jim, so just like our other panelists, I have a question for you before we move on to the presentations. So you've seen agile methodologies in practice at many companies, and you have personally implemented one. Agile defines a two-week time frame for a sprint. So the question to you is, does MBD readily work with agile to support this time frame?
I've often been asked how to do model based design in an agile framework, but I think that's the wrong question. Our goal should be agile model based design. Like any complex system, optimizing the parts of this process doesn't optimize the whole process. And so we need to adapt both agile and model based design so that they work together seamlessly.
I'm going to talk about dispelling a couple of myths that cause resistance to agile model based design. One is just that, that I can't simulate the system in a two week sprint, as you mentioned. And the other is that my demonstrations at the end of a sprint must be done on vehicle.
Thank you, Jim. Looking forward to that talk, there. So now, I guess we'll move on to the next segment where we will have each panelist present a short talk. After that, we will move on to the Q&A session. So we'll start off with Nate Rolfes. Nate, the virtual floor is yours.
All right. Thank you, Bernard. So again, thanks, everyone. I wanted to start with a question today, or maybe a problem statement, to kick this off. We all have complex systems to build, many of us doing complex software systems. How do we simulate them with agility? I spend a lot of time at Ford dealing with this question. And I've selected some quotes here that I like to use as heuristics to address the mindset we need for this simulate with agility goal.
First is, "Furious activity is no substitute for understanding." Even worse than modeling not enough, I think, is over modeling. I've seen many teams build large, complicated models, and it seems the goal has kind of been forgotten, or they're just building the model for model's sake. We need to think modeling just enough for what we need with agility. Make sure you understand what's in the model and what it represents. Oftentimes, it takes much more effort to build an elegant, smaller model that achieves its purpose than it is to just build a giant kitchen sink model that no one really understands.
The second is, "We aim to make mistakes faster than anyone else." Simulation allows us to fail fast. We should use this for our advantage. Again, your model can't be so bulky that it takes dozens of minutes to compile and runs at fractions of real time speeds to achieve this goal. It kind of defeats the purpose. So I like to build small, lightweight models that we can learn and prove-- fail fast, learn fast, and prove fast on.
The third point is, "When you're curious, you find lots of interesting things to do." I believe exploratory testing is a key aspect of failing fast, and also discovering the emergent behaviors in our simulations and our systems. I'd rather simulate to discover what I don't know, than just write a bunch of test vectors to check that my model does what I already do know. So I'm a big fan of human or tester-in-the-loop simulations, or simulators where you get to try and break the systems by trying different things, very much like how we do it in vehicles, traditionally. I like to try and recreate that environment in a model.
So how do we do this, how do we apply this in the automotive domain? I have really four quadrants I look at here. The first quadrant is really training and accessibility. I think enterprises often significantly underestimate the true cost of training, especially in the highly technical tools that we're using for modeling. So I'm a big fan of the MATLAB and Simulink tools, because there's such a huge global user base. They have a lot of really good training opportunities, onramps, things like that. You can pretty much Google anything that you don't know. I do that a lot.
And the other part of it is making sure your enterprise focuses on floating licenses over node-locked licenses. I can't tell you how many times we get stuck because we've only got three licenses of a certain software, and those three licenses are locked to three engineers, and one of them left, and now we've got to figure out how we can move on. So really, floating licenses so anyone can use it, and you can collaborate together, which kind of leads me into my second quadrant about decentralizing and inter-sourcing the development operations.
While you can't decentralize everything in a large enterprise, I think modeling and the development operations around modeling is a great place to start. For example here, centralization can be a single failure point. And I can't tell you the amount of times I've given a modeling project to an engineer and they get pulled off onto another project, and before I know it, it's three or four weeks on something I thought was going to be one week.
I get some friction with engineers when I do this strategy, but a lot of times, I'll give the same modeling project to two or three different engineers. And they get frustrated because they want to be the one to develop it and do it and deliver it. But then that protects me against the single point failure. And what I actually find out, is I get two or three different answers to the same problem that are completely different. And we can end up combining those to get a superior solution.
That goes along with Linus's law, which I'm a big believer in: "Given enough eyeballs, all bugs are shallow." Spreading out your problems to your development community and your DevOps in a decentralized manner allows a free market of ideas to find the best solution that you can implement. So build strong DevOps communities across organizational hierarchies, rather than being dependent upon whatever hierarchy you're in.
The third part is a framework for modeling levels of abstraction and fidelity. I would say this is our biggest bottleneck for agility right now; there's lots of road construction going on here, I like to say. The idea is that models are not just a binary yes or no; they're a spectrum of fidelities. And because software is invisible, model complexity is often invisible.
So I like to use mechanical CAD as an analogy to help people understand this point, where the levels we've arrived at are skeleton, behavioral, requirement, and implementation. And really, your skeleton is kind of like your hard points in a CAD model. Your behavioral model may be where you start to put in bushing rates, and you can do some vehicle dynamics curves and things like that. Your requirement model is really to a point where you can actually build a part based off requirements. You could give it to three or four different suppliers, and they're pretty much going to build that same part that does the same thing.
Whereas your implementation model-- a lot of people think of an implementation model as still kind of modeling. I almost think of it as a production part. So at that point, your implementation model is really your production code model that you can just go produce a software part from. And just because it's an invisible software part doesn't make it any less of a part. So I like to emphasize usage of the three prior models before that.
Lastly, automating the virtual builds. People typically equate agile with CI, Jenkins, this type of stuff. This is important, but it's only a piece of the pie. As you can see, it's just one piece of my quadrants here. My team and I did a presentation on this at the MathWorks Automotive Conference in 2020. So if you want to know more about that, it's pretty in-depth. I won't go into it too much here, but automating things using Jenkins and the MathWorks tool set is a key piece and something we've done a lot of. As I said, I'll save that for the already-done presentation.
But with that, I think I can go ahead and hand it off to the next panelist, because I think I covered the area I wanted to hit.
All right, Nate. Thank you so much. That was really an excellent insight into how you are applying agile methods at Ford. I guess my favorite quote is, "We aim to make mistakes faster than anyone else." I love that.
So Nate, I have a question and then a follow-up. My first one is, I agree with the very important point you made about having the right fidelity of models to enable an agile framework. So can you share your experience regarding the process you've created internally to ensure this gets done effectively within an organization? I'm not talking about one or two individuals doing it, but is the whole organization participating in creating these right-fidelity models?
Yeah. It is a challenge. It's not easy. And it involves really building awareness over time and building up that DevOps community of decentralized folks. If I can focus in the lower third quadrant on my slide here, you see the two under-construction areas.
So one of the areas we're really looking at is we have modeling teams that build SysML models. And the SysML models have some pretty good behavior content. I really like Stateflow, and they have state machines in SysML as well. And we find that the state machine translation from SysML to Stateflow is probably the cleanest translation we can have. So we're really focused on that area: rather than trying to build our behavior models from scratch, we utilize some of the SysML artifacts to give us a head start. So that's one area where we're really focused on trying to improve things to help us with our agility.
The other area is, you'll see the other lower arrow that's under construction: we have a lot of what we call implementation models that are capable of building our production code. But the problem with these models is, if you put five or six of them together into a big system simulation, it could easily take 40 or 50 minutes to compile that model, and it runs at super slow speeds. So we can't plug that into a hardware-in-the-loop or anything like that, because you need real-time speeds to run hardware-in-the-loop.
So there's a challenge and we've been successful here in a couple of places where we've been able to take these really heavy models and, I call it deproduction coding them, where we take out a lot of the data typing blocks and a lot of, I call it, the production code baggage that isn't really needed for just system level simulation. And we've found we've been able to take these same models that take dozens of minutes to compile and get them to compile in less than a minute with sim speeds that are fast, 10x real time. And what that allows us to do now is take these models and simulate them. Now, we can't production code them anymore, but we get the exact same behaviors they had in their production form.
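To make the idea concrete, here is a minimal MATLAB sketch of one way such a cleanup could be scripted. The model name is hypothetical and this is not Ford's actual de-production-coding tooling; it simply comments Data Type Conversion blocks "through" so the casts drop out of the compiled simulation while the signals still pass.

% Hypothetical model name; an illustrative sketch, not Ford's process.
mdl = 'FeatureImplementationModel';
load_system(mdl);

% Find the data-type conversion blocks that exist mainly for production code
% generation and comment them "through": the signal passes untouched, but the
% block no longer takes part in compilation or simulation.
dtcBlocks = find_system(mdl, 'LookUnderMasks', 'all', 'BlockType', 'DataTypeConversion');
for k = 1:numel(dtcBlocks)
    set_param(dtcBlocks{k}, 'Commented', 'through');
end

% Save the lightweight variant under a new name so the production model is untouched.
save_system(mdl, [mdl '_simOnly']);

Because removing casts can change numeric results, a comparison run against the original model would still be needed before trusting the lightweight copy.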
So these two strategies are areas we're really focused on right now. And as you can see, we're really primed on that behavioral model category because we think we skip that step a lot of times, and we go straight to the requirement or even implementation model level. And so building up those behavior models is an area we really think we can get over our agility bottleneck in simulation.
Thanks for the details, Nate. So one quick follow-up question. Now that you have the right-fidelity models, there are many of them. So how do you ensure people actually find the right one and pick the right one for the right purpose?
Yeah, that's been a challenge as well, because we use GitHub for most of our work. People are constantly checking into GitHub and making branches. And I have to admit, we do end up with what I think most people would be familiar with: the GitHub junkyard of branches, where you've got 20, 30 branches, and you've got to go back and clean them.
And one of the things we found is, originally we tried putting different levels of fidelity models on different branches. One branch would be one level of fidelity, another branch would be another level of fidelity. But there are just some logistical challenges with that, because it's not easy to just work with models in GitHub natively.
So we actually found it's easier to use Simulink variant subsystems to select the fidelity. So we'll have what I'll call a 150% model, or a universal model, where you have one model that you pull from GitHub, but it may have five or six different flavors of fidelity using a variant subsystem. And then we can quickly switch between those on the fly. So if I want to switch from my behavioral model to my requirement model, it's just a quick change in my M-script to switch the variant, rather than having to go back up to GitHub and pull it down and all that kind of stuff. So that's one area where we've figured out a flow that we like, and it really cleans up our GitHub branch junkyard, because we just have one branch then.
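As a rough illustration of that variant-switching idea (the variable names, fidelity encoding, and model name are assumptions, not Ford's setup), each choice inside a variant subsystem can carry a Simulink.Variant condition on a single control variable, and the setup script picks the fidelity:

% Variant objects referenced by the choices inside the variant subsystem.
FID_SKELETON       = Simulink.Variant('FIDELITY == 1');
FID_BEHAVIORAL     = Simulink.Variant('FIDELITY == 2');
FID_REQUIREMENT    = Simulink.Variant('FIDELITY == 3');
FID_IMPLEMENTATION = Simulink.Variant('FIDELITY == 4');

% Switching fidelity is then a one-line change in the M-script:
FIDELITY = 2;                    % pick the behavioral flavor for this run
out = sim('FeatureUnderTest');   % hypothetical "150% model" pulled from the single branch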
Thank you, Nate. That's an excellent tip. So I'm sure our audience will have more questions for you.
Again, this is your chance to learn more from Nate. Please submit your questions. Ask some really tough ones, if you will.
OK, next, we'll hear from Jonn about how Volvo Cars is using agile methodologies. Jonn, over to you.
Yeah. Hi, everyone. Let's start where we were before. And I salute everything that Nate said here. Very good. I fully agree.
I would lift my eyes a little bit here and look at the whole company, just to give a little different angle. Consider first these circles here. This is the R&D we had, or still have now, the way we worked or are still working. I mean, at R&D we developed cars, and then you send them out on the market. But in this world, you lose the car, you lose control of the car, as soon as it's sold. So in this way, you use classical agile development with speed. You optimize on speed. You just push out as many cars as possible on the market. You customize them for your customers so that you can sell a special car to every customer. This means that you have a tremendous amount of branches. You will have parallel projects. There is nothing like a main track here. So this is the old world.
And if we can switch-- Yeah, perfect. Thanks.
In a more modern world, and we are approaching this. We are in this world today, with two car models. And we are learning this. In this new world, you have R&D in the middle. And you have manufacturing, and you have sales, and then you have continued development of the cars out on the market. So every car in the fleet, out in the customer's garage and in their hands, is under development. So we develop these continuously. You can refer to this as continuous deployment, but you don't have to be that fast, actually. But you have to keep track of your branches. You have to keep control of your vehicles. On the other hand, you will have data coming in from every car, so you can really scale data collection here. You can be data driven.
So I mean, this is a very new world. And in this world, I would say that agile model-driven design, yeah, it's much more about C++. You write much more code. I saw the question here in the Q&A, and yeah, you need code to a great extent here, but you also need models. And if you work more on a main track here, on the right side, then of course, you have to detail it, you need to change a little row in your code, and you need CI, and you need a good framework. You need virtual testing. Model-driven design here is now much more about testing.
So if we take, then, the last picture here, then I see this as a new stack. And the car model is in the middle, in this figure to the right. But above that car model, inside that car model, is a complex stack of software. But for each and every car model, only a few things are changed, a few modules are changed from the others. And all those modules here are tested and developed in an agile manner, but continuously over a long time. So every change is now backward compatible.
So the story I tell you now, this is much more about moving from the classical agile development to continuous integration. Continuous integration is the new agile. It's agile in a new way, because the software lives over several years. But when you find a problem with a feature, or a bug, in the customer's vehicle, you need to react fast. Bang. Solve the problem within days. This is a different way of agile. And the focus now is much more on model-based testing.
So I didn't check the time here now, but in this world now, we focus on a few variants, few models. Much more digital perfection, digital personalization, not so much customization. So I think I'm done there. I didn't check the time. Sorry.
Thank you, Jonn. Appreciate that. So it's an interesting look at how code and model based design coexist in this modern automotive software stack. So a quick question for you. In the environment that you just described, you have both models and code that will have to be integrated at various stages for simulation, testing, and deployment. So the question is, the code that you've written and the models have to be brought together at these various stages. Can you share some tips on how you do this?
Yeah, sure. First of all, everything is in Git. Very simple. Everything is controlled by Gerrit or GitLab. You have what you could call agile processes for everything. And models and code coexist. So the model is source code, but the code generated from that model, if you use code generation, should of course also be maintained and saved, in case you need it for maintenance and other purposes. But you have to treat everything as source code. That's very important. And you have to keep track of all the files. You need to learn about manifests. Control your builds. Control the CI. Put every resource you have into CI. Minimize the variants. Look at everything from the product view. Oh, there are so many things. I don't have time for everything.
All right. That's excellent. Thank you so much. Audience questions are already coming in, so stay tuned. We'll be back with you in a few minutes.
So next, we'll hear from Martin from Continental about how he implemented a complete pipeline. Martin, please go ahead.
Yeah. Thank you. So most of you may be familiar with the agile software development values. Nevertheless, I put them on the slide here. So if you have a look at these principles, you may realize that they are very abstract. And besides agile, there are many other buzzwords used in this context, like DevOps or continuous integration and continuous development.
So what's behind these buzzwords? DevOps is simply a portmanteau built out of software development and IT operations. In simple words, DevOps is a set of practices and principles in the systems development life cycle that uses agile principles. And CI/CD is a DevOps best practice that focuses on automating the whole development lifecycle pipeline. And of course, it's implemented with specific tools.
So the problem that needs to be solved to become agile is to find the right tools and to automate a pipeline that supports DevOps and the agile software development values. The root cause of the problem I just described is that, from my point of view, most of the tools that are already in place in companies are not suited as a basis for agile development. So the question is which tools to use and how to integrate them.
To get an answer to these questions, you have different possibilities. You can design the whole tool chain in a top down manner, or you can try the bottom up approach. In my role as an expert for model based design, I contributed to the development of our internal model based framework called MAMBA. And we soon realized that there are a lot of things to do for a developer to be able to provide an ACLD compatible work product at the end of a development cycle.
We tried to answer the following question: How can we make the life of a developer more comfortable? Our idea was to use our tools for software versioning, Git and GitHub, and connect them to the Jenkins automation server. We picked out one model based development project and, together with the developer, we tried to automate all these repeating tasks, most of them until now executed by the developers in a local MATLAB, Simulink, and Polyspace installation.
As you can see here, the green bubbles on top of this slide represent the stages of our Jenkins pipeline. After a developer commits a new change to the Git repository and pushes it to GitHub, all these things are automatically executed for him. Jenkins runs Model Advisor checks to ensure model quality; MIL, SIL, or even PIL tests in Simulink Test to verify the implementation of the requirements and to validate code generation as well as the target compiler and linker. It runs Design Verifier for property proving and test vector generation, and Polyspace Bug Finder and Polyspace Code Prover analysis, with upload of the results to the Polyspace Access server, to ensure MISRA compliance and the absence of runtime errors. And of course, we generate a lot of documentation and reports until we finish the pipeline.
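As a hedged sketch of what one of those stages might execute (Continental's actual scripts are not shown in the talk, and the model, test-file, and folder names here are assumptions), a Jenkins stage could call a MATLAB function along these lines:

function runVerificationTasks()
    mdl = 'BrakeComponent';    % hypothetical component model

    % Modeling-guideline checks with Model Advisor.
    ModelAdvisor.run({mdl});

    % MIL/SIL test cases authored in Simulink Test.
    sltest.testmanager.load('tests/BrakeComponent_MilSil.mldatx');
    results = sltest.testmanager.run();
    sltest.testmanager.exportResults(results, 'reports/testResults.mldatx');

    % Formal analysis with Simulink Design Verifier.
    opts = sldvoptions;
    opts.Mode = 'PropertyProving';
    sldvrun(mdl, opts);

    % Generate code, then hand it to Polyspace through the model-link
    % integration; the exact options depend on the local setup.
    slbuild(mdl);
    pslinkrun(mdl);
end

A Jenkins stage would then invoke something like matlab -batch "runVerificationTasks" and archive the generated reports.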
After the implementation of the first prototype of this pipeline, we asked other developers and small teams of developers if they want to try out what we implemented. As the pipeline was doing a lot of work for the developers, even if the pipeline was not perfect at the beginning, we had a lot of positive feedback and ideas for improvement provided by the users. Over time, the pipeline became more and more robust and features were added.
Today, all model based projects in our business unit use this pipeline. At the moment, we are integrating JFrog Artifactory to store and manage artifacts generated by Jenkins. And we plan to use Conan package manager to be able to reuse artifacts we built already. In parallel, we have a close collaboration with MathWorks to be able to provide more scalable processes and workflows, including automation for our developers in the future. Thank you.
Thank you, Martin. It's really great to see that end-to-end workflow and the entire pipeline you have built. That must have taken a lot of work and time to resolve tough issues. So I do have a question and then a follow-up. My first question for you is, I'm wondering how you actually rolled out the process and tooling to the various teams? Did you start small, maybe with a team or two, and then roll it out to others? How did this happen? Any lessons learned that you can share with us?
So as long as the users see that the pipeline helps them to do their daily work, it's really easy to pull them in one after another, one small team after another, and make them contribute to this whole solution.
OK. Thank you, Martin. So my follow up question is, we have teams that are using models and some are doing purely code. So is it possible to standardize agile processes between teams using model based design and traditional methods?
So from my point of view, you need to take care of model structures, repository structures, and the use of MATLAB projects, to be able to handle all projects using the same Jenkins pipeline. We implemented a scripted pipeline, and this pipeline lives in a separate Git repository stored on GitHub. And we are able to handle all our projects with this one pipeline.
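One way to picture that "one pipeline for all repositories" idea (the convention and file names below are illustrative assumptions, not Continental's implementation) is that every repository is a MATLAB project with an agreed entry-point script, so the shared pipeline never needs project-specific knowledge:

% The Jenkins job checks the repository out and starts MATLAB in its root.
proj = openProject(pwd);
fprintf('Running CI tasks for project "%s"\n', proj.Name);

% The shared pipeline relies only on the convention that every repository
% ships a ci_tasks.m script at its root (hypothetical name).
run(fullfile(proj.RootFolder, 'ci_tasks.m'));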
All right. Thank you, Martin. We'll come back to you again because our audiences have questions for you.
Next, we'll hear from our final presenter, Jim Ross. Jim, over to you.
Thanks, Bernard. I want to break down the two myths I mentioned earlier. I'm going to start with the first one, that you can't do system simulation in a two week sprint. And the reality is that we want to integrate our models early and unlock the value of model based design, that early and continuous validation.
So we're looking at a common workflow, one that I've seen multiple times. It's really not very different from a hand coding workflow. And the question is, is it feasible to deliver a component in two weeks? The answer is yes and no. You could probably get through all these steps and deliver a component in two weeks, but it's probably not going to meet your definition of done, because you're probably going to come back and find some issues and have to revisit it. If you step forward, the requirements issues here, which are quite common, are found late, and this results in a lot of rework. With this idea of developing and implementing the component and then validating it, you're either going to end up with multiple iterations on that component in a sprint, which is probably not very likely, or you're going to end up touching that component in multiple sprints, which is definitely not very agile.
We already heard from Nate that model based design offers a better way through simulation. And so if we step forward here and we add two steps to our process, we can now architect and integrate those empty shell models very early in the process. This allows us to rapidly iterate on architecture, then go on and develop the components. Here we can do some sunny day type development, where everything's working, and do just enough testing on the component for confidence that we're meeting our requirements that we know today. We can even collaborate across our team, distributing the work so that everyone is working on a different component. This way, the system model emerges early, and we can use simulation to iterate and refine our requirements.
We also heard about failing early. If you step forward here, this is a place where a requirement problem can be found. Here, we're doing a develop and validate cycle that's very short time span. And once we've validated our requirements, we can then implement adding diagnostics, handling failure modes, making it production ready, improving test coverage. If we step forward, we're really setting ourselves up for success here. By the time you generate code, you know it's going to work.
If we go on to the second myth, that demonstrations must be done on product, virtual demonstrations are an effective way to show progress early in the development cycle, and really throughout the development cycle. The first thing that I do want to say is that a demonstration on vehicle does seem to align with the Agile Manifesto. You'll hear the phrase "working software" multiple times as you work through the manifesto itself and the supporting principles, but working is the key qualifier, and it really isn't reasonable to expect to generate software that works from a model that doesn't work. And so we really should insist on seeing the model work first.
By doing virtual demonstration of the architecture very early, if you step forward, we can actually demonstrate that early and then integrate it into a pipeline. We can actually start a CI pipeline, if we step forward again, with an architecture test that can fail early if anybody introduces a problem downstream. By doing that very early in the development, we have that as a baseline to be sure that the work we're doing later is contributing to moving us towards a production ready state.
Now, as we develop the models and the system model emerges, we can demonstrate system simulation, and demonstrate that functionality, and validate those requirements, again, integrating it into the next stage of our pipeline so that we can continue to fail early and fast if a problem is introduced.
At this point, we have validated requirements. We can elaborate that, get it ready for production, and go on to unit testing, and demonstrate that our components meet these validated requirements. Once we've done that, again, add it to the pipeline. By this time, the first two stages of our pipeline have become regression tests. And now we've got unit tests in there as well. Moving on, we can get to the code. We can do SIL and PIL testing. Again, add that to the pipeline. And finally, demonstrate on vehicle that this works.
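For illustration, a minimal sketch of the kind of architecture test Jim describes might look like the unit test below; the model and class names are hypothetical, and the real pipeline stage would simply fail the build when the test fails.

classdef ArchitectureTest < matlab.unittest.TestCase
    methods (Test)
        function systemModelCompiles(testCase)
            mdl = 'VehicleSystemModel';   % hypothetical integration model of shell components
            load_system(mdl);
            testCase.addTeardown(@() close_system(mdl, 0));
            try
                % Compiling the diagram (without simulating) is enough to catch
                % broken buses, unconnected ports, and interface mismatches
                % introduced by any downstream component change.
                feval(mdl, [], [], [], 'compile');
                feval(mdl, [], [], [], 'term');
            catch compileError
                testCase.verifyFail(['System model failed to compile: ' compileError.message]);
            end
        end
    end
end

A pipeline stage could then run results = runtests('ArchitectureTest') and stop the build if any result is not passed.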
So in my opinion, the virtual demonstrations are not just acceptable for agile model based design, but really should be thought of as mandatory.
Thank you, Jim. I guess I like the premise of doing agile model based design, not model based design in an agile framework. I have two questions for you. First one: how does the continuous integration pipeline you describe impact collaboration, either within a team or across teams?
So a system model is a great way to collaborate across domains and across teams. It really provides a common language, so to speak. But there's really nothing more frustrating than updating your repository and getting an error in your simulation when, just a few minutes ago, you'd been able to run that model without a problem.
So a well-designed pipeline coupled properly with a repository such as Git ensures that any problem that's introduced by one user only impacts that user and doesn't impact anyone else. And so my experience has been that a team working individually on different components of the system is far more likely to have a working system model at the end of a sprint when there's an architecture test as part of a pipeline. That way, it forces you to fix any architectural problems you have before your model affects everyone else.
Thanks, Jim. So a follow-up question on that. You talked about later in the cycle, but how about early in the development cycle, when things are ill-defined? When you're working with models, how do you prevent breaking the pipeline?
So I think this is about proper integration of the tools. We just heard about that from Martin. Really, a build will break, but the pipeline won't, because that failure is going to be localized. So if I make a change to a model that breaks something in the architecture or in a system test, I'm forced to resolve that before anyone else sees the model that I've been working on. And so it isn't pulled into main development, it doesn't impact those other users. Again, a great example of failing early.
OK. Thanks, Jim. Thank you to you and all the other panelists for the presentation. We'll move on to the next segment, where we will ask some general questions. I'll ask you a question, and feel free to jump in. And if one of the panel members answers the question, feel free to add on to it, or you can give an opposing view, if you have one.
OK, my first question is, I've seen companies develop their virtual vehicle strategy alongside agile. As we all know, agile originated from outside the automotive industry, where simulation is generally not present. So the question is, how do you formalize virtual vehicle simulation as a part of your agile framework? Who wants to take this?
I can take this. So today, we are using the Vehicle Dynamics Blockset to realize closed-loop tests on some HIL systems, specifically on wet-bench HILs and dry HILs, to simulate leakage and test the basic monitoring and safety functions of our software.
Thanks, Martin. Anyone else who would like to add?
I can add on to that. So this kind of goes back to the MathWorks Automotive Conference 2020 presentation we gave. Really, the way we're doing this, we call it almost like a virtual vehicle build factory, but it starts with the bill of models, which is basically, at this point, just an Excel file that lists all the different parts of your model for your vehicle, very much like a bill of materials would. And we have these set up in GitHub so that anyone can pull them. And then there's our automation tool, built using MathWorks tools; we also use Vehicle Dynamics Blockset in addition to some other vehicle dynamics tools.
We run this through our FASST, the Ford Automated System Simulator Toolchain, and it basically takes this bill of models, parses it, brings in the network files, like our DBC files and things like that, and builds a framework of what that bill of models plus the network architecture, the CAN network architecture, is. So it comes out like a virtual breadboard, in a sense. And then we start plugging in all the different model components. Based on your bill of models, you select the fidelity that you want for each one. So you can select high fidelity or low fidelity; it just depends on what you're after.
But this all happens in just five to 10 minutes, where it used to take months to build one of these things from scratch. So now that we have this kind of automation tool that can run without a person-in-the-loop, we can hook this up to Jenkins pipelines and do 30 virtual vehicle builds overnight, and hook those up to some test cases, and have them automatically run test cases and things like that.
So these are the ways we've been using these tools in an agile manner to basically hook up continuous integration with our bill of models and we'll build 30, 60, 100 virtual vehicles overnight. Now, it doesn't solve all the problems because there's still a lot of things you need human-in-the-loop to do, like I was saying, exploratory testing. But it's one aspect where, once we get something down and we understand what test cases we want to run, it's very, very effective for taking menial simulation out of the human domain.
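A rough sketch of the bill-of-models idea follows; the file name, column names, and assembly details are illustrative assumptions, and Ford's FASST toolchain is of course far more complete.

% One spreadsheet row per component: Component, ModelName, Fidelity.
bom = readtable('BillOfModels.xlsx');

vehicleMdl = 'VirtualVehicle';
new_system(vehicleMdl);

for k = 1:height(bom)
    % Add one Model Reference block per entry in the bill of models.
    blk = add_block('simulink/Ports & Subsystems/Model', ...
        sprintf('%s/%s', vehicleMdl, bom.Component{k}));
    set_param(blk, 'ModelName', bom.ModelName{k});

    % Fidelity selection could reuse the variant-control idea from earlier,
    % with one control variable per component.
    assignin('base', [bom.Component{k} '_FIDELITY'], bom.Fidelity(k));
end

% Signal routing (driven by DBC/network files in the real toolchain) is omitted here.
save_system(vehicleMdl);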
All right. Thanks for that.
I could comment on this as well, actually, from a little different angle. Perhaps we don't have to formalize it. Consider a team or a department developing an electric motor. They can work with what we call SIL, software-in-the-loop, and you maintain and build up a plant model for that simulation and those SIL tests for years. This works perfectly. You don't have to formalize anything here; it just works perfectly. It's awesome.
But the battery department, they need another method, which is also specialized. So I believe a lot in this sort of agile way of letting different subsystems in a car develop their own methods. We need a certain freedom. And on a complete level, let's say that for AD functionality, you need a completely different model, perhaps using other aspects. I see MATLAB all over the place, sort of, but in very different ways.
So I sort of embrace the complexity of the system. And I would like to see this driven by engineers in many ways, in different parts of the company.
That's an interesting take, Jonn. Thank you for that.
All right, so I have another question. Again, I'll throw it out to all of you and let anyone pick the answer. So we mainly discussed application of agile methodologies within your respective companies. We all work with partners, suppliers, and OEMs. So have you had any experience implementing these across an OEM and supplier? Any do's and don'ts? What are the tips you have? Who wants to take this?
So maybe I start again. So to be honest, today we do not really exchange models with our customers. So we do not have to focus on the whole car. We focus on our brake system and our software-in-the-loop solution for that brake system. Yeah, so more and more customers want us to exchange models, models of different functionalities like traction control or the stability control or whatever.
And today, we do not have a solution for that. So we are investigating some tools, like the web observer, with Simulink simulations behind some graphical user interfaces, just to make it possible for the customers to play around with the models and get an idea of what they do in detail and all that stuff. But today, we do not exchange the models themselves. We only exchange the code.
OK.
I can offer a perspective as well. I don't typically do a lot of this in my role; I know there are experts other than me that do a lot of this within Ford. But my perspective on it is that I think we spend a lot of time trying to get what I would consider implementation models from our suppliers that are basically a replica of their code. And what ends up happening is these come as kind of a black-box S-function. And they're limited to a specific solver or a specific time step. And it goes against pretty much everything I said before this in terms of being flexible, because I don't know what's inside it, I can't really understand it, and it doesn't really swap; I can only put it in one simulation. And usually, when you drop those into a bigger system simulation, there's some kind of solver error or something like that.
My perspective is I'd rather have a whole bunch of, let's say, simpler behavioral models or what have you. And then when we go to hardware-in-the-loop testing, my implementation model is the software running on real hardware. And if I've got five or six different modules, it's never the case that all five or six hardware modules are running with working software at the same time. That doesn't happen until we say so.
So I will swap out the hardware for my simpler model, so I can test the modules that have partially working software. I use these hybrid methodologies to get to a point where we can test what's available in that software, because throughout the development cycle, the software is 60% there, 70% there. Having models that I can use to quickly fail fast, learn fast, adapt fast, that type of thing, allows me to be much more agile in terms of software testing, versus trying to put everything together and test the software on a HIL bench where nothing works at once. So that's the area I like to focus on.
OK. Thanks for that perspective. Any others, Jonn or Jim?
Yeah, I could add something. I've seen so many examples of tier ones that try to give us models, and I feel like usually those models don't work, because they are simplified in a manner that we don't understand. But it's very efficient when we try to build a model of the supplier's thing ourselves, the code they deliver or the device they deliver. If we build up that model in parallel with evolving our requirements to them, then we build up an understanding of their device. That works.
So we own that model of their device or code, or the combination of device and code. So it's a little different. It's a tweak on that. And you need time for that. You need to work with the supplier in an ecosystem for maybe a few years, actually. Then you have an awesome model of their device and their code working together with your code. So yeah, you can solve that. But it takes some time.
OK. Thank you. Thank you for that. So I have one general question and then there are several questions from the audience. We'll move on to that. My question to each of you is, what one thing did you have to adapt or modify to incorporate agile into your framework? Because we know agile comes from outside, but what one thing did you have to change?
I'll take that. I actually mentioned it in my talk already. But when we first rolled out model based design with agile, we didn't do that integration early. And we saw a lot of teams really struggling with that. And we saw a lot of teams doing a lot of rework. So making that change and saying the first thing we do is integrate really made a big difference in the ability to get value out of model based design.
I think I have an important comment there, also, about doing model based design in agile. You have to have small models. And I saw a comment here in the questions that if you have small models, and the whole control system is made of many small models, then you avoid the three-way merge problem. You can actually have individual developers working on each model, and you avoid this humongous model where everyone tries to commit things; that doesn't really work. You avoid that easily by keeping the models quite small. Every component is a model, one file. I think it's a very successful method.
That's a great tip, Jonn. Thank you for that.
So may I add just one sentence to Jonn's comments. So yeah, we really had a lot of trouble building up big, big models, and it's just not working. So today we are focusing on, let's say, software component modeling. And we try to release each software component and provide a Jenkins pipeline for each component. So you can think of a component at, let's say, the unit test level, or at the AUTOSAR software component level. And that works very well. So we can just run the pipeline for each software component, and the full model is built out of previously tested and, let's say, documented and released software components.
OK, excellent. All right, so we'll move on to the Q&A part from the audience. Nate, this is to you. What is the difference between your behavior and requirement models?
Yes, this is actually one of the most prevalent questions I get, because that's the toughest one there. And if I go back to the analogy to CAD modeling, your skeleton model is kind of your hard points model, where you just put where the interfaces are. Your behavioral model is where you start to add in bushing rates, things like that.
So what is a behavioral model going to give you in the software world? When you start putting in some of your behaviors, a lot of times we use a lot of state machines here, because state machines and state charts are really good at describing behaviors in the way we think about them mentally. So you're trying to mentally model the way you think each part of your system should work. And really, the goal of this behavioral modeling stage is to get a system that runs with all the different parts operating per their behaviors, and to get the emergent behaviors out of it, so you can test to see that the system you're designing actually works.
So to me, behavioral models are much more of a systems engineering activity. Requirement models are much more at, let's say, the component level. If I'm a steering engineer, I'm writing a requirement for steering software that I'm either going to write myself, give to a software engineer, or hand to my supplier. I'm building that requirement model to make sure that whatever spec I'm writing on paper actually works; my requirement model is basically checking my spec, so to speak.
Or if you're doing model-as-master and your requirement model is your master, you're designing something that is close enough that you can hand it to a software engineer and they don't really have any questions about it. So you're going to include a much higher level of detail. Very much like a mechanical part where you have a print: you could actually take that and give it to a supplier or a machinist, and they can go manufacture that part, because you put enough detail into it. So I hope that helps flesh out the difference between those two.
But honestly, in the real world, a lot of the models have blends of both. We have some behavioral aspects and some requirement aspects. It just depends on the model we have, versus what we're trying to do.
So if you get to 90% of the behavioral requirement model, you're happy about that, right? You don't want it to be 100%.
Yeah, and everything is always a trade off of time. Again, getting that just right model, I could spend a lot of time fleshing out the requirements side of a certain model, but given my current goal, maybe I can get away with the behavioral aspect at that point.
Yeah. Thank you, Nate. I just want to make sure that I let everyone know that we have lots of questions that have come in. Obviously, we won't have time for all of them, but what we are doing is taking the most often cited questions and then posing them to the panelists.
Next question. This is to any one of you. How do you get your organization to avoid the allure of open source and take a coherent model based design approach? So I think the premise there, probably, or the context, is that if you go in many different directions, teams can't work together. I think that's maybe the point I'm taking from that.
From my point of view, the only way you can achieve that is to provide a bulletproof tool chain that helps your developers achieve their goals and makes their daily work as easy as possible. That's the only way they will not look for free stuff and open source tools to enhance something. But that's only possible by involving the developers: you have to jump into discussions with them, go through the pipeline with them. How can I help you? Where can I help you? What can be improved?
Yeah, it's challenging, because you will end up implementing the "do my work" button someday. But the more you can do for your developers with good tooling, the more it will help to avoid this third-party stuff.
OK. Yeah. That's really good. So basically, you're saying make sure the barriers to using model based design are removed. And one other thing that I have seen, personally, is that if there is a strong strategy in the company and a vision, that also helps.
I want to add to that. Definitely one of the things that I always strived for at John Deere was to make it easy for the developers to do it right. You make it easy for them to make good decisions in the development cycle. And if you make it easy to overcome some of those barriers, both real and perceived, to model based design, and they start to have success, then that's going to build.
All right. I would like to agree there as well, actually, because I see a lot of open source tools in our CI chains. Let's say Zuul is used a lot now, and it's very, very efficient. But if you can create alignment in your different organizational parts there, and use open source tools, it can be fantastic.
All right. Excellent. Thank you. Martin, this is to you. The question is simply, what metrics have you developed?
I answered that in the chat already, I think. There's a quite brilliant paper provided by MathWorks. It's about Model Quality Objectives. And we just use some metrics that are mentioned there, like how many blocks in one view, and all that stuff. And of course, we use the Simulink Metrics Dashboard to show what's going on in our models. And we use the Model Testing Dashboard to analyze all that stuff about implemented requirements, test coverage, requirement coverage, and all these things that are visualized there.
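As a trivial illustration of one metric of that kind (the model name and threshold are assumptions, and the Metrics Dashboard and Model Testing Dashboard cover far more than this), a block-count check could be scripted like this:

mdl = 'BrakeComponent';    % hypothetical model
load_system(mdl);

% Count the blocks, including those under masks, as a simple size metric.
numBlocks = numel(find_system(mdl, 'LookUnderMasks', 'all', 'Type', 'block'));
fprintf('%s contains %d blocks\n', mdl, numBlocks);

if numBlocks > 500    % illustrative objective, not an official MQO threshold
    warning('%s exceeds the agreed block-count objective.', mdl);
end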
OK. All right, thank you. OK, next question. This is for any one of you. How have you made your component testing agile-ready? What kind of tools and strategies do you use? So basically it's about making these components agile-ready. I think one of the things, Jonn, you mentioned earlier was to make them small.
First, of course, what is the component? It could be a model. It could be a binary blob that you need to test. It could be an ECU or something, so maybe one has to define the question. But this is to a great extent about testing. It's super efficient with virtual testing at a limited level. Don't complicate it. Don't take the whole damn car. Limit the scope. Work over time. Create quality and fidelity in your testing, and then it will be crisp and awesome. So this is a lot about the framework that you create around the component.
OK. Excellent.
I want to add that I give credit to one of my colleagues, Kevin Martin, and his team. I haven't done a lot of work in this area, but they are doing a lot of work, really fantastic work, in this area at the component level. Similar to how we do automated vehicle builds, they're taking components and running them through Jenkins and making sure that they pass all these preset checks that they set up before they're allowed to be released into the bigger GitHub community for others to use.
So I don't know a lot about that, but I know that he's been doing that work, and it's something that's been very helpful for our organization, to have, we call them, these CRC checks, making sure that the components are ready to go. So credit to Kevin and his team for that.
OK.
Yeah, maybe one addition. So I have a slightly different perspective than Jonn and Nate. It's not the whole car; it's only the brake system. But even within the brake system, we try to set up the GitHub repositories in a way that we have testable units in there, and then combine these units into the full software.
So for example, if you have a monitor that's monitoring brake fluid leakage, or air in the brake fluid, or whatever, that's a software component. It may consist of a handful of models and model references, and it brings its own tests with it. And we try to build the full brake software out of these components and integrate over different levels up to the full system. Each level then has its own integration tests.
That's actually under construction. And it's a lot of work setting up all these Git repositories and taking care of all the interfaces that need to be exchanged, and of the maintainability of the whole thing. But yeah, we are working on it.
OK, good.
I'd like to add something to that as well. I think Martin has a really good point. And from the earlier conversation, it's very easy to come to the conclusion that we have components, and we have complete vehicles, and there's nothing in between. But a function of the brake system that Martin just talked about, and then the entire brake system with all of its functions, each of these is a level of integration that adds value to your testing and adds quality to the end result. And then that becomes a part of a higher level system, eventually up to the whole vehicle. So I think it's important to look at the layers that we can build up and the value that we can get out of those as we go along.
All right. That's an important point about looking at the unit level, the subsystem level, and then the whole system level. Thank you.
Sorry. We try to do this even for our physical models. At the beginning, if we develop a new generation of a brake system, we build up a physical model of the brake. And we define, for example, a valve, or an electric motor, or whatever pump. We do not have that yet, but in the future, we want to have, let's say, different models for each hardware component, so that, for example, a developer who is developing a leakage model for a valve can have a valve model with all the details that you can model using Simscape or the other MathWorks tools.
And, let's say, the person who just wants to execute a brake maneuver is not interested in all the details of this valve or of a motor. The motor should just turn; it doesn't need to get warm or whatever. So we want to have different abstraction layers for each hardware component, and the user shall be able to build up his or her physical model so that it fits the needs of his or her software component. I hope that makes sense.
I guess with that, I think we'll wrap up the Q&A. We're coming to the close here. So again, thank you so much for your participation. Also would like to thank all the panelists for sharing their valuable knowledge and experience with all of us. Now, I'll hand it off to Vincent to summarize the key takeaway points and close the session. Thank you everyone.
Thanks, Vinod. Getting myself back in here. And my colleague, Anna Maria, will post the slide. So just a reminder of a couple of things. One is you can contact us afterwards. You can most conveniently contact us through Vinod and myself if you have any questions, and we'll forward your question to the individual panelists if you have specific questions for them. We welcome additional insight and feedback from you.
So this brings us to our close today. It was a really interesting conversation, so I just want to acknowledge everyone's contribution. All the panelists come from different perspectives: from the car industry, from off-road, from OEMs and suppliers. And you won't necessarily see everything everybody says as 100% complementary, because in the tailoring process, every company has to consider what its starting point is and what it's trying to accomplish. So there may be some varying, even conflicting, advice. Hopefully that starts a conversation in your organization. And again, let us know if you have further questions.
I want to take a moment to summarize what I saw in the Q&A process. So a lot of great questions. And as you can see, this area is very vast. We had questions from the areas we're very comfortable with: modeling, simulation, code versus model, a lot of those questions. For example, the questions about behavior models, what they are, and how they differ from other forms of models. Those are great questions.
But we also took the conversation beyond just modeling and simulation, more so to how you set things up. That's something each company does today, but as we get into a new way of doing things, figuring out how to mesh the tools and process together is becoming an increasingly important topic. So we had great questions regarding: when you make the integration work, what are the metrics you use? How do you define the tool chain? What's in scope? What's out of scope?
And finally, I would also say that it's very interesting that we also touched on the idea of how you bring in the right culture. So there was a question about model based design and agile in the old environment, and when you encounter the mentality of open everything, how do you address that. How do you get the right balance between allowing a certain degree of freedom in how you do things, and also meeting the objective of having a consistent way of working?
So hopefully, today's session provided some answers, and some ideas, which are probably more important than answers. And hopefully, it starts a conversation, and we can go from there.
With that, I'd like to thank all the panelists, thank you, Vinod, for running the panel today, and to thank everyone who participated. All right, so with that, we'll close the panel. Again, thank you everyone. I'll talk to you later.