Transforming Automotive Engineering - The Model-based Software Factory - MATLAB & Simulink


      Overview

      The automotive industry is adopting software factory approaches with automated Model-Based Development platforms to address fast-changing customer needs. These methods tackle challenges like managing software variants, handling legacy software, upskilling the workforce, and inconsistent processes. Key elements include early error detection via virtualization, simulation, and test automation. Join this webinar to learn how leading companies accelerate development cycles, reduce design risks, save costs, and ensure compliance with regulations (e.g., UNECE R155) and standards such as ASPICE and ISO 26262.

      Highlights

      • Understand current trends for automation and virtualization
      • Leverage continuous integration and automation to speed up development
      • Use virtual validation with Virtual ECUs to reduce hardware dependencies
      • Implement processes to improve speed of software delivery and quality
      • Automate test reporting to ease compliance with regulations

      About the Presenters

      Dr. Tjorben Gross is Senior Team Lead for the Automotive Application Engineering Team at MathWorks Germany. He works with customers in the areas of functional safety, cyber security and integrating Model-Based Design with DevOps concepts, including CI or the cloud. Before joining MathWorks, he was involved in different development projects at Fraunhofer ITWM. He holds a Ph.D. in Mathematics from the TU Kaiserslautern.

      Stefan David is a manager in the Application Engineering department at MathWorks, specializing in safe and secure embedded systems and software engineering. His area of responsibility spans MBSE, Verification & Validation, Rapid Prototyping, and HIL simulation, including CI automation. Before he joined MathWorks in 2007, he spent several years as a freelance software developer and as a consultant in formal and static analysis methods. Stefan is an ISTQB-certified tester and holds a Dipl.-Ing. degree from the University of Cooperative Education in Mannheim, Germany, and a B.Sc. in Information Technology from the Open University in the UK.

      Recorded: 19 Nov 2024

      Hello, and thanks for joining us today to learn about the model-based software factory, which is a key ingredient in the current transformation of automotive engineering and, actually, in building software-defined vehicles. My name is Stefan David, and with me is Dr. Tjorben Gross. We are both from MathWorks Application Engineering, and we'll be your hosts today. This is what we have prepared for you: first, we'll have a brief look at the challenges to engineering organizations. Next, we'll give an overview of the benefits of model-based platforms, and then we'll look at possibilities to automate and scale software delivery.

      Since the topic is really broad, we've picked a high-level scope with some demos on important software factory practices and components. We've also prepared selected references so you can follow up with us about topics of interest. Let's start with the challenges. On the one hand, we see strong mechatronics engineering departments who have built vehicle functions over years, but who now also need to master localization, deploying tests to infrastructure and cloud environments, and refactoring legacy software, among other things.

      On the other hand, we see strong IT and software organizations that are confronted with implementing ASPICE, ISO 26262, or various regulatory requirements they did not need to deal with before. Many companies that we've talked to are not fully up to that change yet. A pattern that we often see is that organizations struggle to get quickly from the idea to a consistent implementation and to scale it across the organization. If they fail at building the right skills for bringing both worlds together, it will become difficult to keep up with the pace of the market.

      Over the next minutes, we will lay out how a model-based and simulation-based software factory is key to connecting both groups and to staying ahead of the competition. With that, Tjorben, can you please take over? I think it's time for some virtually moving parts now.

      Sounds great, Stefan. I'll be happy to do that. It is well recognized in the industry that current challenges can only be tackled successfully with collaboration and strong partnerships. One example of MathWorks partnerships is with AWS, Elektrobit, and Synopsys. Together, we've built a demonstrator that shows fast and high-quality software delivery. The use case in this collaboration is a vehicle controller update to enable a Sport Plus mode. We showed an effect chain where a Simulink model is used to model VCU and EMS software and then automatically generate application code.

      This code is integrated with Elektrobit basic software and Synopsys Silver to run on two virtual ECUs in the AWS cloud. Communication between the components is established through SOME/IP, and you can indeed see that the Sport Plus mode leads to a higher acceleration. This example demonstrates automation and virtual validation in a model-based software factory, enabled through strong collaboration. Stefan, can you please give an overview of a more general example setup for a software factory so that we can better understand where the model-based approach adds the most value?

      Absolutely. But before that, let's look at some proven benefits of model-based engineering and simulation, also for the software factory. Talking about value, here are several benefits that our customers are reporting: cost savings, performance, and quality gains. Just to pick one example from the list here, our customer G is reporting that they have reduced integration test time for their body domain controller by 80%. Integration testing is a typical task in the software factory. This shows that the benefits of model-based engineering practices are well aligned with software factory goals.

      Now let's look more closely at what a software factory is typically based on. Core components of any software factory are built, first, on Lean, Agile, and DevOps principles, with the goal to reduce the time between committing a change and production. Secondly, on process automation that covers key aspects of the development cycle, such as unit tests, static code analysis, or security testing, down to deployment on the target platform.

      The goal is to have consistent, repeatable quality with less manual work through reuse of existing components and quality gates, just like in traditional factories. While the software factory standardizes much of the development process, it is not monolithic. Here's a brief overview of typical characteristics and components that are important in the context of the software-defined vehicle. Typical components include requirements and software architecture, the software implementation, software integration and test automation, virtual validation, as we've seen in the demo by Tjorben, and compliance and lifecycle management.
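      The quality-gate idea above can be sketched in a few lines. This is a hypothetical illustration (the stage names and stub functions are invented for the example, not a MathWorks or CI-platform API): stages run in order, and a failing gate stops the line, just like a stopped station in a traditional factory.

```python
# Hypothetical sketch of a software-factory pipeline with quality gates.
# Each stage returns True (gate passed) or False (gate failed); the
# pipeline stops at the first failing gate.

def run_pipeline(stages):
    """Run stages in order; return the names of the stages that passed."""
    passed = []
    for name, stage in stages:
        if not stage():
            print(f"Quality gate failed at: {name}")
            break
        passed.append(name)
    return passed

# Stub stages standing in for real unit tests, static analysis,
# security scanning, and deployment steps.
stages = [
    ("unit_tests", lambda: True),
    ("static_analysis", lambda: True),
    ("security_scan", lambda: False),   # simulated failure
    ("deploy_to_target", lambda: True),
]

print(run_pipeline(stages))  # deploy_to_target never runs
```

A real factory would, of course, run each gate as a CI job with reports and artifacts, but the fail-fast ordering is the same.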

      Of course, that's just an example, and you can have different factory setups to segregate domains, regulations, or geographic regions. Also, software development is often organizationally not part of the actual software factory but is organized in the domain departments, or the modules are actually provided by suppliers for integration and testing. Let's look next at Agile development for high-integrity software along the ASPICE V-model. That is a challenging topic for many, and one of the biggest bottlenecks for speeding up the release process, as software changes need to be assessed for potential impacts. You also want to retest only affected components to reduce cloud costs.

      We will demonstrate later how you can run dependency and change analysis, which is particularly important to reduce the effort for re-homologation and certification, but also to reduce the size of the software image being flashed to the target. This V-model also shows the transformation that MathWorks has made over the past years as we've added significant capabilities for systems engineering and software architectures. This way, MathWorks adds value across the whole V-cycle by providing vertical traceability and horizontal validation in the digital thread.

      Also, our code generation and static C and C++ code analysis products have been certified for functional safety use cases since 2007.

      Along the whole development cycle, interfacing with tools from our technology partners or reusing existing components is key to fast software delivery. You probably have many components of systems and software already available that you would like to reuse for future projects. Often, proven-in-use components are reused rather than recreated. This is why models, FMUs, protected models, or external C code can be integrated into your models and reused downstream, for example for simulation-based verification. This would be an example where variant management comes in very handy.

      At the same time, different tasks in your software factory will require different tools. Standardized APIs and interfaces enable smooth collaboration between the disciplines, easy reuse of existing components, and integration with other tools. With a model based software factory, you can leverage these and many more interface standards to better integrate and improve your flow. Let's now take a look at the left side of the V, since requirements and architecture are indeed often part of the responsibility of a software factory.

      With model-based engineering, you can incorporate stakeholder needs and derive functional requirements, system requirements, down to hardware and software requirements, which are then used for creating the system architectures and software architectures. As in software development, an agile approach is also highly beneficial on the architecture level, and it can be easily implemented through early architecture simulation and automated analysis as a basis for decisions on what needs to be refined and updated.

      The power of automated analysis becomes evident in the context of functional safety. In ISO 26262, one important aspect on the architecture level is freedom from interference. Freedom from interference deals with potential issues that can arise from components with a lower ASIL sending signals to components with a higher ASIL. Looking at the logical architecture, you can see that the health monitoring system with ASIL C classification is connected to a battery charging system with ASIL B classification.

      By automatically generating a freedom-from-interference report for the system, you can easily identify the components for which countermeasures need to be applied. For the battery health monitoring system, you see on the lower right that ASIL violations have been detected and need to be addressed. This highlights the benefit of automated system analysis and reporting. Worth mentioning as well is that we often see RFLP as a common approach, meaning requirements, functional, logical, and physical architecture.
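      The core of such a freedom-from-interference check is small enough to sketch. This is a conceptual illustration only (the data structures and function are invented for the example, not the MathWorks report generator): every signal flowing from a lower-ASIL sender into a higher-ASIL receiver is flagged as needing countermeasures.

```python
# Conceptual freedom-from-interference check: flag signals flowing
# from a component with a lower ASIL into one with a higher ASIL.
# ISO 26262 orders integrity levels QM < A < B < C < D.

ASIL_ORDER = {"QM": 0, "A": 1, "B": 2, "C": 3, "D": 4}

def ffi_violations(components, connections):
    """components: name -> ASIL; connections: (sender, receiver) pairs."""
    violations = []
    for sender, receiver in connections:
        if ASIL_ORDER[components[sender]] < ASIL_ORDER[components[receiver]]:
            violations.append((sender, receiver))
    return violations

# The example from the talk: an ASIL B battery charging system sends
# into an ASIL C health monitoring system, so countermeasures are needed.
components = {"battery_charging": "B", "health_monitoring": "C"}
connections = [("battery_charging", "health_monitoring")]
print(ffi_violations(components, connections))
```

Running the check over a full logical architecture and rendering the result as a report is exactly the kind of analysis that benefits from automation.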

      These architectures are, of course, not completely independent from each other, and traceability can be established. I'll discuss later in more detail how you can benefit from traceability in your model-based software factory. I just mentioned simulation on the architecture level, and the idea is to build complete systems as simulation models so that you can easily integrate the software and test the whole integration. The complete system, of course, consists of different parts. For example, battery models including thermal behavior are currently of high interest in the industry.

      One of the great benefits of having integrated simulation models at different levels of fidelity is that different teams can reuse those parts and fidelity levels which are the most suitable for their testing, while it is ensured that all models and levels of fidelity are aligned. Simulating and testing your systems early and often enables early error detection and faster cycles for fixing the errors. This is one of the reasons why model-based engineering is more and more adopted in the industry.

      Once you have sophisticated simulation models available, they can be reused for different purposes. One example is hardware-in-the-loop or driver-in-the-loop testing, where you would integrate embedded hardware with models running on real-time simulation hardware, for example, on Speedgoat test systems. In the context of automated driving, scalable cloud-based simulation of realistic driving scenarios, including sensors, is the only way to gain a sophisticated level of confidence in the systems while reducing highly expensive road testing.

      One key aspect of model-based engineering is that software development, including early verification and testing, can already be done on the model level without even knowing the final production hardware. Once the production hardware becomes available, performance-optimized and safety-compliant code can be automatically generated for the particular target. After generating the code, you can see in the side-by-side view where components of your model appear in the automatically generated code, and which code relates to which components in the model.

      Of course, not only ANSI C code, as in my example, but also target-optimized code can be generated. Examples of supported hardware commonly used in the automotive industry are Infineon AURIX, Qualcomm Snapdragon, as well as FPGAs and NVIDIA GPUs. Traceability is not only relevant for better understanding your generated code. With model-based engineering, you can establish a digital thread from requirements, through the models, down to the code, and back to the requirements. Also, test and coverage information is linked back to the requirements, and reports can be generated automatically at higher and lower levels of granularity.

      This results in complete traceability across the whole V-cycle. This traceability forms a digital thread, which is of great value for creating updates in compliance with cybersecurity requirements. You can analyze and trace the complete impact of a change in your MATLAB project integrated with the version control system, and even automatically generate an SBOM, a software bill of materials. This showcases another possibility to aid compliance through automation in your software factory.
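      To make the SBOM idea concrete, here is a minimal, hypothetical sketch of deriving a machine-readable bill of materials from a list of project components. Real SBOMs follow formats such as SPDX or CycloneDX and are produced by dedicated tooling; the component names and fields below are invented for illustration.

```python
# Minimal illustrative SBOM generator: each entry records a component
# name, its version, and a content hash so a change in any artifact is
# detectable. Not a real SPDX/CycloneDX implementation.

import hashlib
import json

def make_sbom(components):
    """components: list of (name, version, source_bytes) tuples."""
    return [
        {
            "name": name,
            "version": version,
            "sha256": hashlib.sha256(data).hexdigest(),
        }
        for name, version, data in components
    ]

# Hypothetical component standing in for generated application code.
sbom = make_sbom([("vcu_controller", "2.1.0", b"...generated code...")])
print(json.dumps(sbom, indent=2))
```

In a factory setting, this generation step would run automatically in the pipeline so every release ships with an up-to-date bill of materials.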

      Not only for cybersecurity, but more generally for retesting and recertification, it is highly beneficial to know which parts of the system are impacted by your change. It can help to avoid a full recertification when hotfixes are made. When modifying a file, you can easily identify the impacted files. Within these files, you can even see the modified signals and the parts of the system which must be retested or are not impacted by the change. This knowledge of the impact of changes also speeds up automation through incremental verification of changes. This is where the Process Advisor can be very helpful.

      After implementing a change, you as a developer would run a pre-qualification check in the Process Advisor locally, with all verification and validation steps that can be automated on the developer machine. In case issues are found, you can leverage the digital thread to trace them back to the root cause and fix them. Wouldn't it be great to automatically run only those verification and validation steps which are impacted by the change, and not the others? That is exactly what the incremental analysis of the Process Advisor does, and it can save you a significant amount of time.
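      The impact analysis behind incremental verification can be sketched as reachability over a dependency graph. This is a conceptual illustration (the graph and component names are invented; it is not the Process Advisor's implementation): a component is impacted if it depends, directly or transitively, on something that changed, and only impacted components need re-verification.

```python
# Conceptual change-impact analysis: given "component -> things it
# depends on", compute which components a set of changed files impacts.

def impacted(deps, changed):
    """Return all components whose dependency closure touches `changed`."""
    result = set(changed)
    grew = True
    while grew:  # iterate to a fixed point over transitive dependencies
        grew = False
        for comp, uses in deps.items():
            if comp not in result and result & set(uses):
                result.add(comp)
                grew = True
    return result

# Hypothetical project structure.
deps = {
    "controller": ["sensor_lib"],
    "logger": [],
    "integration_test": ["controller", "logger"],
}
print(sorted(impacted(deps, {"sensor_lib"})))
# "logger" is untouched by the change, so its verification can be skipped
```

Skipping the untouched components is where the time and cloud-cost savings come from.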

      As soon as you have ensured locally with an incremental analysis that all issues are resolved, you are ready to commit and push your changes so that continuous integration will take over. As soon as the changes are pushed to the remote repository, the CI pipeline will automatically be generated based on the test process defined in MATLAB, and the jobs will run. When the pipeline finishes, reports will help all stakeholders understand the status of the project.

      In summary, model-based engineering together with a digital thread and the Process Advisor enables you to automatically detect all artifacts, define the tasks for your CI pipeline based on those artifacts, and determine the sequence of these tasks. You can then build and test incrementally on the local developer machine to identify and fix issues early. Finally, the CI pipeline will be generated automatically on the CI platform. When looking at CI, scalability in the cloud is key for your model-based software factory. We see scaling virtual testing and simulation as relevant for both interactive and automated use.
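      Sequencing pipeline tasks from their artifact dependencies, as described above, amounts to a topological sort. Here is a sketch under that assumption (the task names are hypothetical; the actual generation in the webinar is done by the Process Advisor):

```python
# Illustrative derivation of a CI task order from task dependencies
# using a topological sort (Python 3.9+ standard library).

from graphlib import TopologicalSorter

# task -> set of tasks that must run before it (hypothetical names)
tasks = {
    "generate_code": {"check_model"},
    "static_analysis": {"generate_code"},
    "unit_tests": {"generate_code"},
    "package_image": {"static_analysis", "unit_tests"},
}

order = list(TopologicalSorter(tasks).static_order())
print(order)  # check_model runs first, package_image last
```

A CI platform can then map each task in this order to a pipeline job, running independent tasks (here, static analysis and unit tests) in parallel.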

      Automation use cases like continuous integration you would perhaps want to scale in the cloud or on premises, leveraging containerization or scalable virtual machine sets. Axotech recently set up such a scalable CI infrastructure on AWS with different CI platforms. Speaking of different CI platforms, it's important to note that model-based engineering tools can be well integrated with any CI platform, be it in the cloud or on premises.

      Additionally, the tools are platform agnostic, so that you, as a cloud architect or software developer, are faced with many options, which is indeed an area we often work on with customers. On the interactive side, you may either want to use a cloud-based virtual machine for remote desktop, using the reference architectures that we provide, or maybe MATLAB in a web browser. Please get in touch with us, and we'll work with you on the best configuration, regardless of whether you're using GitHub, Jenkins, Azure, AWS, or other services.

      A great example of interactive and automated use of cloud resources is the demonstrator which I've shown in the beginning, developed in close collaboration with AWS, Elektrobit, and Synopsys. We leveraged cloud-based virtual ECU testing in a containerized environment on EC2 instances with EB corbos Linux and Synopsys Silver. Also, we scaled simulations in the AWS cloud and set up continuous integration. By implementing these capabilities, software delivery becomes much faster, and validation of requirements can happen earlier.

      Now, what are we offering as support when it comes to the change process of getting from the concept to the actual software factory implementation? This is tough, particularly if you have a lot of legacy and redundant workflows, and given the many options at hand. We help you with benchmarking by assessing the current state and identifying priorities based on your objectives. Based on the outcome, we provide input for architecture refactoring, establishing standards compliance, CI and cloud integration, or enabling frequent toolchain or model upgrades.

      Maybe even more importantly, we help with realizing quick wins, which is crucial to build trust in the transformation, as well as for onboarding and upskilling the engineers. Now, of course, the question is: what do companies say that have already partnered with us on their transformation? We have selected a few reference cases from ZF, JTEKT, Continental, Rivian, and Zeekr that contain a lot of insights for building or improving your own software factory.

      Rivian, for example, shares how they've eliminated simulation request bottlenecks, reduced redundancy in simulation tasks, and increased simulation reuse. They have also maximized throughput by massively running parallel simulations on an AWS cluster, and across multiple cores of a local workstation, depending on the use case. I personally would recommend starting with the article from Zeekr, as their case aligns well with all of the topics that we have covered today. They have also implemented new capabilities for modeling service-oriented architectures and generating C++ code, which is a key strategy for enabling rapid iterations and implementing alternatives to Classic AUTOSAR.
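      The fan-out across cores described above can be sketched with a worker pool. This is a toy illustration only (the `simulate` stub is invented; real scenario simulations would run in separate processes or on a cluster, not threads):

```python
# Minimal sketch of distributing simulation scenarios across a worker
# pool. The simulation itself is a stub; CPU-bound simulations would
# use a ProcessPoolExecutor or cluster scheduler instead of threads.

from concurrent.futures import ThreadPoolExecutor

def simulate(scenario_id):
    """Stub standing in for one driving-scenario simulation run."""
    return scenario_id, scenario_id * 2  # pretend result

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(simulate, range(8)))

print(results[3])  # prints 6
```

The point is that scenarios are independent, so throughput scales roughly with the number of workers, whether those are local cores or cloud instances.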

      Real-world testing is expensive, and this is how Zeekr has achieved faster software release cycles: Zeekr reports that they can now conduct 90% of automated driving algorithm tests in simulation, 9% on test tracks, and only 1% through real-world road testing. Please check online for more information. To learn more about all the practices that we've covered today, we've compiled a set of deep-dive areas that you shouldn't miss. My favorites are the CI pipeline and cloud reference implementations that you can simply access and use, and the very complete video about software architectures and virtual integration.

      I recommend starting with that one, just because you'll have a hard time parallelizing and scaling without the appropriate architecture in place. Let's do a quick recap now. Our objective for today was to give you a high-level idea of typical concepts and software factory building blocks. We looked at the main challenges of adapting to changing requirements in traditional mechatronics organizations and developing automotive-grade software in compliance with standards and regulations. We then showed examples of how model-based methods can be applied seamlessly from the requirements and architecture phases down to virtually releasing software in simulation.

      With this, we hope we found a good mix of topics for all of you. Don't forget that all this is state of the art. Reach out and team up with us to make your software factory transformation smoother and faster.
