Why Models Are Essential to Digital Engineering - MATLAB & Simulink

      Why Models Are Essential to Digital Engineering

      Digital engineering is a trending industry buzzword. It's something that organizations strive to embrace and tool vendors claim to implement. But what is the practical reality behind the buzz? What are some of the essential aspects of an engineering ecosystem that actually provide the value promised? In this talk, Brian Douglas of Control Systems Lectures and MATLAB® Tech Talks, and Alan Moore, one of the original authors of SysML and co-author of “A Practical Guide to SysML,” discuss exactly these questions and show how models are a central and essential element of digital engineering.

      Published: 19 May 2022

      All right, welcome, everyone. My name is Brian Douglas.

      And I'm Alan Moore. Brian is a control systems engineer at Engineering Media and one of the content creators for the MATLAB Tech Talks series.

      And Alan is an architecture modeling specialist here at MathWorks and has extensive experience in digital engineering. He served as the language architect during the development of SysML, and then, with Sandy Friedenthal and Rick Steiner, authored probably the best known book on the language. At MathWorks, Alan has been central to defining the direction for the new System Composer product that embodies systems engineering workflows in the MathWorks toolchain.

      We're happy you all decided to tune into this presentation because, as you're probably aware, we're going to talk about digital engineering.

      [INTERPOSING VOICES]

      Wait, wait, don't go just yet. We promise this is going to be more interesting than it sounds.

      Yes, much. Rather than attempt to grapple with what exactly this term may mean, I'm going to focus on two digital engineering principles, enabling automation and maintaining an authoritative source of truth. While these two pillars are generally useful, they're especially applicable to continuous engineering methods like DevOps.

      And I'm going to talk about why artifacts I call semantic blueprints are essential to achieving these two pillars in an efficient and scalable way. This might be obvious since I helped create System Composer, but I also want to show why I think System Composer and the Simulink platform are the right tools to author semantic blueprints for systems and software.

      OK, cool. That sounds great to me, especially since I don't know much about digital engineering. And so along with everyone watching, I'm just looking forward to learning about it and whatever semantic blueprints are. So why don't you take it away, Alan?

      Thanks, Brian. To understand the benefits of these two pillars, let me start by describing DevOps and its embodiment, the infinity loop. It's called DevOps because the infinity loop accommodates both the development and operation of a system as complementary activities in one continuous process. In DevOps, there is knowledge gained through operating the system, and that knowledge, combined with any new stakeholder needs, produces feedback that is used to update the blueprint.

      DevOps is usually paired with agile development practices where the blueprint is always evolving. And this continuous evolution of the design is a challenge for document-driven engineering approaches. Digital engineering, with its focus on automation and an authoritative source of truth, fits better with this philosophy.

      OK, so you're claiming that in order to adopt this sort of agile, continuous development approach, we need to get away from the inefficiencies of documents and transition to semantic blueprints, which you say are more efficient because they allow for automation. All right, I'm hooked. What exactly is a semantic blueprint?

      I think it's reasonable to assert that the primary goal of an engineering ecosystem is to develop, and, more importantly, maintain a blueprint for a system. By blueprint I don't mean literally a stack of blue paper printed with white ink, pored over by draftsmen. I prefer the term to other words like specification because its definition is literally an artifact containing all of the information that is needed to build or make the system. And if that blueprint is machine interpretable, we can take advantage of automation. And the key to machine interpretation is semantics, hence the term semantic blueprint.

      So a blueprint is anything that describes how to make the system, whether it's something like a requirements document, a specification, or even a Simulink model. And a subset of blueprints contains semantics, which allow a machine to automatically do something useful with the blueprint, like check that constraints are met.

      I guess the term semantic blueprint is new to me, but we have lots of engineering tools that do a lot of this interpretation already. And more than just interpreting the blueprint, some tools can even construct the physical system from the description. For example, we print mechanical parts directly from a CAD model.

      You're right that as engineers we use a number of engineering tools together on a project. These tools hold detailed blueprints, whose semantics support implementation. For future reference I'll call them implementable blueprints. However, today each of these tools is typically its own silo and they often focus heavily on implementation.

      So if we look at the consolidated top-down view of the entire system design, there are gaps between each of these silos. Often this gap is plugged with documents, like interface control documents and technical budgets. And with documents, ensuring consistency and accuracy across the system design is completely dependent on human interpretation. In an agile, change-driven process, this means the demand for human interpretation is going to be high if we want the system design to remain consistent and correct.

      OK, I think I've got it. The issue is that having a bunch of different blueprints, like CAD drawings and software and electrical schematics, that are connected only through documents is going to be less efficient than having a single blueprint that is able to bridge the different engineering domains and use semantics to take advantage of automation.

      Yes, you're right. And in order to have a blueprint that can connect all of these different domains, it needs to be able to describe how the system as a whole is organized. This organization was one of the main goals when developing the SysML language.

      The fundamental concept underlying these languages is the component-port-connector paradigm, in which components define their interfaces as sets of ports. Components decompose into further components, whose ports are then connected to enable interaction. This has proved a very scalable way of organizing system designs as sets of pluggable components. A system blueprint authored this way is the basis for better integration of the domain-specific tools because it can represent those domains and their interactions at the right level of abstraction and in enough detail to support system design.
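
      As a concrete illustration of the component-port-connector idea, here is a minimal sketch using System Composer's MATLAB API; the model, component, and port names are hypothetical, and exact function signatures may vary between releases.

          % Create a new architecture model (hypothetical name).
          model = systemcomposer.createModel("QuadcopterArch");
          arch  = model.Architecture;

          % Add two components to the top-level architecture.
          controller = addComponent(arch, "FlightController");
          actuators  = addComponent(arch, "Actuators");

          % Define each component's interface as a set of ports.
          cmdOut = addPort(controller.Architecture, "MotorCmd", "out");
          cmdIn  = addPort(actuators.Architecture, "MotorCmd", "in");

          % Connect the ports so the components can interact.
          connect(cmdOut, cmdIn);
          save(model);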

      OK, I can see how a model in SysML can organize the entire description of the system. But one of your key tenets of digital engineering is the desire for automation, and I don't see how that is achieved with just fancier organization. I mean, it kind of feels like this is just a slightly modified version of a document.

      This is where the Simulink platform comes in. Simulink is an environment for authoring various kinds of semantic blueprints. The type of blueprint that we're currently talking about is for high level system design. And this is what we had in mind when developing System Composer.

      With System Composer, we've coupled SysML with MBD concepts, like physical and continuous semantics, so that engineers can build a detailed semantic blueprint of their system and leverage automation. The Simulink platform is already well known for models of system dynamics, but recently I've been working on better support for models with static properties, like mass, cost, power, et cetera. Engineers can add these properties to their blueprints, and System Composer can then build a model complete with MATLAB descriptions of the relationships between the properties. This model is then presented in a tool that allows an engineer to perform analysis.
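
      As a rough sketch of what such a property analysis might look like, the snippet below sums a mass property over the top-level components of an architecture; the model name and the profile, stereotype, and property names are hypothetical, and the exact API may differ by release.

          % Load an existing architecture model (hypothetical name).
          model = systemcomposer.loadModel("QuadcopterArch");
          comps = model.Architecture.Components;

          % Roll up a mass property assumed to be defined by a hypothetical
          % profile "SystemProfile" with stereotype "PartInfo" and property "mass".
          totalMass = 0;
          for k = 1:numel(comps)
              value = getProperty(comps(k), "SystemProfile.PartInfo.mass");
              totalMass = totalMass + str2double(value);  % property value returned as text
          end
          fprintf("Total mass: %.2f kg\n", totalMass);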

      So with System Composer and Simulink, I believe we are well placed, if not uniquely placed, to plug the system design gap. However, the features of System Composer also enable a second kind of semantic blueprint: implementable blueprints for software. System Composer is a much better fit with embedded software standards like AUTOSAR, which share the component-port-connector pattern, than traditional approaches, particularly when scaling to larger software compositions. The internal detail of the components can be authored graphically using Stateflow and Simulink, or textually using MATLAB or traditional programming languages. And because the higher level system models use the same modeling concepts, they can be used to exercise the software blueprint.

      OK, I think I'm following so far. Since System Composer couples the organizational benefits of SysML with semantics, it produces an artifact of the entire system from which we can take advantage of automation. And by automation, you mean anything that replaces human effort with machine effort, whether it's automatically checking for logical errors or interface consistency, or automatically constructing a model, or even the physical system. Right?

      But we're only eight minutes into this chat, so I'm guessing you have more to say.

      Yes, because remember, automation is just half of this story. In a typical system development, there are lots of artifacts floating around. There are documents, models, data files, wikis, and all sorts of places where the design is captured, some of which overlap, disagree, or are out of date. So how do we know which ones to rely on? This brings us to the second pillar: an authoritative source of truth.

      You're a control systems guy, right? So as an example, you could design an attitude control system and describe it with a block diagram in a document, like a PDF file or even Visio. That file could be used as a description for the software team to manually write the embedded code, which can then be automatically deployed to the product. At this point, I'll ask you, which set of information is the blueprint-- the code or the document?

      I suppose both are the blueprint since they both describe the system. It's just that the code, I guess, is a truer representation since that version is closer to what is actually implemented on the vehicle.

      Right. But if you wanted to make changes to the system in the next iteration, where do you start? At the code or at the block diagram?

      I guess I see where you're going with this. I would make changes to the block diagram since that's what I understand as my design.

      But you just told me that wasn't the true representation. How do you know the software team didn't have to deviate from your diagram in order to develop the code? You also might change your diagram in ways that are incompatible with the code. This is a classic example where the block diagram is a better medium for doing design, but the code is the more dependable source of truth.

      OK, let me write down some things to consider so I don't forget them. The first is that we want the design to be the implementable blueprint itself, without several human-in-the-loop steps in between. And I suppose if I just used Simulink to create the block diagram, that would take care of this, since it is the source for the implementation when I auto-generate code from it. So I can see how this is beneficial.
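
      For instance, a minimal way to exercise that idea is to generate code directly from the Simulink blueprint; the model name here is hypothetical, and this assumes the model is configured for code generation with Simulink Coder or Embedded Coder installed.

          % Load the controller blueprint (hypothetical model name) and generate
          % code from it, so the implementation is derived directly from the
          % design rather than rewritten by hand.
          load_system("AttitudeController");
          slbuild("AttitudeController");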

      Having an authoritative source of truth is particularly important when using automation. The machines doing the automation need to be provided with true information if you want them to implement systems correctly. It's not a good idea to lie to machines. At best, they might not notice and produce the wrong system. At worst, they might notice, get upset, rise up, take over the world, and send back killer robots from the future.

      Yeah, I saw that documentary. All right, an authoritative source of truth makes sense in this scenario. I would want to have the place where I maintain and modify my design also be the truest representation of what is implemented. I mean, it's definitely more efficient than having to maintain that truth in multiple places.

      But another problem I come across is that my control system blueprint isn't fully separated from the rest of the system design. It often partially overlaps with other blueprints that are developed for different purposes. I mean, for example, the control system would rely on, say an aerodynamics model to provide information regarding aerodynamic forces and torques at different operating conditions.

      And I suppose this produces a similar problem that if it's not an automated process, it takes human effort to make sure that these two models stay in sync. But beyond that, don't I now have the source of truth for aerodynamics in two places?

      Well, this is a perfect example of where you have two valid sources of truth that overlap in some areas. So which is the source of truth for the overlap? The answer boils down to authority. A single person or group would have the authority to make changes to the shared information, and those changes should be distributed to the impacted blueprints.

      In this case, the aerodynamics team would have authority to change the aerodynamics blueprint. That information would then automatically propagate to the control system blueprint. At this point, both blueprints are valid since the source of the overlap has the correct authority.

      OK, right, that makes sense. Let me write down another thing to consider. When two blueprints overlap, it's important to know which one is the authority. So of course, here the aerodynamics team would have authority over aerodynamic changes, and the control team could then automatically pull in that information, but not be able to change it, thus keeping both blueprints in sync.
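
      One way to realize this kind of one-way authority on the Simulink platform is a shared data dictionary that the aerodynamics team owns and the controls model references; the file and entry names below are hypothetical, and this is only one possible mechanism, sketched roughly.

          % Aerodynamics team: author the shared data in a dictionary they own.
          aeroDict   = Simulink.data.dictionary.create("aero_data.sldd");
          designData = getSection(aeroDict, "Design Data");
          addEntry(designData, "CdTable", [0.30 0.32 0.35]);  % hypothetical drag data
          saveChanges(aeroDict);

          % Controls team: attach the dictionary to their model so the data is
          % pulled in automatically but edited only at its source.
          load_system("AttitudeController");
          set_param("AttitudeController", "DataDictionary", "aero_data.sldd");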

      But maybe there are less concrete examples of authority. I mean, for example, doesn't all of the authority for any change eventually just bubble up to the stakeholders in the high level requirements? So in this process, how does the person with authority to make changes in a low level blueprint authored in Simulink or a CFD program ensure that it is still meeting the high level stakeholder requirements?

      To answer that, let's take a typical example workflow using our Requirements Toolbox product. This isn't intended to be a Requirements Toolbox tutorial. I just want to show you how high level authority can be transferred down to lower level components.

      A stakeholder typically authors a system requirement in natural language. In this example, the requirement specifies when a heat pump should be activated. The controls engineer can then link the requirement to the blueprint element that satisfies it, in this case, a Stateflow chart.

      Ideally, the requirement should also be related to a constraint on the blueprint that allows the claimed satisfaction to be assessed. This passes authority from the stakeholder down to the constraint on the blueprint, because it can be asserted that if the system passes the evaluation of the constraint, then the requirement is met. The goal when authoring a constraint is that it should be falsifiable. That is, it should be possible to demonstrate circumstances in which it doesn't hold, if any such circumstances exist.

      In this case, the stakeholder can author the constraint using a temporal assessment. This constraint uses semantics consistent with the rest of the blueprint, so the truth it is expressing is unambiguous.

      If the falsification is automated, then when there is a change to the blueprint the ecosystem will tell you whether the constraints are still met. Using the links from requirements to blueprints, Requirements Toolbox can interpret data from Simulink Test Manager, which is actually evaluating the constraints, to show stakeholders how well their requirements are being met.
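
      A rough sketch of that linking step with the Requirements Toolbox API might look like the following; the requirement text, model name, and chart path are hypothetical, and the exact argument types accepted by slreq.createLink may differ, so treat this as an illustration rather than a recipe.

          % Create a requirement set and add a stakeholder requirement.
          reqSet = slreq.new("HeatPumpReqs");
          req = add(reqSet, "Id", "R1", ...
              "Summary", "Activate the heat pump when room temperature drops below the setpoint");

          % Link the requirement to the blueprint element that satisfies it,
          % here a hypothetical Stateflow chart in the controller model.
          load_system("HeatPumpController");
          chartHandle = getSimulinkBlockHandle("HeatPumpController/Supervisory Logic");
          slreq.createLink(chartHandle, req);  % design element satisfies the requirement

          save(reqSet);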

      In Simulink today, engineers can specify a range of constraints in their blueprints, like the temporal assessment I just showed, and property verification. And we're always looking to add more. Enabling automated evaluations to be performed using them is always central to their design, whether that be through proof-based techniques, simulation, or other forms of analysis.

      I've been working recently on the addition of sequence diagrams to System Composer based on this principle. A stakeholder can describe, say an operational scenario as a sequence of messages. We can then compare the results of simulation to the scenario to check that it conforms.

      OK, this all sounds good, but the types of constraints that can be evaluated automatically have objective values. But what about subjective evaluations that can't be automated, like understanding ride quality, or the subjectiveness of how a pilot feels about the vehicle handling?

      Well, the answer is kind of the same, except that many of these evaluations require a user to interact with the system and offer a subjective evaluation of the constraint. In this case, you can't get away from a person being part of the process. The user must be able to perform representative interactions with the system, which means that the human computer interface must provide a suitable visualization. We've heard of customers building full mock-ups of the front end of their system so they can perform these evaluations. A virtual car dashboard or cockpit if you will.

      However, visualization is irrelevant if the underlying behavior of the system is inaccurate, which takes us back to the importance of building models automatically from the authoritative source of truth, the system blueprint. As an aside, models are a powerful way of realizing a digital twin, which is a core concept in DevOps. A lot of effort has gone into reverse engineering digital twins of existing systems. But if digital engineering approaches are used during development, you already have the basis for an accurate digital twin.

      That is definitely a good byproduct of this method. My experience has been that digital twins were an afterthought, or at the very least developed solely for that purpose. It makes sense that it would be preferred to build the digital twin directly from the truth, the blueprint, which has been maintained from initial concept through development, deployment, and operations.

      So, back to using constraints to express stakeholder requirements: does that mean that a requirements document is no longer needed? We just allow developers to maintain the blueprint as long as those automatic constraints are met?

      A requirements document is normally a contract between stakeholders, typically the ones who are paying, and developers. And some kind of contract is essential to the development process. However, this is similar to the control system design example from earlier. We have a document that is used to capture part of the design, which then has to be interpreted by people into another blueprint.

      So what is the status of any constraints associated with the requirement? Which source is the truth? Apart from this potential ambiguity, maintaining both sets of information and the links between them is overhead.

      Requirements Toolbox goes a long way towards reducing this maintenance overhead today. However, what is really needed is for authority to be maintained in the blueprints themselves, rather than via links to other artifacts, so that they become the authoritative source of truth. Once we attain this goal, we will see natural language as just an early stage expression of a requirement, which is subsequently refined into something with semantics.

      Oh, let me write this down. Maintain authority in the blueprint, not via links to documents. All right, please continue.

      Let me be provocative here and say that an inevitable consequence of DevOps and agile approaches is that the only development artifacts certain to retain authority across changes are the implementable blueprints, including any constraints that they contain. The retention of anything else needs to be justified. So you can see the value of using an environment like System Composer and Simulink to author the implementable blueprints for software: the language and supporting tools aid design, and the resulting blueprints survive the chop. But yes, eventually the requirements document will have to go.

      Yeah, I've worked on some projects that have been around since the '60s, and I'll tell you that more than once I've had to look at a 50-year-old document and wonder whether it was still valid.

      OK, this raises another question. Other than saving the one authoritative source of truth, which is the semantic blueprint, what, if anything, can be retained long term? As an example, in control design we might develop a linear model that represents the essential dynamics of the system at a particular operating point and then use that for linear controller design.

      We then have a nonlinear, high fidelity model of the system and environment for analysis and verification. Both of these control system models are often purpose-built for the control design effort, and neither is the implementable blueprint. Are you saying dump both of these?

      Well, how fond are you of them?

      They're like children to me.

      Better be careful in how I answer then. But seriously, this is a good example where you have candidate sources of truth that overlap with each other and with the implementation. The key to a blueprint's continued use is, well, how useful it is. In this case, as I said earlier, from a design perspective, these system blueprints are probably the most useful source of truth you have.

      Ideally, some or all parts of a system model would be derived from the implementable blueprints. That way, you could just recreate them each time you wanted to investigate some phenomenon. Of course, like your beloved offspring, this isn't always possible. And so some amount of human effort will be needed to make sure they still represent the truth in some useful way.

      Having the semantic blueprint in System Composer can help a lot though. For example, we can describe the two models as two distinct variants of a common system model with the same external interface. We can then ensure consistent behavior using a common set of constraints. And if we're using a DevOps ecosystem, the operational system can provide measurements to help validate the behavior of development models.
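
      A minimal sketch of that variant arrangement in System Composer might look like the following; the model, component, and choice names are hypothetical, and the exact API may vary by release.

          % Load the architecture model and add a variant component representing
          % the plant with two interchangeable fidelities behind one interface.
          model = systemcomposer.loadModel("QuadcopterArch");
          arch  = model.Architecture;

          plant   = addVariantComponent(arch, "PlantModel");
          choices = addChoice(plant, ["LinearAtTrim", "NonlinearHighFidelity"]);

          % Both choices share the variant component's external interface, so the
          % same constraints and tests can exercise either one.
          setActiveChoice(plant, choices(1));
          save(model);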

      OK, let me write down this last thing to consider. If we have to maintain different variants of the truth, we need to assert that they are equivalent, and preferably automatically by checking them against the same set of constraints. I think I followed your rationale up to this point. And honestly, this all sounds great-- in theory. We have a semantic blueprint that encompasses the entire system design.

      And since the blueprint has semantics, we can take advantage of automation to ensure constraints are met and to construct models and physical systems. And since the blueprint is the authoritative source of truth, we can be confident that any product that is created from it represents the truth. I mean, I like the idea. But what makes you confident that this is actually achievable in practice?

      I've lost count of the number of languages and acronyms that I've contributed to and used. Of all these, SysML is the closest I've found to a complete system engineering language. With System Composer, I think we have the right blend of SysML and MBD language constructs to offer semantic blueprints for both systems and software.

      I've touched on just some of the automation that these semantic blueprints enable, and also the steps we're taking to capture and support authoritative source of truth. What I haven't had the time to mention is help for creativity, such as syntactic and semantic checking, style guides, et cetera, and our ability to scale by moving automation to the cloud. Taken as a whole, these features allow digital engineering approaches to be applied using our tools today for both systems and software design. And we're continually extending our support. I get really excited during release planning when I see the new digital engineering features planned across our platform.

      I'm looking forward to the new features, too. I'm genuinely impressed with what has changed and improved every time there's a release. So what is coming? Can you tell me a bit more about what we're going to see from MathWorks on the digital engineering front?

      Well, we'll carry on taking incremental steps. We will continue expanding the scope of the semantic blueprints we have. And there's always room to improve existing automation and add new automation based on those blueprints. And then there's always scale, which leads to the cloud. I think cloud technologies and the shift towards online workflows offer the possibility of an enormous expansion across the industry in both the scope and the reach of digital engineering.

      I'm particularly interested in the increased emphasis on cloud services, rather than file interchange, to enable multi-tool workflows. NASA's OpenMBEE ecosystem project is a good example of what's possible. Using a central SysML-based repository, it demonstrated the feasibility of multiple tools collaborating using common services.

      I've been doing this for 40 years, so maybe I should be pessimistic, but I'm not. Proofs of concept like OpenMBEE leave me hopeful that as an industry we can build scalable digital engineering ecosystems.

      But here's the thing. Building models from blueprints is at the heart of it all. And that's why I'm confident that we at MathWorks have a major role to play.

      OK, well, I'm looking forward to the continued progress towards this goal. It kind of makes me wish I was still doing actual engineering on a real project so that I could try it out and hopefully progress through a project with a lot less rework and overhead.

      All right, Alan, I think we're out of time. Thanks so much for taking the time to explain the benefits of models and digital engineering. And thanks, everyone, for sticking around until the end. I hope you've learned as much as I have. Enjoy the rest of the Expo.