Agile Behavior-Driven and Test-Driven Development with Model-Based Design - MATLAB & Simulink

    Model-Based Design and the agile practices of behavior-driven development and test-driven development play an important role in modern, software-intensive, large-scale development projects.

    Published: 7 Aug 2023

Now, welcome to the next talk, about agile Behavior-Driven and Test-Driven Development in Model-Based Design. I'd like to start the agenda with some definitions of Behavior-Driven and Test-Driven Development to explain what they are. And as you will see, requirements play a very important role in this matter, so we will also focus on some best practices for how to write a good requirement.

And of course, this leads us to the question: how do we apply Behavior-Driven and Test-Driven Development to Model-Based Design? Apart from some generic considerations, we also have an example of how to apply especially Behavior-Driven Development in Model-Based Design. And actually, Test-Driven Development is quite similar to Behavior-Driven Development, as I will point out. So in the end, I hope that you will get an overview of how to apply Behavior-Driven and Test-Driven Development in Model-Based Design.

So, first of all, the problem statement. Maybe you're familiar with this situation: you have some textual requirements that are often incomplete, inconsistent, incorrect, and ambiguous. And that's just one issue. The other issue is typically that testing happens very late in the process, and since, in the end, you don't have much time left for testing, it gets cut a little short.

But the problem with these two issues is, of course, that you discover potential problems in the requirements, or in anything else done or committed in the early stages of your development cycle, too late, which leads to additional overhead and rework. And of course, the question now is: how can we prevent this?

And the Scaled Agile Framework suggests test-first approaches that come with built-in quality. And these are exactly the two approaches: Behavior-Driven and Test-Driven Development.

These approaches are not exactly new. They were not invented by the Scaled Agile Framework; they existed before, and they were also already practiced with Model-Based Design.

This terminology was defined by the Scaled Agile Framework, but the methods have been around for much longer. So maybe you're familiar with them under a different name.

So the question now is: how can Behavior-Driven and Test-Driven Development be applied in the context of Model-Based Systems Engineering, as well as Model-Based Design? You can do it on both levels, the system level as well as the software level.

So, first, some more definitions. I guess you're all very well familiar with Simulink and Stateflow, for example, since you're attending this event, so I won't talk much about these. Important, of course, is that they are even referenced in safety-critical system development standards, such as ISO 26262.

And maybe the biggest benefit is that engineers are able to come up with very complex systems, because here we are focusing on a much higher level of abstraction, as opposed to, for example, directly writing hand-written code. With this higher level of abstraction, we are able to design much more complex systems.

Also, the possibility of system simulation allows you to do some early validation of your requirements. That's another major benefit. And of course, you can generate documents and production code automatically, which saves a lot of effort and eliminates sources of manual error.

So let's summarize those benefits in a benefit hypothesis, in a kind of Scaled Agile Framework manner. Maybe you're familiar with this. The statement would be: "I believe that Model-Based Design with system simulation and production code generation will result in increased productivity and efficiency in large-scale software projects, as measured by correctness of requirements and quality of generated production code."

So now for Behavior-Driven Development: what is Behavior-Driven Development? The Scaled Agile Framework defines Behavior-Driven Development as a test-first agile testing practice that comes with built-in quality by defining, and potentially also automating, test cases before or as part of the system behavior description.

Moreover, BDD is also a collaborative process that creates a shared understanding of requirements between the business and the agile teams. But what does that mean? Requirements elicitation is really a team effort. It requires many roles, several stakeholders, to come together, discuss, and arrive at a common understanding of what needs to be done, why it should be done, and when it will be good enough.

And it's much, much easier to see what behavior is needed if you have some examples in action. With the help of system simulation, this approach of Behavior-Driven Development gives you an effective way to get to a mutual understanding and to a baseline of requirements with clearly defined acceptance criteria.

So again, let's summarize the benefits. Here, the benefit hypothesis would look like this: "I believe that Behavior-Driven Development in combination with the system simulation capabilities of Model-Based Design will result in built-in quality, as measured by the efficient elicitation of correct system requirements and early validation of correct behavior of the systems."

Next, we come to Test-Driven Development, which actually follows the same philosophy as Behavior-Driven Development. Here, too, we have a test-first approach: we build and execute tests before implementing parts of the system.

And the nice thing about Simulink is that it is a powerful environment that allows you to easily create test harnesses; to specify assessments that automatically check pass/fail criteria; to easily mock environments and, for example, provide plant models at a more or less detailed level, depending on your needs; and to reuse the tests later on for the generated code as well.
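The test-harness idea described here can also be sketched outside Simulink. Below is a minimal Python analogue, assuming a hypothetical charger controller under test and a deliberately crude battery stub standing in for a detailed plant model; all names and numbers are illustrative, not taken from any actual MathWorks example.

```python
# Illustrative sketch (not a MathWorks API): a test harness pairing a
# controller under test with a simplified plant stub, plus automated
# assessments checked on every simulation step.

def plant_stub(power_request_kw, soc, dt=1.0, capacity_kwh=75.0):
    """Crude battery stub: integrate requested power into state of charge."""
    return min(1.0, soc + (power_request_kw * dt / 3600.0) / capacity_kwh)

def controller(soc):
    """Hypothetical controller under test: full power below 80% SOC."""
    return 300.0 if soc < 0.8 else 50.0

def run_harness(steps=100):
    soc = 0.5
    for _ in range(steps):
        power = controller(soc)
        assert 0.0 <= power <= 300.0   # assessment: power stays within limits
        soc = plant_stub(power, soc)
    return soc

final_soc = run_harness()
assert final_soc > 0.5                 # assessment: the battery was charged
```

The same tests could later be rerun against generated code in place of the model, which is the reuse benefit mentioned above.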

There are many more benefits, but let's just summarize these in a third benefit hypothesis: "I believe that Test-Driven Development in combination with the executable specification, simulation capabilities, model-level verification, and code generation capabilities of Model-Based Design will result in built-in quality, as measured by verified and correct behavior of models and generated code on a component level."

So where are Behavior-Driven and Test-Driven Development located in the development cycle? As you can see here, Behavior-Driven Development is typically located on the system level, so on a higher abstraction level, while Test-Driven Development takes place on the component level, where you already have more details of the implementation.

    Very important for both of them is that you have requirements, testable requirements, with clearly-defined acceptance criteria. This is key, as you will see in the following slides.

So now, for the best practices: how can I write good requirements? The Agile community has some best-practice recommendations for how to approach the definition of such a requirement, using three steps, starting with a benefit hypothesis.

This is about the why: why something needs to be done, to communicate business- and customer-relevant information. You can formulate it in the form "I believe that some capability will produce some specific outcome, as measured by some specific metrics." That would be a benefit hypothesis, very similar to the ones we have already seen.

To refine it further, the next thing that is very helpful is a user story. It is a statement like, "As a specific role, I want to do something specific so that a certain goal is achieved." This gives some more context information and specifies what has to be done, in combination with the benefit hypothesis.

But there is one more thing missing, which is most important: the acceptance criteria, which later on allow us to test the requirement and automatically decide whether the criteria have been met or not. Here we have the given/when/then scheme: given a certain precondition, when a certain trigger occurs, then an expected result has to be exhibited by the system.

This communicates the quality assurance criteria and defines exactly when something will be good enough, so we are able to decide when we are done with our development at this point.

So let's take an example to see how this works. The example we would like to look at is "The vehicle shall support DC charging of the battery." That's a requirement, but it's very vague. It's incomplete. It has no clear acceptance criteria. So there is nothing testable that we can use at this point.

So we need to do some further refinement. A small refinement would be to add the information about what fast charging of the battery means: that we need to be in the range of 50 and preferably above 300 kilowatts. This makes the requirement a little more specific and clearer, but it's still not testable. So we need many more details.

And that's exactly where the user story and the acceptance criteria help. The user story puts the end user into focus and gives more context, like here in this example: "As an electric vehicle driver, I want to have the option to charge my electric vehicle fast so that I can get additional driving range in a short amount of time."

And of course, the next step would be to provide acceptance criteria, which are now a little longer here because they contain all the different details, like compatibility and physical preconditions, and what has to be achieved: if we are plugged into a charging station, we need at least enough power supply to be able to travel another 100 kilometers.

So these are the acceptance criteria. And once we have formalized them, it's much easier to derive formalized requirements and constraints from this, which we can use further on for testing purposes, for example. We will revisit this example in more detail later on.
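The given/when/then structure of such an acceptance criterion can be written as an executable check. Here is a minimal Python sketch for the DC-charging example; the `FakeChargingSystem` class, its method names, and the one-second threshold are all hypothetical placeholders, not the actual system interface.

```python
# Illustrative sketch: an acceptance criterion in given/when/then form,
# executed against a minimal fake system so the check can actually run.

class FakeChargingSystem:
    """Hypothetical stand-in for the vehicle charging system."""
    def __init__(self):
        self._connected = False
        self._elapsed = 0.0

    def connect_station(self, compatible):
        self._connected = compatible

    def advance_time(self, seconds):
        self._elapsed += seconds

    def charging_power_kw(self):
        # Deliver full power once a compatible station has been
        # connected long enough; otherwise no power.
        return 350.0 if self._connected and self._elapsed >= 1.0 else 0.0

def acceptance_test(system):
    # Given: a compatible DC charging station is connected
    system.connect_station(compatible=True)
    # When: the plug has been connected for the required time
    system.advance_time(seconds=1.0)
    # Then: the charging power must reach at least 300 kW
    assert system.charging_power_kw() >= 300.0

acceptance_test(FakeChargingSystem())
```

Writing the criterion in this form makes the "when is it good enough" decision automatic: the test either passes or fails.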

So the next question: how do we do Behavior-Driven and Test-Driven Development with Model-Based Design? As mentioned before, both follow a test-first approach. That means we need clear requirements and clear acceptance criteria. And this also requires being able to talk about the different interface signals, like the inputs and outputs of a system, that make up the statement of a requirement.

So we need a minimal architecture or design with some signals to talk about in the requirement. Only after this are we able to start creating test harnesses and to formalize or automate assessment criteria, so that we can execute the system for testing purposes and automatically analyze the test results.

So next, I will present a process that you can follow on the system level and on the component level to perform Behavior-Driven and Test-Driven Development. In the first step, we start with phase A, analyzing system requirements. This is exactly about capturing and reviewing requirements the way we outlined before.

After this, once you have your requirements, which already talk about some interface signals, you create an architecture that has some details. You can allocate the requirements to the architecture and review it together with the allocated requirements, to see whether all the interface signals are available, for example.

Next, we have a split depending on the hierarchy level. On the left-hand side, you see Behavior-Driven Development on the system level, and on the right-hand side, Test-Driven Development on the component level.

And the process now looks as follows: as a next step, after you have defined your requirements, again with clear acceptance criteria, and a coarse-grained architecture, you can already start to write test cases, although you have no implementation of the components whatsoever. And it's very typical that in this situation you notice that maybe something is missing in a requirement, or maybe some interface elements in the architecture are missing.

So it cannot be ruled out, and it's actually very typical, that you need to go back to phase A to complete the missing elements and iterate until you really have a complete, consistent set of requirements. So you're iterating here. Just by writing test cases, you're already discovering such issues.

Once you have come up with all the system test cases that you need on this level for the architecture defined so far, next we go to the component level, because now we have a system with, so to speak, empty component boxes. And now we need to continue on the component level.

So, same approach: you start to write test cases for those components that you don't have yet, but for which you have the interface descriptions. You know exactly which signals are going in and which signals are coming out, and the requirements talk exactly about those signals. So you're already able to define the test cases, define the acceptance criteria, and automate this.

And that's what this phase D is all about. By the way, again, very important for a test-first approach: you need to make sure, for both phase C and phase D, that the test cases you have set up so far, together with your automated pass/fail criteria and assessments, fail as long as there is no implementation, for obvious reasons.

Having this situation, we can then continue to phase E to create a detailed component design. And the important point here is that you don't need to complete this step until you're ready for production code generation; you only go as far as needed to make the test cases pass.

At this step you can already stop and worry about all the remaining details later on. Because once you reach this point, you can go back to the system level and just rerun the test cases that you defined for the system level: meanwhile, all the system components have been defined sufficiently that at least the component test cases are already passing. So we now rerun the system test cases, including the rudimentary behavioral descriptions of all the components, and see whether everything still passes on the system level as well.

The main point here is that with this procedure of writing test cases first, we can make sure that we have correct, consistent, complete requirements; that we have some confidence in the design and the architecture; and that we have already ruled out the errors typically made in the previous stages. So we avoid doing unnecessary work later on because we got things right up to this level. Gaining confidence in what we have done so far is the key here.

And we can increase the confidence even further by going to the next step, phase G, and performing rapid prototyping with real hardware to also consider real-time aspects. Only once we've done this do we go back to the component level and continue or finish the steps needed to develop the components.

And here we now handle all the details, for example doing some refactoring, optimizing the components, checking modeling guidelines, and preparing the components for code generation. At this point, you can even add additional test cases, just to increase the coverage, for example, or to test extreme corner cases. This can all be done here. And the result could also be that you're generating code, which helps you in the next step, back on the system level on the left-hand side, to run the system Software-in-the-Loop tests.

So now we're really using Software-in-the-Loop, and we are reusing the test cases we defined in phase C. There is nothing new to be defined here. We're just reusing them, making sure that in phase H we did not introduce an error when we did all these optimizations and refinements.

So again, the whole thing is about gaining confidence in the architecture, the design, and the requirements, to make sure that whatever problems we might have, we encounter them very early. This way, we avoid all the unnecessary work that we would otherwise need to do if we discovered something very late.

So let's take an example here: again, the DC fast-charging example. First, we start with the use case diagram. In that example, we have a driver as the primary actor in this use case diagram. And the driver, of course, wants to charge the batteries fast.

The secondary actor here would be the charging station. And the use case in this example is charge battery fast, which we now want to subsequently refine and define acceptance criteria for, based on the steps that we've seen before. So we start with the benefit hypothesis to answer the why.

And here we take the vehicle manufacturer's perspective, saying that if we have more charging flexibility for our vehicle, it's more likely that we're also selling more electric vehicles. So this is the statement of the why.

Next would be the user story, what needs to be done: "As an electric vehicle manufacturer, I want to integrate DC charging stations with my electric vehicle charging system so that the vehicles can charge fast, safely, and efficiently." So this is what needs to be done.

And now we need the details of the acceptance criteria. When is it good enough? Here we need all the details about the physical aspects: Is the station compatible? What's the battery temperature? What is the state of charge? For how long does the plug need to be connected in order to get to a high charging power of, in that case, at least 300 kilowatts?

So this would be a refined requirement, which is already well suited for the next step of formalizing it so it can be used automatically, for example, for testing purposes. So how can we proceed if we now have, not just this one, but of course many requirements of this kind with well-defined acceptance criteria?

So here you see, again, how we proceed through this process. We analyze the system requirements, and we want to work in a structured way. We suggest doing it like this: we create a MATLAB project using Git for version control, and we create a requirement set like we did before, but now in a formalized way.

The result is a requirements model that we can use for testing purposes. We combine it with a virtual prototype model to set up the system tests or component tests, depending on which level we are on. And this allows us to do some systematic analysis. Let me show you how this works.

So here you see an example. We created a requirement set in the Requirements Editor. You can also do it in a third-party tool, of course, and just import it. We have capabilities for keeping track of changes and doing change impact analysis, which is important in this case. You see that we already have some requirements with user stories and acceptance criteria, for example, exactly the requirement that I introduced before.

In the next step, we define the system architecture so that we can start writing test cases for it. Here, you could use a conceptual model, a virtual prototype. Or, in this case, which made it very easy, we happen to have a nice shipping reference example called DC Fast Charger for Electric Vehicle in the Examples section of our tools. So we start with this one, add some components and the requirements model, and this is the example that we continue with.

    So, next step-- writing system test cases. We have defined the requirements already. Now it's about the test cases. So what is a test case?

We need the system under test, of course. We need the input simulation. And we need the assessments to automate the observation of pass/fail criteria. Especially for the assessments, it's best to formalize the requirements, to put them in a form that we can analyze automatically with our tools, for example, as you will see soon, with Simulink Design Verifier, which can do the job.

Then it looks like this: we have a table that contains all the different requirements, allowing you to keep track of everything in a structured way. We have the summaries, and we have the preconditions with all the details about compatibility, temperature, state of charge, and the duration for which we need to be plugged in until the postcondition has to be met, which is that we then charge at a value above 300 kilowatts. So this is a nice formalization of a requirement that we can use directly for testing purposes, but also for analysis purposes.
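Outside the Requirements Editor, the structure of such a requirement table can be imitated as plain data plus a checker. In this Python sketch, every field name and threshold is an illustrative assumption, not the actual table schema.

```python
# Illustrative sketch: one requirement row pairing a precondition
# (compatibility, temperature range, SOC range, minimum plug-in time)
# with a postcondition on charging power.

REQUIREMENTS = [
    {"id": "REQ-1",
     "pre": {"compatible": True, "temp_c": (0, 45), "soc": (0.0, 0.8),
             "min_plug_s": 0.1},
     "post_min_power_kw": 300.0},
]

def check_row(row, compatible, temp_c, soc, plug_s, power_kw):
    """Return None if the precondition doesn't apply, else pass/fail."""
    pre = row["pre"]
    applies = (compatible == pre["compatible"]
               and pre["temp_c"][0] <= temp_c <= pre["temp_c"][1]
               and pre["soc"][0] <= soc <= pre["soc"][1]
               and plug_s >= pre["min_plug_s"])
    if not applies:
        return None
    return power_kw >= row["post_min_power_kw"]

# A compatible, warm, half-charged battery plugged in for 1 s,
# charging at 320 kW, satisfies the row.
result = check_row(REQUIREMENTS[0], True, 25.0, 0.5, 1.0, 320.0)
```

The three-valued outcome (not applicable, pass, fail) is what makes a table like this usable both as a test oracle and as an object of analysis.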

Talking about the analysis purposes first: as I mentioned, we can use Design Verifier with such requirement tables to check whether they are consistent and complete. Consistent means: are there any contradictions between the different requirement rows? And complete means: are all situations covered? For whatever input values arrive, is it always completely defined how the system should react?

And here you see a possible outcome. In that case, it says we have incompleteness issues, and it gives us one example of a valuation of all those inputs, again compatibility, temperature, state of charge, and so on, that is not covered.

In this example, it's exactly the question of what happens in the first 100 milliseconds. We didn't yet specify what to do in the first 100 milliseconds, before we are charging at a value above 300 kilowatts. So obviously, something like this is missing. And in the process overview, it would require us to go back to phase A to complete the requirements and iterate until we have a complete set of requirements.

In this case, we make it much easier for ourselves. The simple solution: we simply say that otherwise, we don't care. But of course, in your projects you should really come up with a complete set of requirements, making sure consistency and completeness are achieved. And as you can see, you can always easily analyze this with the tool.
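The completeness analysis can be imitated with a brute-force sketch. Design Verifier performs this analysis formally on the model; the Python toy below merely samples discretized input combinations, with illustrative thresholds, and reproduces the kind of gap found in the talk.

```python
# Illustrative sketch: sample input combinations and collect those for
# which no requirement row defines the expected behavior.

import itertools

def covered(compatible, temp_c, soc, plug_s):
    """True if some (hypothetical) requirement row covers these inputs."""
    if compatible and 0 <= temp_c <= 45 and soc <= 0.8 and plug_s >= 0.1:
        return True          # the fast-charging requirement applies
    return False             # no row covers this combination

gaps = [combo for combo in itertools.product(
            [True, False],           # station compatible?
            [-10, 25, 60],           # battery temperature in deg C
            [0.5, 0.9],              # state of charge
            [0.05, 1.0])             # seconds since plug-in
        if not covered(*combo)]

# A non-empty 'gaps' list mirrors the incompleteness finding above:
# e.g. the first 100 ms (plug_s < 0.1) is not yet specified.
```

A formal tool checks all valuations exhaustively rather than sampling, but the notion of "an input combination with no defined reaction" is the same.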

So, next, we have now formalized the requirement. We can set up the test case by using this formalized requirement, with some stubs to set up an easy testing environment. Importantly, in this case we assume the charging power is zero kilowatts, which obviously makes our requirement fail, following the test-first approach: if we're lacking the implementation, it has to fail. So if we run the simulation, on the right-hand side you see the red dots marking the points where the requirement failed, as it should, following the test-first approach.

Now, we're skipping the part about Test-Driven Development. But just to quickly outline it: once we have finished phase C, we would do the same thing on the component level as well, writing the test cases, setting up the test harnesses, and using formalized requirements to set up the pass/fail criteria monitoring, so that we can easily run the test cases, which fail at first. In phase E, we then specify as much behavior as needed, but not more, to make the test cases pass.

Then we already have some behavior for all the components of our system, and we can switch to phase F and rerun the test cases that we specified before on the system level, to see whether they now pass here as well, gaining confidence in our requirements and design, as mentioned before.

So what does it look like? We now have the virtual prototype and some more components that were created in exactly this Test-Driven Development fashion on the right-hand side, as I just outlined. And here we have, again, our requirements model.

The assessments are the same thing you saw before: this table is now wired to the virtual prototype. We can now execute the system tests and see, in the picture on the right-hand side, that the test for the example we are focusing on is now passing. And hopefully, the same is true for all the requirements, which you should have treated the same way.

So, this way, we have now made this pass. This was phase F. And for time reasons, we are skipping all the remaining steps: next going to rapid prototyping, finalizing the components, making them ready for production code generation, and running the SIL tests. I guess you can imagine what this looks like.

So we are now coming to the end, to the conclusion of what we just saw. Basically, Model-Based Design fits very well with Behavior-Driven and Test-Driven Development, as you saw in this example. You first need to refine the requirements, to make them complete, consistent, correct, and, very important, testable.

You need test cases that fail first, while you are lacking the implementation; that's the test-first approach. And this way, you can hopefully catch not just missing but also wrong functionality.

And you would use virtual prototypes and local test benches for components to validate your requirements the way we saw here. The whole thing is about gaining confidence in your requirements, that they are correct, complete, and consistent, and gaining confidence in your architecture and designs before going into the details, before implementation, before doing work that might otherwise be unnecessary because we would need to redo it if we had made a mistake in the beginning. This way, we avoid making the mistakes in the beginning in the first place, and that avoids unnecessary work.

So this is the Behavior-Driven and Test-Driven Development approach in Model-Based Design. If you want to know more about this topic, there's also a white paper with the same title, "Agile Behavior-Driven and Test-Driven Development with Model-Based Design," written by my colleagues Hugo de Kock and Jim Ross. So, have a look there.

It's very similar to the presentation we have seen here, so you might recognize the different pictures and slides. Check it out if you want to know more about this approach. Thank you.

    [APPLAUSE]