Formalizing Requirements and Generating Requirements-Based Test Cases
Systems engineers typically capture requirements using text that can be incomplete and inconsistent, resulting in errors that become exponentially more expensive to fix over time. By formalizing requirements using the Requirements Table block, you can define the expected behavior and analyze the requirements for completeness and consistency before you even begin your design. Furthermore, since the requirements model is independent of the design model, you can generate requirements-based tests from your modeled requirements and verify that your design meets those requirements without needing to manually write thousands of test cases.
Published: 3 May 2023
Hello, and welcome to the 2023 MATLAB EXPO. My name is Dalton L'Heureux and I am one of the application engineers here at MathWorks. I specifically focus on our model based systems engineering and model based design solutions, and in particular, when we start talking about safety critical software development and verification, that is where I most commonly come into play. And today we're going to be discussing a topic, or new capabilities, that are very closely related to safety critical software development: requirements, traceability, verification, that sort of thing.
And the topic for today's discussion or this presentation is going to be on formalizing requirements, or modeling requirements, and how to generate test cases from those requirements. And not just structural test cases, but actual requirements based test cases. All right? So without wasting any time with the hashtag social media stuff that I'm sure every other presenter has been bombarding you with, we're just going to dive right into it. All right?
So let's start from the very top, right? Why model based design? And I'm going to cut out the word design there and say, why model based anything? Why do developers decide to use models in their development process? And the reason is pretty straightforward and simple and most of you who do this probably already get it, right? It's really to save money. There's nothing else to it, right?
Now, how we save money is the interesting thing, but the essence of modeling your design, your system architecture, your requirements always comes down to saving money. What is the value of a model? So here I have a plot that is showing the relative cost of fixing an error based on which development phase you're currently in.
So you can see that if you wait until your product or your final product is deployed into the field and you find an error in your system, it's really expensive to fix that error now, because if you find a problem with your requirements you've got to go back through your development phases. You've got to go all the way back to your requirements, fix your requirement, fix the design, regenerate code, update your test cases, and redeploy. So that iterative work or regression work ends up being very, very costly.
So the point of model based design-- well, sorry, that's my little cue there, right? So the point of model based design is to basically shift where you traditionally find these errors in your system, which is during the testing phase in a traditional V&V process, to earlier on in development during your design phase, where you now have a model of your system, your design, that you can simulate. The key word there is simulate. You get the most value from model based design by doing simulation and early verification activities.
To look at this a different way, we can take a look at this plot from some of our aerospace customers who have adopted model based design and where they found the most value from model based design within their development process. And you can see here that there is some value in the coding phase by using model based design, but there's this common misconception that the bulk of the value of model based design comes from automatic code generation.
And you can see here in this plot that that is really not the case. There is value there, but the bulk of the value is coming from activities done in the requirements development phase and in the testing phase, and these are all verification related activities. The big red bar in the requirements phase comes from having a design model that we can simulate and verify that our design is satisfying our requirements, or that our requirements are specifying the correct thing, and the big red bar in the testing phase comes from the fact that now we have a set of test cases that we use to test the model.
Let's just reuse those test cases to test the generated code. Test case reuse. It's all about verification. It's all about simulation. We're not here today, though, to talk about model based design in the traditional sense, right? We're here to talk about what's new. So how can we shift this error detection even earlier, and what are some other benefits that we can get from modeling our requirements? And I just kind of answered my first question with my second statement there.
So what's going to happen here is if we want to detect errors even earlier and hence save us even more money, we need to start detecting errors in the requirements phase and not in the design phase, and we can do that by formalizing or modeling our requirements. And by modeling our requirements we now have a computer or machine readable representation of those requirements that can be analyzed and we can check for things like completeness and consistency.
Do our requirements conflict with one another? Are there any kind of issues in that sense with our requirements that we can find now before we even come up with our design model? OK? Now there's another hidden benefit here that doesn't necessarily relate to error detection, but the act of writing test cases itself is actually super, super expensive and is often overlooked. Maybe not overlooked, but it's just kind of ignored. You know you're going to have to write test cases, so that's that.
But if we model our requirements and have a formalized requirement model that we can use a machine to analyze and run analyses on, why can't we use that formalized requirement to then generate test cases just like we generate code from our design models? So that's what we're going to be looking at today. All right. So let's go ahead and jump right into it, and I'm going to pause here for a second and jump over to the tools.
All right. So I'm back here in the tools. We're going to start off looking at our requirements for an example. So here we're looking at traditional textual requirements that you would use on any kind of system here. We're looking at them in our tool suite. So this is the Requirements Toolbox, this is Requirements Editor. And you can see that for my system here I've got a series of requirements sets as well as, within those sets, individual requirements.
And if I click on one of these requirements we can get a little description. We can see the links establishing traceability from these requirements out to our design artifacts and to our test cases, which I don't have linked here, but they would show up here. And what these requirements are actually describing are the different operational modes of a quadcopter model that we have, and how it transitions between those modes so that the quadcopter can successfully track and follow a green ball around a room. All right?
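As an aside, if you'd rather script this kind of inspection than click through the editor, the Requirements Toolbox API can do roughly the same thing. This is just a minimal sketch, and the file name quadcopter_modes.slreqx is a placeholder for whichever requirement set you actually have on disk:

% Load a requirement set and list its requirements and outgoing links.
% The file name below is hypothetical.
rs = slreq.load('quadcopter_modes.slreqx');
reqs = find(rs, 'Type', 'Requirement');      % all requirements in the set

for k = 1:numel(reqs)
    fprintf('%s: %s\n', reqs(k).Id, reqs(k).Summary);
    % Outgoing links establish traceability to design artifacts and tests.
    fprintf('    %d outgoing link(s)\n', numel(slreq.outLinks(reqs(k))));
end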
So that's really what these requirements are. They describe a state machine, really. And let's actually go look at the design model just to get a feel for what that looks like, and then we'll come back and we'll look at what these requirements look like as a formalized model. All right? So I'm going to navigate to our design model by using a link to show you how that works. So I'm going to click on one of these links and it's going to bring me to the model element that implements that particular requirement.
And you can see, just like I said, it's a state machine. So this state machine controls the different operational modes of our quadcopter. Again, our goal for the quadcopter is for it to track and follow a green ball around the room. So we have a couple different states here, or operational modes, such as calibration, ready for takeoff, track 3D. So what should the quadcopter be doing when it's actually tracking the ball? What happens when we lose the ball? There's a mode for that lost ball scenario.
So there's all these different modes, and those requirements just define how these different modes get transitioned between. If I click on a requirement, we can see the states and the transitions associated with that particular requirement. All right? Now, I don't have test cases linked here, which is why my verified bar is white for all of these requirements. If I linked test cases to them it would turn yellow until I run them, at which point it would turn green or red.
But the idea here is that I don't want to actually write test cases, which is why it's blank right now. We're going to actually model those requirements that we just looked at briefly in a formalized way and then generate the test cases that we're going to use to test these requirements. Or I should say test the requirements that are modeled, that version of the requirements. All right? So if I go now to our new block, the Requirements Table block, this is the method that we're going to use to actually formalize those requirements.
So, again, we're transitioning through a state machine. So if I go inside this block here, you'll see that I have a lot of different rows. Each row basically represents a requirement, and it defines how a transition shall be taken from one mode to another, just like how our textual requirements are architected. So just for example, I have a requirement here that says, essentially, if my previous mode or my current mode is initialize-- if I'm in the initialization state and I receive a calibration command-- then my postcondition or my expected behavior is that my state machine transitions to the calibration state. So that's how you read this. You would basically author your requirements in a formalized manner. Now once it's in this tabular format, you can then do things like I was mentioning earlier: check for completeness and consistency and then generate test cases. All right?
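Just to make that precondition/postcondition reading concrete, here is the logic of that one row written out as plain MATLAB. This is only a conceptual sketch of what the row expresses, not the Requirements Table API, and the mode and signal names are illustrative:

% Conceptual reading of one row: the precondition gates the check, and the
% postcondition is the expected behavior. All names here are illustrative.
function checkInitToCalibrationRow(currentMode, calibrateCmd, nextMode)
    if strcmp(currentMode, 'Initialize') && calibrateCmd
        % Postcondition: the state machine shall transition to Calibration.
        assert(strcmp(nextMode, 'Calibration'), ...
            'Requirement violated: expected a transition to Calibration.');
    end
end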
So the first thing we're going to do is actually just check for completeness and consistency. We don't want requirements that are inconsistent with one another. We don't want to have some situation occur where we don't know what the system should do because our requirements are incomplete, that sort of thing. So in order to run that sort of analysis, all we have to do is come into this block and hit this Analyze Table button.
And it does take a couple minutes to run, so I already pre-ran this for you, and what you're seeing here are actually the results of that analysis. So if I hit Analyze, I'm going to get a report here that gives me the results of this analysis, and then I'm going to get some highlighting in the table to point to where these issues were found. And this particular example has both an inconsistent result as well as an incomplete result.
So you can see we've got an inconsistency issue, we've got an incompleteness issue, and then the tables you're seeing here are basically the input scenarios or the stimulus to these requirements that caused the analysis to find an inconsistency issue, where it didn't know which path to take, and also to find an incompleteness issue, where it didn't have a path to take based on the inputs of the system. So I'm not going to get into too much detail about how to fix this, but I want to show that it's here. The interesting part of this talk is going to be test generation, not this piece.
But if I look at these lines, I'll just give you a feel for how this works. I basically have an inconsistent result getting flagged here, because if you think about it, if I'm in track altitude, like my current mode is track altitude and I lose the ball-- I've lost sight of the ball-- then my requirement says that I should transition to the lost ball state.
Now there's another requirement that says if I'm in track altitude and I receive this mission mode command, or if my battery voltage drops below a certain voltage, then I should transition to land. So these are two different requirements starting in the same mode, ending in different modes based on different transition criteria. Now, it is possible for me to lose sight of the ball at the same time that I receive a mission mode command of two, or at the same time that my battery voltage drops below three.
And if that's the case, well, which one of these do I take? That's what this inconsistent requirement is flagging. This table doesn't know, right? We haven't specified a priority order here, which is something that often gets overlooked in actual textual requirements. What is the actual order of operation you want these things to occur in? So this is finding that sort of error for you up front before you even have the design model. OK.
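To see why the analysis flags this as inconsistent, here is the overlap written out as plain MATLAB. The mode names, commands, and thresholds come from the scenario above, but the script itself is just an illustration:

% Two requirements share the same starting mode but lead to different modes,
% and nothing says which one wins when both preconditions are true at once.
currentMode  = "TrackAltitude";
ballLost     = true;    % lost sight of the ball
missionCmd   = 2;       % mission mode command of two received
batteryVolts = 2.9;     % battery voltage below three

goToLostBall = currentMode == "TrackAltitude" && ballLost;
goToLand     = currentMode == "TrackAltitude" && (missionCmd == 2 || batteryVolts < 3);

if goToLostBall && goToLand
    disp('Inconsistent: two requirements apply and no priority is specified.');
end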
That's the inconsistency example. The incompleteness one looks very, very similar. It has some set of inputs that doesn't stimulate any requirement, and then the table doesn't know what to do. That's essentially because we didn't specify a default requirement saying that if none of these requirements are valid then you should stay in the same state.
You as the engineer know that's how a state machine is going to behave: if none of the transitions out of a state are valid, I should stay in that state. But this requirements table and your requirements don't know that, so you need a requirement, or a row in this table, specifying that if none of these requirements are satisfied or triggered, then stay in the state that you're currently in. All right. Moving on to the more interesting stuff, because we want to talk about test case generation.
So instead of hitting this analysis button, what we're going to do is we're going to go to Apps. We're going to go to Design Verifier and we're going to select the Test Generation mode. And then we're going to hit Generate Tests, and that's it. And again, I'm not going to run this because it takes a little while. But what it's going to do is it's going to kick off Design Verifier's analysis and it's going to try to generate test cases to test each and every one of these rows and all the condition and MCDC objectives within each one of these precondition, postcondition constructs. All right?
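For reference, the same run can be kicked off from the MATLAB command line. This is a rough sketch, and 'quadcopter_req_model' is a placeholder for whatever model contains the Requirements Table block:

% Configure Simulink Design Verifier for test generation.
% The model name is a placeholder.
opts = sldvoptions;
opts.Mode = 'TestGeneration';
opts.ModelCoverageObjectives = 'MCDC';   % cover condition and MCDC objectives

[status, files] = sldvrun('quadcopter_req_model', opts, true);   % true = show the UI
disp(files.DataFile)   % MAT-file containing the generated test cases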
And since this is a requirements model, I'm going to claim that that is a true requirements based test. I'm generating a test case off of my requirements. All right? So when I do that, what I'm going to see is I'm going to get basically a test file and it's going to get dumped into Simulink Test. So if you're not familiar with Simulink Test, this is Simulink Test.
When I generate tests using Simulink Design Verifier, I'm going to get an sldvData MAT-file that contains all the test cases needed to test the requirements in that requirements table. So you can see it generated quite a few test cases. All right? Then I can configure this test case to actually run those test cases, not against the requirements model, but against my design model.
So here I have my design model specified, and I'm going to use a harness-- we'll look at that in a second-- as a wrapper, basically. So I can define my pass/fail criteria for these test cases within that harness as well. So actually, let me go ahead and pop up the harness here. This is what my harness looks like, and you can see I've got my inputs to my system. These are the same sort of inputs that go into my design model. Here's my design model or my unit under test, and then here is this observer block that I have.
This is going to contain my pass/fail criteria. And my pass/fail criteria, what is it? Well, it's simply my Requirements Table block that we were just looking at. It is my modeled requirements. So I'm going to actually use my requirements-- as soon as this thing opens up-- to verify that the output of those test cases from the design model is actually satisfying these requirements that I've modeled. OK?
And by doing that, we should get true requirements based testing of our design model using test cases that were generated from the requirements model. Was that a lot? Everyone still following? All right. So I'm going to go back to the Simulink Test Manager now and we're going to pretend we run these things. Hit Run, and what's going to happen is we're going to get a set of test results here in the Results and Artifacts tab.
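If you prefer to script this step too, a hedged sketch with the Simulink Test API might look like the following; the test file name is a placeholder for whatever Design Verifier exported:

% Load the generated test file into the Test Manager and run it.
% The file name is hypothetical.
testFile  = sltest.testmanager.load('quadcopter_generated_tests.mldatx');
resultSet = sltest.testmanager.run;    % runs all loaded test files

% Summarize pass/fail for the first (and here, only) test file.
fileResult = getTestFileResults(resultSet);
fprintf('Passed: %d, Failed: %d\n', fileResult(1).NumPassed, fileResult(1).NumFailed);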
And these are going to be structured with all the different test cases that we have. So every one of these iterations is a different test case that got generated, essentially. And you can see that each one is marked with a pass/fail, or there is also a do not care option in here somewhere but it's not showing up because I don't have an example. So it's earmarked as pass or fail, and then we can expand these and see basically the pass/fail criteria.
So these are the verify statements, which are analogous to the rows or the requirements that we modeled in the requirements table. So during this particular iteration or this particular test case, you can see that requirement one was triggered. So the input stimulus triggered that requirement's precondition to be evaluated, and the postcondition behaved as expected. So it passed. Right?
You can see requirement 2.1 is marked as untested, which means that the precondition to trigger that requirement to be tested was never actually hit. So this iteration or this test case wasn't actually testing that requirement, so it didn't bother checking the postcondition. We don't care about that. And then lastly, we have one that failed. So requirement 2.13 ended up failing, which means we did stimulate our table in such a way that we triggered that requirement to become active.
We checked the postcondition or the expected behavior and it wasn't what we had specified. So our design model did not actually do what the requirements model said it should do, so it was marked as a failure. So now it's up to you to go through and figure out what the heck is going on and correct your requirement, your design model, your requirements model. Whatever it is, you need to sort it out. That's your job as the engineer.
So the nice thing about this is you're in the Simulink, MATLAB, Stateflow world. You get all sorts of debugging capabilities to help you work through this. We can see what inputs get dumped into the model. We can log signals or outputs of our design model to see what's going on. There's all sorts of things you can do to try to debug the system. So once you do that and you get all these things passing, you've probably noticed up here that we're getting more than just pass/fail metrics when we're running these test cases.
We're actually getting coverage metrics as well. So the testing metrics or the pass/fail metrics are great and important and needed. You need those to show that your model is being fully tested in the sense that all your requirements are being tested and that you're actually satisfying those requirements, but you also need to make sure that you're actually fully testing all the functionality within your model.
And since you have two models now, you want to make sure you're fully testing all the functionality within the design model and the requirements model, because 100% requirements model coverage to me means that I've fully tested that requirement. So you can see here, we're actually getting both of those metrics for each test case individually, as well as the aggregated result.
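As a side note, the same kinds of coverage metrics can also be collected outside the Test Manager with Simulink Coverage. A minimal sketch, assuming the design model is named 'quadcopter_mode_logic':

% Record decision, condition, and MCDC coverage while simulating the design
% model. The model name is a placeholder.
covTest = cvtest('quadcopter_mode_logic');
covTest.settings.decision  = 1;
covTest.settings.condition = 1;
covTest.settings.mcdc      = 1;

covData = cvsim(covTest);                          % simulate and record coverage
cvhtml('quadcopter_coverage_report', covData);     % write an HTML coverage report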
So here in my aggregated coverage results section, I'm going to click on the top node up here and we'll get the full result for the whole test suite. So here's my design model. We're missing some objectives. We probably missed some test cases that we might have to fill in manually. Or maybe there's something extra in the model that some developer put in that shouldn't be there based on the requirements and we should remove it, so we need to figure out what's going on there.
And then we can also see that we have not quite 100% on the requirements model as well, which means that we did do a pretty good job generating requirements based test cases from our requirements model, but there is something that we missed and we'll have to fill in. All right. And with that, that's it. Much simpler than actually writing the test cases manually, right? So I know this was fast. I blew through it pretty quick, but they only gave me 20 minutes so it is what it is.
But if you want to learn more, please reach out to us. These slides have some additional resources on the back end to help you get started, as well as the ones that I just flipped through that may be of interest to you. So take a look at the slides, take a look at the resources, reach out to us, and thank you very much. I hope you guys enjoyed the presentation and I hope you enjoy the rest of the 2023 MATLAB EXPO. Take care.