Using Model-Based Design for DO-178C and DO-331 Compliance - MATLAB & Simulink


    Overview

    The need for advanced civil and defence systems has been brought into sharp focus. On the civil side, the push for zero climate impact is driving development of advanced propulsion systems and autonomy, while new mobility models, like Urban Air Mobility, are emerging. In defence, geopolitical uncertainty has highlighted the need for next-generation physical and digital systems, and the ability to react quickly.

    In both worlds, complying with certification standards remains one of the most time-consuming activities.

    Learn how to use Model-Based Design to show compliance with DO-178C and DO-331, while using qualified tools to streamline the process.

    Highlights

    • Introduction to DO-178C and DO-331
    • Best practices to consider
    • Model-Based Design lifecycle
    • Tool Qualification

    About the Presenters

    Bill Potter is a Technical Marketing Manager for Aerospace Certification Applications at The MathWorks and has been in this position since 2006. Prior to joining The MathWorks, Bill worked for 28 years in the aerospace industry, including autopilot hardware development at Rockwell Collins, and control system design and software engineering for digital flight control systems at Honeywell Aerospace. Bill was a member of RTCA Special Committee 205/EUROCAE Working Group 71, which developed revision C of DO-178. Bill is currently a member of SAE G-34, Artificial Intelligence in Aviation.

    Vance Hilderman is a 35-year expert in avionics safety-critical engineering. Holding a BSEE and MBA from Gonzaga University and a Master's in Computer Engineering from USC (Hughes Fellow), Mr. Hilderman has focused on safety-critical aviation and avionics software, safety, systems, hardware development, and related technical certification solutions for his entire career. At AFuzion, he created the DO-178C, DO-254, and ARP4754A product line, including templates and checklists, gap analysis, and training and mentoring protocols. Today AFuzion is the world's largest dedicated aviation certification services company, with 60 engineers and 400 of the world's largest aviation companies as clients, plus government agencies including the FAA, EASA, NGA, Transport Canada, MoD, INTA, Bundeswehr, RAAF, RNZAF, and hundreds more.

    Recorded: 3 May 2023

    Hello, everyone. This is Bill Potter, and welcome to the webinar on model-based design for DO-178C. Today, we're going to be discussing how to develop and certify systems using both model-based design and formal methods.

    Let me introduce myself. I'm Bill Potter. I'm a Product Marketing Manager at MathWorks, and I have responsibility for the DO Qualification Kit, which allows customers to qualify our tools. I was a participant on SC-205, which developed DO-178C and its supplements. I'm currently also a participant on SAE G-34, which is looking into how to certify AI systems on aircraft.

    So let me hand it over to Vance Hilderman and let him introduce himself. Go ahead, Vance.

    Good morning, good afternoon, and good evening, everyone. I've got my video on here so you can see me from Los Angeles. It looks like we have 900 registrants. That's a good turnout.

    Modeling is really important. Thanks for tuning in this morning, afternoon, evening. As Bill said, and he's an old friend and colleague, I'm the Chief Technical Officer of AFuzion, and we have a fast-paced model-based development webinar for you here.

    Let's go ahead and turn the page, as we say. On the next slide, we'll be talking about a workflow poll question. So folks, we will have time for Q&A at the end of this webinar in about 40 minutes.

    So start thinking. Where are you at on your current project? It might be automotive, aviation, or safety-critical industrial. Are you starting? Are you getting ready to start? Are you planning?

    Maybe you don't have immediate plans yet. But I promise you, once you see the power of modeling, you certainly will. And absolutely, item D there, of course you're curious. That's why you've tuned in. Let's look at the next slide while you ponder more questions.

    And real quick, many of you probably have heard of AFuzion. We're the world's largest dedicated aviation certification services company, and about 75% of the world's 500 largest aircraft and systems companies use us. If you're in the military, if you fly rotor or fixed wing, almost certainly our software and system safety solutions are in your product.

    We have templates and checklists, and do a lot of gap analysis. We train about 5,000 people in a given year, and it's all on DO-178, DO-254, DO-278 for ground systems, and the ARPs. We'll talk about that. Let's go ahead and look at the next page and dive right on into the technology here.

    Now, slide 4 here is really important. If you look at the top, that's where it all starts. It's that safety assessment, ARP. And that is not American Recommended Practice. That is worldwide, Aerospace Recommended Practice. But it's replicated in automotive, industrial. It's where we assess what's the impact, how safe does it need to be, how much redundancy, what should the design be. And Bill is going to be talking about design models, spec models in just a few minutes.

    And then what's the critical architecture? And then what are our safety requirements, reports, redundancy, switchover, alerts, air management. And then we build the system requirements. That's ARP4754B; the new version is just coming out in June or July of this year, 2023. That's the guidelines for developing the requirements.

    Then we have external advisories, and many other industries replicate this as well. Our focus today, as we see down at the bottom there on the right, is model-based development. And that's governed in aviation by DO-331. That was the special committee that Mr. Potter just mentioned to you.

    This is the ecosystem. It's not a restaurant menu where you simply pick one and maybe dessert. That's my favorite. No, no, this is a nutritious meal that you need all components of this ecosystem. And we have to show continuously that we use it, especially with modeling.

    On slide 5, the next slide, we'll take a look at the old versus new. Now, many of you might be like the guy on the left, buried in paperwork. Well, today, with modeling and the tools that we have, including those from MathWorks, there's no need for that anymore.

    We embody within the tools the model, the requirements, the interface definitions. We can define the architecture. We can iterate and animate, define the system functionality, all the external I/O. We can do trade-off analysis, create traceability, test specifications. And truly, with a push of a button, we can code.

    But you need a model first. Humans make the models. The tools are very powerful for us. The new approach truly is modeling. All of your schedules are getting faster, your budgets are tight, your managers want new product quickly. Modeling is the key to that.

    On slide 6, the next slide, we'll take a look at what we actually need. But remember the development approach. System engineers, perhaps many of you architects, we model before the code and then use that model to generate the code. So models need a formal language with mathematical closure that represents all the I/O. Simulink is a type of formal language that lets us do that.

    Now, think of models to code as code was to the old days, when my hair wasn't gray and we wrote in assembly language. We are getting further from the hardware. So a common practice, remember, in DO, in cars, medical devices, nuclear: you might be Einstein, but we don't want you to prove you are Einstein. You take refinement steps.

    So in the old days, we first had high-level requirements, then low-level. But today, with modeling, it's a spec model, a specification model: the external I/O, the system functionality. We refine and iterate that within the tool, Simulink. And then we create the design model, and Bill's going to show you how to do that. But it's a two-step approach with evidence.

    On the next slide, slide 7, you have some interesting considerations, many advantages to model-based development. Instead of different languages between system engineers (English, French, German, Italian, British, that foreign language) and engineers using code (C, C++), we use a common language. This reduces the number one cause of errors, which is assumptions. We can also animate, execute in a simulated environment, the actual requirements. We have much better reusability and platform flexibility.

    And remember the costs: in certification, half of the costs are in certification itself. But if you don't change the plans, the requirements, the design, if it's truly reusable, all you do is retest, review the tests (those are automated), recertify. So reusability is the key, and modeling gives us that. We can really respond quickly to requirements changes, and again, that's the sound of automated code generation.

    Now, slide 8 has a couple of additional advantages. The modeling really lets us manage complex system development. We can separate concerns, domains, and develop incrementally. Have you heard of agile? Well, safe agile really does work.

    Better communication, better design quality. You've all seen projects with months or years of integration? Not necessary. If you're building a railroad to meet in the middle, start in the middle and work out instead; fewer assumptions. So you can have better risk reduction, verification, validation, requirements that are correct. That means you get fewer changes, faster integration, and you can prototype, again using agile.

    Slide 9, the next one, gives us a little picture of what we need. Now, we love modeling, but remember, we don't always love the engineers that model. Who do we trust in safety critical? No one.

    Engineers make the model and that means we need modern tools. You got it. Bill's going to show you in 45 seconds here, I promise. Well, those tools need to follow a standard.

    So John and Mary, when they create their model, we cannot tell if John did it or Mary. They use a standard. We use that standard to evaluate the model.

    Models must have requirements. Did the model satisfy the requirements? Models are going to have configuration items. That makes them portable, reusable, actually useful. We're going to use element libraries and define the system interfaces, and all these have to be in writing, but the tools help us.

    We're going to use a configuration index. So through the life of that product-- I'm holding up my Apple iPhone here. Yeah, they change every 6 to 12 months, right? Well, can Apple replicate the new version from the old version? What's that configuration? Can we trace it? The modeling tools really help.

    And then finally, what's the environment that we build the model from, the versions, the tools, the libraries, a user manual so someone 30 years later-- that's right-- through the life of an aircraft, in that case, can replicate that. So DO-331 is a terrific read. It talks about safe modeling. It's applicable to automotive, space, industrial control.

    Let's go ahead now and look at an actual example from Mr. Potter. Bill, go ahead and take it over right here on this slide.

    All right. Thanks, Vance. I did want to put up our second poll question, and this is basically asking you, how familiar are you with model-based design? Are you already using it? Are you looking at using it?

    Or maybe you haven't looked at all. But please answer the poll question in the background. Again, we'll take questions at the end of the presentation.

    What I want to do now is show the DO-331 development stages. In DO-331, table MB-A2 lists the development activities that you go through. And starting on the left side here, you always start with system requirements. The system requirements get allocated to software in this case.

    Then you have high-level software requirements, which could be textual or could be a specification model. And I'm going to talk about those two different examples here. Specification models certainly aren't required, but they are possible.

    Then in the software design stage, this is where you have a design model. And this is the most common usage of Simulink, by the way, is for design models. From design models, you can automatically generate the source code. And then, of course, you compile that source code into your executable on your target and you have your running software. So this is the development lifecycle.

    It does look like a waterfall. But as Vance said, you could use agile or spiral. There's other different techniques, but you do have to deal with the transition criteria between these various stages.

    Now, on the verification side, if you look at the software, it has to be tested and you have to do some coverage analysis. So this is covered by tables A6 and A7 in DO-178 and DO-331. For your source code, you typically have to do reviews and analyses of your source code. That's table A5.

    For your design, which in this case is in models, you can use review, analysis, and simulation. Simulation is an activity that's enabled by model-based design, and you can get credit for simulation for some of your verification activities, thus reducing the amount of reviews and analysis you would have to do on models. And then for your software requirements, typically that's also a review and analysis activity. Could be slightly different based on whether those requirements are textual or done by specification models.

    So now let's look at the first tool here from MathWorks. We're going to be talking about the Requirements Toolbox and how you might use that to do textual requirements. So with our tools, we have the Requirements Toolbox, and this Toolbox has an editor in it and allows you to write and configure textual requirements. And then from these textual requirements, you can link them up to system requirements, you can link them down to models, you can link them down to test cases. So it gives you the ability to do your traceability that's required as well as authoring your requirements.

    Now, if you do these requirements textually, you typically have to review them for consistency, completeness, correctness. And so one of the things we do, we ship within our DO qualification kit some templates that have some review checklists that you can fill out and do a review. And then finally, you will archive that review checklist as part of your certification evidence. We can also generate a report in the form of a document of the requirements from the Requirements Toolbox. This is also something then that you can archive away as part of your design data and we qualify the generation of those reports.

    So an alternative to that is to use a requirements table block for a specification model. Again, this is an alternative. It's a way of using spec models instead of textual requirements for your high-level requirements. We have this Simulink block that comes with the Requirements Toolbox called a requirements table.

    And in that table, you can put in some preconditions, some actions. So basically you can define for this set of inputs, my system is supposed to take this action. So it's a specification telling you what you should do, not how you should do it.

    And we can actually generate a report from this specification model, again, a design description report, and that generation of that report is qualified. So you can have this specification in the form of a document that you can archive off. People can look at that without having, actually, a MATLAB or Simulink license.

    One of the advantages of having the specification in the form of a model is that you can do some analysis on it. So for example, with Simulink Design Verifier, we can find inconsistencies and incompleteness in the requirements automatically using a tool. So it can find things where maybe I have a missing requirement or maybe I have conflicts in my requirements. So we can automate that activity, which is an advantage over textual requirements.
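The two properties Bill mentions, completeness and consistency, can be sketched in plain C. This is a hand-written illustration of what a requirements-table analysis checks, not tool output; the temperature ranges, action names, and requirement rows are all invented for the example.

```c
#include <assert.h>

/* Hypothetical requirements-table spec: for each input range, exactly
 * one action. A formal tool such as Simulink Design Verifier proves
 * the rows are complete (every input matches a row) and consistent
 * (no two rows overlap). */
typedef enum { ACTION_HEAT, ACTION_HOLD, ACTION_COOL } action_t;

action_t select_action(int temp_c) {
    if (temp_c < 18)  return ACTION_HEAT;  /* row 1: temp < 18        */
    if (temp_c <= 24) return ACTION_HOLD;  /* row 2: 18 <= temp <= 24 */
    return ACTION_COOL;                    /* row 3: temp > 24        */
}
/* If row 2 instead read `temp_c <= 17`, the inputs 18..24 would match
 * no specification row: exactly the kind of missing requirement the
 * analysis flags automatically, before any review happens. */
```

The human review then focuses on whether the rows are the *right* requirements, which no tool can decide.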

    But even then, you still have to do some manual review because you're going to have to verify that your requirements are correct. The tool can't do that. So there's still going to be a review activity you do with a checklist and then you can archive that requirements review checklist as well.

    Now, moving on to the software design, this is where Simulink comes into play. I can create a model in Simulink, do my design in Simulink, and I can trace that Simulink model up to the requirements. And it doesn't matter whether the requirements are textual or a specification model. Either way, I can link up and trace my design back to my requirements using our tools.

    Also, we can generate a design description report from models that completely documents the model, again, in the form of a document, maybe a Word document or PDF that you can archive off your design. And again, that gives you the advantage. Someone can audit that design without needing a Simulink license.

    Now, in order to verify the traceability of the model, that it's correct, a human does that traceability. So you still have to do some manual review of the models, for example, the traceability review and the interfaces that are going to go to your other software components. Those are things that you cannot automate verification for, so there are some manual review activities. But as I stated earlier, we're going to use simulation and automated analysis to do a lot of the verification on the model. Again, you would archive your completed review checklists for the models to have that artifact for your certification evidence.

    Now I'm going to talk about two other tools, Simulink Check and Simulink Design Verifier, for doing static analysis on the model. And basically, you're going to start again with your design model. We have a couple of apps here from the model, the first being Simulink Check, which has the Model Advisor functionality you can see here. This is a qualified tool and it checks conformance to standards.

    It automates this process. We ship modeling standards with Simulink and this tool automatically checks the model against those. So you don't need to manually review that the model conforms to the standards. That process is automated. And of course, you then archive off that standards report.

    Now, a second tool we have is Simulink Design Verifier. This checks accuracy and consistency. It actually uses formal methods. We did mention formal methods earlier. We do have formal methods tools at the model level and at the code level. This tool is qualified.

    This will find things like potential runtime errors. It could find dead logic in your model, things that would lead to unreachable code. It could find numeric overflows, out of bounds arrays, things like that. Again, this is a qualified tool. The design error report you get can be archived off as part of your cert evidence.

    Then we move on to the primary method for checking models, and that is using simulation. So for this activity, we have Simulink Test and we have Simulink Coverage. So you can author simulation procedures. Part of that authoring can be done in test harnesses. This happens to be a test harness for a model. It's using the test sequence block.

    You can also automate execution of the test cases and reporting using the Simulink Test Manager. This Test Manager works in conjunction with Test Harnesses to be able to verify models and generate reports from those. And also you can then review the test cases and procedures. Because these test cases and procedures are authored by the human, you still have to verify that those test procedures and cases are correct, so you use a review checklist for that. And that completed review checklist is used as your cert evidence.

    Now, when you run the simulation, it also records model coverage. And executing the test cases, checking the pass/fail, and producing the coverage results, those things are qualified. So we have qualified tools for doing that activity, and that provides you a simulation results report and a model coverage report that you can archive off as your evidence that your simulation cases passed.
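To make the coverage idea concrete, here is a toy sketch of what decision coverage measures: every outcome of every decision must be exercised by at least one test case. The hand-written counters stand in for the instrumentation a tool like Simulink Coverage inserts automatically; the function and counter names are invented.

```c
/* Illustrative decision-coverage instrumentation (hand-written). */
static int cov_true  = 0;  /* times the decision evaluated true  */
static int cov_false = 0;  /* times the decision evaluated false */

int saturate(int x) {
    if (x > 100) { cov_true++;  return 100; }  /* true branch  */
    else         { cov_false++; return x;   }  /* false branch */
}
/* Full decision coverage needs at least one input above 100 and one
 * at or below 100; a single test case leaves one branch uncovered,
 * and that gap is what the coverage report exposes. */
```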

    Now, for source code, we've talked about this numerous times here, about being able to automatically generate the code. So Embedded Coder is used here; it can generate C or C++ code. It's very well commented.

    It's traceable up to the design model. We have comments in the code that trace it up to the design model. It's very readable. It's customizable as far as interfaces and parameter names and things like that.

    So this comes directly from the model. This process is automated. So you don't have to write manual code. And of course, you end up with your source code, which goes into your source control system as your artifact for source code.
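To give a flavor of the commented, traceable style being described, here is a small sketch. This is not actual Embedded Coder output; the block paths, requirement ID, and gain values in the comments are invented for illustration only.

```c
/* Sketch of readable, traceable generated-style C code.            */
/* Model step function. Traces to: Controller/PI  (REQ-HLR-042)     */

static double integrator_state = 0.0;  /* state of '<Root>/Integrator' */

double controller_step(double error, double dt) {
    /* DiscreteIntegrator: '<Root>/Integrator' */
    integrator_state += error * dt;
    /* Gain: '<Root>/Kp' plus Gain: '<Root>/Ki' on the integrated error */
    return (0.5 * error) + (0.1 * integrator_state);
}
```

The comments linking each statement back to a model block are what make an auditor able to walk from code to design model and up to requirements.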

    Now, for verification of the code, we actually use Simulink Code Inspector and Polyspace Bug Finder for doing source code verification. So rather than having a qualified code generator, we actually use verification tools to verify the code for models. This activity covers compliance to the requirements, traceability, accuracy, consistency, verifiability. These are all things that Simulink Code Inspector is actually qualified for verification.

    So it does a comparison between the model and the generated code and verifies the correctness of that. And we get a code inspection report out of that. That's your artifact that shows you comply with table A5.

    Then for conformance to standards, we have Polyspace Bug Finder. Again, this is a qualified tool. We support MISRA coding standards, and so the tool verifies the conformance to the MISRA coding standards.

    This is very much like the Model Advisor checks for the model. You have model standards; we automatically can check the conformance to those standards. And here we have generated code, and we can automate conformance-to-standards checks with Bug Finder. And again, now you get a code standards report out of here.
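As a small illustration of the kind of finding such a checker reports, consider implicit type promotion in a shift. The rule paraphrase below is indicative of MISRA C's essential-type rules, not an exact citation, and the function is invented for the example.

```c
#include <stdint.h>

/* Non-compliant sketch (the kind of code a checker flags):
 *     unsigned short f(unsigned short v) { return v << 4; }
 * The shift silently promotes v to int, and the int result is
 * returned without an explicit cast.
 *
 * Compliant version: fixed-width types, explicit widening, and an
 * explicit cast back to the narrow result type. */
uint16_t shift_flags(uint16_t flags) {
    return (uint16_t)((uint32_t)flags << 4u);
}
```

The behavioral point: high bits shifted past bit 15 are deliberately and visibly discarded by the cast, rather than by an implicit truncation a reviewer might miss.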

    Then we have Polyspace Code Prover. Now, Code Prover is also a formal methods tool, just like Simulink Design Verifier, and it can do checks for robustness of the generated code. It can also do data flow analysis on the code. So this covers tables A5, A6, and A7.

    So for accuracy and consistency from the model, we generate the code. And we have the Polyspace app here that it will look at the generated code and it's qualified as a formal methods tool. And it can show that the object code is robust so you can actually eliminate some of your traditional robustness testing by using formal methods instead. And it also will generate a data coupling report, which will show any coupling issues between global variables, for example, that you pass through your software architecture. So it's able to do some data flow analysis as well as some robustness checks on the code.

    And again, you get two reports out of this. You'll get a runtime error report, so it could tell you, do I have divide by zeros, do I have overflows. If your code is good, you'll have green. If your code has a bug, it will turn out red. If you have dead code, it will come out gray.

    If it's orange, it means you may still have to run a test for that particular case. And the shared variables report will tell you things like if you have a read before write, if you have uninitialized variables, if you have any kind of conflicts between different processes in your software. So it's able to detect those types of errors as well.
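The runtime-error classes mentioned above can be shown in miniature. A formal tool like Polyspace Code Prover proves that operations like these can never fault (green) or flags them (red/orange); the guards below are illustrative, and the function name is invented.

```c
#include <limits.h>

/* Two classic integer faults a sound analyzer looks for:
 * division by zero, and the INT_MIN / -1 overflow case. */
int safe_divide(int num, int den) {
    if (den == 0) {
        return 0;                 /* divide-by-zero guard */
    }
    if ((num == INT_MIN) && (den == -1)) {
        return INT_MAX;           /* INT_MIN / -1 overflows a 2's
                                     complement int, so saturate */
    }
    return num / den;             /* provably safe after the guards */
}
```

With the guards in place, the division is provably safe, which is exactly the sort of claim that lets you take credit against traditional robustness testing.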

    Then the next step that we're going to go to is testing the actual object code and doing structural coverage analysis on the source code. Simulink Test and Simulink Coverage, which we saw previously, could be used to execute simulation cases. Now we can take that same test harness that we ran simulation cases on, and in fact the very same simulation cases, and reuse them to test code.

    So here I show a test harness. The difference between this test harness and the one we saw for simulation is this model block here: instead of running the model and evaluating the model, it is now tied to a development board running the actual code. So when you run this harness, you're actually communicating with a processor, stimulating inputs, getting outputs back, and you're actually testing your code in this case, instead of testing the model.

    And you can use the Test Manager to run this test in PIL mode. This does your table A6 verification of the executable. It also instruments the code for structural coverage, so you will get code coverage in conjunction with the actual pass/fail test results. Both these tools, Simulink Test and Simulink Coverage, are qualified for doing the pass/fail checking and for doing the coverage analysis on the source code. And so you will get out a test report as well as a code coverage report from the tools that you can archive off.

    Now, one thing that PIL testing does not cover is your hardware/software integration testing. PIL mode is looking at more a component-level test, where you want to be able to have very fine grain control over your stimulating your test cases, being able to get all your structural coverage on your complex system. But in the end, you still have to do integration testing. And looking at your system, here I'm showing a system view of a controller model and a plant model.

    In the end, you want to get out to a lab and you want to actually test your hardware unit with all your software running in real time. So you're going to generate code from your controller model. That may consist of many, many models, and a lot of code gets loaded onto an embedded target. And you're going to actually want to test that target with all its device drivers, its operating system, everything running in the lab.

    One of the other things we can do with Simulink and our code generation technology is this: if we have a plant model in Simulink, we can actually generate code for that plant model and load it onto Speedgoat hardware, which also has hardware boards with bus interfaces, analog interfaces, any kind of I/O you'd like to configure, and we can run that plant model in a simulation environment in a lab. And then we connect these two together through wiring and signal conditioning, and we're running our real box in the lab with a plant model, to be able to do a system-level or hardware/software integration-level test.

    And the nice thing about this is we can actually connect Simulink Test to the Speedgoat hardware and actually record the test results and verify the pass/fail checking, just like we did for models, just like we did for PIL mode. Now, these tests will be slightly different than the test cases for the PIL-mode models, because we're actually stimulating real hardware here. So it's a higher abstraction level of test in this case, but we can still use the qualified tool to do this. And then we'll get a test results report, again, that you can archive off as your cert evidence.

    Now, finally, I want to talk a little bit about tool qualification. DO-330 is the tool qualification supplement, and we provide a qualification kit that allows you to qualify various tools. And you can see here all the tools I talked about: the requirements report generator, Simulink Check, Simulink Design Verifier, Simulink Test, Simulink Coverage, Simulink Code Inspector, and the Polyspace tools can all be qualified. All these tools are TQL-5, except for Polyspace Code Prover, which is TQL-4 because it takes some extra credit for eliminating some robustness testing, and therefore it gets qualified to a higher level.

    One other tool I didn't mention specifically on any slides is the Simulink model compare tool. We do have a tool that can do model comparisons between two different versions in your CM system. We also qualify that differencing tool, because models are fairly complex; it's not a simple text comparison like you would have for documents or code. So we do qualify that tool as well as part of our tool qualification.

    What are the artifacts that come with the kit? Basically, the artifacts that come with the kit are those called out by DO-330. So we provide for each tool a tool qualification plan and a tool operational requirements document. Those are all required.

    We have test cases and expected results and procedures that you can run in your installed environment. One of the requirements for tool qual is you have to execute the tool qualification test in your installed environment. That's one of the mandatory things called out in DO-330. So we provide the test cases, procedures, and the automation to run those.

    And then for formal methods tools, such as Polyspace Code Prover here or Simulink Design Verifier, we also provide formal methods justification, which basically documents that the formal method is sound and you can justify its use. This is a requirement from DO-332, that you have to do this for tool qualification.

    For our next poll question, we'd like to know: do you need to comply with the security standard DO-326? We're looking for input on this, to see how many of our customers have to go through this. So now I'm going to hand it back to Vance to do the conclusion here. I think you're on mute, Vance.

    Terrific. Thanks, Bill.

    Sure.

    Still early here in Los Angeles. I needed that fourth cup of coffee. Right?

    Right.

    That was a terrific walk through a fast-paced topic, model-based development and tool qualification. So to conclude, let's remember that model-based development truly is real. So many aircraft, probably half of the 30 or so projects in a given year that we work on at AFuzion, are using model-based development. That's really up from just a few years ago. The productivity, the efficiency, and most importantly, are you really making software and systems that are meant to be used one time? No, you're not.

    Look at this iPhone, more processing power than the entire Apollo moon program. Well, it's about reusability. Apple's not reinventing everything, and neither are Samsung or Huawei. You should not be either. Tools like Simulink really enable that compliant DO-331 modeling.

    And 331 is an aviation guideline, or standard as some say, but it's really the basis, the foundation for automotive, industrial control, medical, nuclear, trains, everywhere. And in aviation, thousands of aircraft are using model-based development today.

    Now, Bill mentioned a two-word phrase, cybersecurity. It's really important and we have a special guest I'd like to introduce just for a moment. We're going to be doing a webinar, so stand by. There's some really cool-- we cannot talk about them, but really cool releases coming up in the next couple of months, maybe while you're on summer vacation.

    Well, cybersecurity is one of those. So MathWorks and AFuzion, in September-- get your calendars ready. When you come back from summer break, in September, we have a special webinar we'll invite you to. It'll be free, like this one. It's going to discuss some new developments that are really important to you for cybersecurity.

    Aaron, are you with us this morning from the Middle East?

    Well, this is the East side of the pond.

    Tell us a little bit about cybersecurity, what you have planned for September, just the one-minute sneak preview, if you will.

    Yes. Well, thank you very much. That's fascinating. I won't take too much of your time. You need to do some Q&A and some more.

    Well, just a sneak peek at our upcoming webinar in September, as Vance introduced. We are going to make you aware of the new paradigm. Whereas the current paradigm does not take into account any sabotage (in fact, sabotage is excluded in the current paradigm by the FAA and EASA), the new paradigm, which is already explicitly mandatory in Europe and mandatory without saying so in the United States, is that anything can be tweaked, anything can be hacked. And the solid ground that we have come to know in safety-critical systems is now shaking.

    So we will take you there into this fascinating world of smoke and mirrors. But we will not just scare you; we will also propose some solutions with our terrific partners from MathWorks and us, AFuzion. We'll show you how to cope with the cybersecurity scare and, mostly, how to keep certifying yourselves under the new mandates of DO-326 in the United States and ED-202 in Europe, a new ecosystem that includes cybersecurity on top of ARP4754, DO-178, and all the other usual suspects.

    I hope this will be interesting enough for all of you to join us in September, and bring all your friends along. Thank you very much, Bill and Vance. If you have any questions now, please put them forward. If not, just hold them until September.

    On the last slide, you saw Bill's email address, BP@MathWorks, and my info at AFuzion. Just ask us those questions. There are going to be many thousands more people watching the recording. And in a few days, next week, you'll all get a link to the recording from MathWorks.

    So it's been a pleasure having you all here. And Bill, great seeing you again.

    Yep. Thanks a lot, Vance. Thanks a lot, everyone, for watching the webinar.
