Introduction to Model-Based Design for DO-178C - MATLAB & Simulink
Video length is 1:03:14

Introduction to Model-Based Design for DO-178C

Overview

In this first webinar in the series, you will learn about the Model-Based Design workflow for DO-178C.

The MathWorks team will take you through an example of a Model-Based Design software development workflow, including a high-level discussion of each step in the process and the corresponding tools involved.

Highlights

  • Qualified software development and verification processes
  • Introduction to tool qualification
  • Model Verification
  • Code Generation
  • Formal Analysis and Test Generation
  • Software Testing

About the Presenters

Gaurav Dubey, Principal Application Engineer, MathWorks

Gaurav Dubey is a principal engineer on the application engineering team at MathWorks India and specializes in the fields of model-based systems engineering, model-based development workflows, automatic code generation, verification and validation, and certification. He closely interacts with customers in different domains to help them use MathWorks products for model-based development and model-based systems engineering. Gaurav brings more than 17 years of experience in embedded system development. Prior to joining MathWorks, Gaurav worked with Tata Motors Limited, where he gained hands-on experience in engine management system ECU development. He has also worked as a software analyst at Tata Consultancy Services on different automotive projects involving model-based development. Gaurav holds a master’s degree in instrumentation engineering and a master’s degree in electronics and communications.

Satish Thokala, Industry Marketing Manager, MathWorks

Satish Thokala is the Aerospace and Defense Industry Manager at MathWorks. He has around 19 years of experience in teaching and in public and private aerospace establishments, including Hindustan Aeronautics Limited and Rockwell Collins. In his current role, he is responsible for analyzing technology adoption in the aerospace and defense industry and developing strategies to increase the adoption of Model-Based Design with MATLAB® and Simulink®. His area of expertise is avionics systems for both military and civil aircraft. Early in his career, Satish contributed to the design and development of communication radios, and their field trials, for Jaguar and MiG fighters. He led large engineering groups developing software for cockpit displays and engine control, and participated in DO-178 certification audits.

Recorded: 16 Sep 2021

Good morning, everyone and welcome to the first session of model-based design for DO-178C software development webinar series. I am Gaurav Dubey. I'm working as a principal application engineer, joined by Mr. Satish Thokala, who is an aerospace industry manager. We both work in the MathWorks lab based in the Bangalore office.

In today's session of this webinar series, we are going to see what standards exist in the aerospace market, and how the model-based design process can work for DO-178C software development. Before we start, let's go through a few housekeeping points for this session. If during the session you have questions about the content and the message that we are delivering, please post your questions in the Q&A window. We will be taking up these questions either during the session or after the presentation is done.

If your questions are confidential, you can direct them to the host instead of making them public. Throughout the session we'll be having multiple poll questions, so keep an eye on the poll questions towards the right-hand side of your Webex window, and please do answer them. So, to recap: during the session, please post your questions in the Q&A window, and look out for the poll questions, which will be appearing on the right-hand side of your Webex window.

And do answer them. It will help us in understanding your process and the workflow that you are using. There will also be a few useful links pasted in the chat window, so keep an eye on that as well. Now, before we start, I have a very quick question for Satish. So, Satish, DO-178C is a huge topic in itself, right? It cannot be covered in one hour, for sure. So could you please throw some light on how we are going to cover this complete DO-178C model-based design workflow throughout our webinar series?

That's right, Gaurav. Thank you for asking that question. As you rightly pointed out, this is a huge topic, right? We could spend hours and days talking about DO-178C, and how to use model-based design to achieve certification for DO-178C, or, for that matter, any other standard, be it DO-254 or ARP. That's why we have planned to cover it through a series consisting of four sessions. So in the series, as you are seeing on the screen right now, we'll start with the intro, as we are doing today.

Then we'll go towards how to verify and validate as per DO-178C. Then we have one dedicated session to talk about how to generate production-quality code while assuring code quality, using the tools that are offered by MathWorks. And in our last session, we are going to focus more on the tool qualification aspects, right? The standard says it's important, right? So then, how to achieve that tool qualification using the qualification kit that MathWorks offers.

So the agenda is planned as four sessions, right? And you can see this is spread over two months, starting with the first session today, and the last session on the 10th of November, right? So please plan to register for the respective sessions, those of you who are interested. What we are going to do in this series, specifically, across all four sessions, is show multiple examples, starting from textual requirements, through the development process, all the way through verification of the object code on the target processor, right?

So in respective sessions, we will try to bring in relevant examples. So that's how it's planned, Gaurav, for this series.

Thank you, Satish. It looks quite promising. So before we jump directly to the DO-178C workflow, it would be great if you could explain a little bit about what related standards exist in the aerospace industry, and how model-based design is being adopted by the different industry leaders across the globe.

Great point, Gaurav. So DO-178 and DO-254, at least looking into my experience, these are the standards that I started hearing about since I started my career in this industry, close to two decades ago by now, right? These are the two standards I was hearing about a lot. Basically, these two standards provide guidelines for software and hardware item design. But then, slowly, I also started learning about another important standard, ARP-4754, which basically addresses the systems engineering aspects of aircraft certification, including system requirements, requirement validation, system design, and system verification.

What I have seen, looking back into my experience, is that this is actually made more difficult because different approaches are often used. After I joined MathWorks, I started working with customers who describe their architecture using Visio, Excel, or some other SysML tool, while Simulink is often used for the model-based design work, towards the right side of the V-cycle. The challenge here is how to bridge the gap between the systems engineering that you are doing and the detailed design that you are doing with Simulink.

So that's where we worked with specific customers worldwide, and we helped them with custom integration and glue code. Part of the reason why this actually looks a little bit complex is because different modeling approaches are used to achieve certification, and even, in fact, to do the design. Another issue is related to the XML interchange format defined at the system level: it has some issues in terms of how it is defined and how it is implemented by various system-level tools.

Now, for anybody who is listening to this webinar, if you are specifically interested in ARP, we can support you with our System Composer tool. But the focus for this particular webinar is on DO-178C. Another thing I have seen, Gaurav, is that satisfying the objectives of any of these standards, be it ARP, 178C, or 254, is actually time consuming. It is also expensive, because it requires rigorous and well-documented activities throughout the process, starting from requirements, architecture, development, verification, and certification, including tool qualification. Right?

So this is the high-level view of the different standards in this area. But let me continue to build on top of this and throw some more light specifically on DO-178C, which is the main topic of this webinar. As you may already know, DO-178C is a newer standard which has replaced DO-178B, which was in operation for, I would say, over 20 years. The purpose of the new standard is to give guidelines for the production of safe avionics software.

When I say safe avionics, what I exactly mean is that our software should perform all the intended functionality, and only that functionality, right? So what is new in DO-178C? Based on my understanding and my experience, I see that the 178C standard had to fill two fundamental gaps in DO-178B. The first one is clarifying what is possible in the safe use of some technologies that became established after the release of 178B. One of those technologies is model-based design.

Even those who developed software according to the old DO-178B standard used models. But the use of models was not really explicitly foreseen by the 178B standard, so there were long negotiations with the certification bodies, right? The DO-178C standard makes it very clear that you can use models. You can use modeling, simulation, and automatic code generation. This is point number one.

The second new point of DO-178C is how it is connected to the systems engineering standard ARP-4754A, right? Which means our software is basically an object that lives in a much larger system, in this example, the aircraft, right? Then there is the question of how to adopt new technologies that have emerged during the existence of 178B, and that is actually captured with four supplements to DO-178C.

So as you see, the four supplements on your screen are: DO-330, which covers tool qualification considerations; DO-331, where we are going to spend a good amount of time in today's webinar and the next three parts of the series, which is specifically about model-based design, that is, how to do development and verification for DO-178C; DO-332, on object-oriented technology; and DO-333, which talks about the use of formal methods as a supplement to 178C, right?

So at this point, I'll make a request to bring the first poll question onto the screen. Poll question 1. OK, I hope everybody can see the poll question on your screen. Please respond to that. If you are working towards more than one standard, please do select multiple options.

I would like to give a few more seconds for everybody to respond. Maybe we can close 5 seconds from now. Can we close it now?

Yes, Satish, I think we can continue. It's done. So, as you can see, a lot of people are not working only on 178C. There is a large set of people working on the other standards, like DO-254 and ARP-4754, and focusing on those. But at the same time, we have observed that, in the industry, there are a lot of domestic and in-house standards being used.

OK.

These are derived from the well-established standards, but are still very much customized for in-house use.

Exactly.

Yeah.

Yeah, good, good, that's aligned with what we were expecting. That's good. Thank you, everyone, for your responses. So of course the first three are well-recognized, globally known standards. At the same time, in the defense world, every country has its own standard, either developed in-house by the defense agency or specific to that particular country. In most cases, that could be a subset of any of the above three standards.

So if you are working on an in-house, or domestically developed, standard, we can support you. We can actually help you show how model-based design maps to your specific standard, assuming it's some kind of subset of one of the universal standards, right? So right now, what you are seeing on the screen is one such example, a mapping exercise that we have done with NASA and ESA.

These two examples come specifically from the space industry, where, fortunately or unfortunately, we don't have any single globally recognized standard. Every country and agency has their own standards, right? And now that standardization is evolving in space, as agencies are talking more about space collaboration, right?

But right now, at this point of time, we have done this mapping work with NASA and ESA, to show how model-based design is aligned with the in-house standards that they have developed. So now let's start talking about why model-based design for these kinds of certification activities.

So if we try to apply the process of DO-178C to the V-cycle, we can quickly see that model-based design fits well to address various challenges faced in the life cycle, right? For example, starting with capturing the requirements, then extending that work towards architectural drawings, then going for detailed modeling and simulation, and automatic generation of production-quality code, and finally moving towards verification, validation, and achieving certification.

So what are a few highlights in this workflow that we can actually see here, right? It enables better stakeholder collaboration among the various subsystem teams and the systems team, and high quality to meet any of the standards that we're talking about. We can actually use these models to communicate with all the stakeholders, right? And we can also work at varying levels of abstraction in the modeling.

And, of course, as I already mentioned, all these workflows are well connected, and there is a lot of scope to automate our workflows here. Model-based design is not new in the industry, you know. Most of you must already be using it, and worldwide aerospace and defense organizations use this approach. A lot of aircraft are flying which were certified using model-based design, right? Another beauty of model-based design is that you can adopt it piecemeal, or you can adopt it wholesale for your entire program.

But typically, what I have seen in my experience is that adoption of MBD follows a phased approach, adding capability while minimizing risk to production programs. So here you can see a couple of customers who have been successful in adopting MBD. Again, this is not an exhaustive list; these are a few samples spread across the V-cycle. Now, what I would like to know from you at this point is exactly which piece of the workflow is your area of interest, right?

Accordingly, the challenges can be different, right? The challenges could be different based on your role, based on which part of the V-cycle you are contributing to. So you are seeing the second poll question on your screen. Again, feel free to select multiple options, more than one.

Let's give a few more seconds. OK, Gaurav, how do we see the response?

So again, Satish, it is spread across all the options. A lot of people are working on requirements management, while there are sets of people working on development, mainly the verification of the model, code generation, and code V&V, as well as a few people working on audits and tool qualification.

OK. OK, very good, very good. So thanks, everyone, for the responses. It is important to understand the challenges associated with each phase of the cycle, and again, this is from each role's perspective, right? So, in my experience, and looking into Gaurav's experience, we have played many of these roles, like many of you, right?

We played different roles in our careers in the aerospace and defense industry, right? So the challenges are different based on our role. For example, if I'm a designer doing the detailed designs, or building the models, the main worry for me is: how do I know the specifications for my design, right? And how do I know that the detailed design I'm doing is as per the system requirements that I'm getting from the systems team, right?

Similarly, other engineers could be looking for different things. The certification team would be looking for a different thing, right? And a V&V engineer would be asking what sorts of verification and validation are needed, right? So what we are going to do now is, I'm going to pass it on to Gaurav to walk us through the rest of the presentation here.

So based on your role, based on your area of interest, try to put questions in the Q&A window. We will try to answer as many as we can today. Gaurav, over to you.

Thank you, Satish. Please do confirm once you see my screen.

Yes, I can see it, Gaurav.

OK. Thanks, Satish, for giving an introduction to all the related standards along with DO-178C and how they are being used in the industry. As Satish mentioned the pain points, or the key concerns, of the different engineers, with their different personas in the development cycle, let us see how we can use model-based design to address these challenges and concerns that the engineers, and the managers, throughout the process are having.

So, as Satish mentioned, I will again start with ARP-4754, because that is where this starts. As we can see, ARP-4754 talks more at the system level: system-level requirement identification and item requirement identification. Once you do a lot of hazard analysis, fault analysis, root cause analysis, and so on at the ARP-4754 level, you refine the requirements and then pass them to the item design, which can be either in software or in hardware, whether C code or FPGA code.

Our focus in today's session is on that part where, from ARP-4754, we get a refined requirement, and now we are going to develop software for that particular item. And that is where the DO-178C workflow starts. So going forward, we are going to see today, as Satish mentioned, how we can do software development and verification according to DO-331, the supplement of DO-178C that specifically talks about model-based design, as well as DO-333, which talks about formal-methods-based analysis.

So at a very high level, this is how any development workflow looks, right? We start with requirements. Then we develop models for those algorithms. We ensure that the model is correct; we do the model verification completely. From that model we develop the source code. We have to make sure that this source code development process is a qualified process.

From that source code, we develop the object code, do a lot of testing at that level, and, in short, ensure that the final object code works as specified in the requirements. We'll go through each and every step of the cycle and see how model-based design can be used for it. So let's focus on the first part, where we start with the high-level requirements and then go to the low-level requirements.

When I say high-level requirements, these are nothing but your textual requirements. And when we say low-level requirements, in the DO terminology, what we are talking about are the models, the Simulink models, that you develop to represent or design your requirements, which can then go further into the testing and verification process. So once we develop the model, in this whole cycle, the first step is that we have to validate the requirements.

A lot of people, the system engineers and domain experts, are involved in validating the textual requirements, but after the requirements are validated, we develop the model. We have to ensure that the model is traceable to the requirements, or that the low-level requirements are traceable to the high-level requirements. This is a requirement from DO-178C, right? So we should be able to move from model to code, model to requirement, and requirement to model.

Then we have to make sure that our model conforms to the modeling guidelines. These can be industry-specific guidelines, or they can be your own in-house modeling guidelines: some do's and don'ts that we have to follow when developing a model. And a very important part of this overall cycle is to ensure that our model is doing exactly what is defined in the requirements. We have to perform requirement-based verification of the model.

So this is the first stage of the development cycle, where we are talking about model development and model verification. Let's see what activities are involved in these two stages of the development cycle. In this overall stage, what we are going to do is capture the requirements, create the model, simulate it, and verify it.

We will ensure there is traceability between the model and the requirements, to enable impact analysis. We ensure conformance to the modeling guidelines, and we will do multiple model verification activities, which include functional verification, structural analysis like model coverage, as well as formal-methods analysis, like design error detection. So these are the activities that we'll be performing on our model.

So let's start with the first stage, going from requirements to model development, and see what we can do at the model development level. As you know, we have the requirements, and from the requirements we develop the model. I assume that you have a basic understanding of what Simulink models look like and how they are developed. We develop a Simulink model from the requirements.

Once that is done, we have to establish the traceability between the model and the requirements. For doing that, we can use a tool like Simulink Requirements. This tool will help you in establishing the allocation, or the traceability, between the requirements and the models. This is an overview of what Simulink Requirements looks like.

On the right-hand side, you can see your model, which is developed in Simulink. Below the model, you can see all the requirements, visible in the Simulink environment. In the screenshot, you can see there are hyperlinks. These links indicate that you can go from a requirement to the model components, the model blocks, or the signals, and from the model you can go to the requirements.

So there is a bidirectional traceability that has to be established the moment you develop your model from the requirements. I will not go in depth into how exactly the tool looks; we are focusing more on the process in today's session. But if you attend part two of this webinar series, which is on the 19th of October, you will come to know more about the functionality of Simulink Requirements and how it can be used.
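As a rough illustration of the traceability step described above, a programmatic sketch using the slreq API might look like the following. The model name, requirement-set file, block path, and requirement ID are all hypothetical, and the exact calls should be confirmed against your release's Simulink Requirements documentation.

```matlab
% Hypothetical sketch: link requirement REQ-001 in fuel_reqs.slreqx
% to the Simulink block that implements it.
reqSet = slreq.load('fuel_reqs.slreqx');   % load the requirement set
req = find(reqSet, 'Id', 'REQ-001');       % locate one requirement by ID

% Create a traceability link from the implementing block to the requirement.
% The link is navigable in both directions in the Requirements Editor.
blkPath = 'fuel_sys/Controller/RateLimiter';
slreq.createLink(blkPath, req);
```

Scripting the links this way, rather than clicking them in one by one, also makes it easier to regenerate the traceability evidence as the model evolves.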

Now, once we establish the allocation, or the traceability, between the requirements and the model, the next phase of the process is to ensure that our model conforms to the modeling guidelines. Modeling guidelines are a set of rules, do's and don'ts, that you have to follow when you are developing your model. There can be multiple guideline sets available in the market.

For example, if you are working on ARP-4754, there is one set of guidelines. For generic software engineering, there are other defined guidelines in the industry. Or you may want to follow the DO-178C-specific guidelines. All these guideline sets have multiple checks inside them, right? For example, for DO-178C, there are around 25 Simulink-related checks that we have to follow.

Now, one option is to review the model manually against all these guidelines. But that's a very time-consuming process. So we need to use some sort of automation that can help us in reviewing our model against these guidelines. And that is where Simulink Check can help you. Simulink Check is a tool which helps you review your model against the guidelines automatically.

There's a huge set of guidelines already available in Simulink Check; you just have to select and run them on your model, and it will give you a conformance report for those modeling guidelines. You can also use the tool for edit-time checks: the moment you are editing the model, it will ensure that you are not violating any guidelines.

So instead of checking at the end of development, you can check modeling conformance during development itself. Along with that, it generates different types of model metrics that help you in reviewing your models against these guidelines. Now, this topic, again, will be covered in detail during part two of this webinar series. If you want to know more about it, please join that.
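For instance, a batch run of Model Advisor checks can be scripted roughly as below. The model name and check IDs are placeholders; the actual IDs for the DO-178C/DO-331-related checks ship with Simulink Check and can be listed from the Model Advisor in your release.

```matlab
% Hypothetical sketch: run selected Model Advisor checks on a model
% and collect the results for the conformance report.
model = 'fuel_sys';
load_system(model);

checkIDs = {'mathworks.hism.hisl_0002', ...   % placeholder check IDs
            'mathworks.hism.hisl_0016'};
sysResults = ModelAdvisor.run(model, checkIDs);

% Each entry in sysResults summarizes pass/warning/fail per check,
% which can be exported as evidence of guideline conformance.
```

Running the same check list in a script is what makes it practical to enforce the guidelines on every change, not just at the end of development.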

Now, once our model is verified against these guidelines, and we have ensured that the model is connected with the requirements and is following the modeling guidelines, we go to the next phase of development, which is the functional testing, or functional verification, of your model. For doing the functional verification of the model, we first have to develop the functional test cases, or simulation test cases. These test cases are developed from the requirements document.

It can be a design requirements document, or it can be a test requirements document. We have to develop those test cases, maybe in an Excel sheet, or directly in the Simulink environment. These test cases are then passed to the Simulink Test tool. We need a tool which can execute these test cases on your model, compare the output results, and generate a pass/fail report. This is something that can be done by Simulink Test.

Simulink Test helps you in importing or authoring your test scenarios; linking these test scenarios with the requirements, because the relationship between the requirements and the test scenarios is very important; running these test scenarios on the model; and generating a report out of that testing process, right? As I mentioned, this tool will help you in creating a test harness out of the model, or the model component, that you want to test.

It helps you in authoring or importing your test scenarios, then running multiple test scenarios all together in a single environment, and generating a properly documented report out of those test runs, indicating whether your model passed the functional testing or not. Now, once we have verified our model against the requirements and completed the requirement-based testing, there's a very important activity which is still left.
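A scripted version of this test execution step, using the sltest.testmanager API, might look like the sketch below. The test file and report names are hypothetical; check the report options against your release's Simulink Test documentation.

```matlab
% Hypothetical sketch: run requirement-based tests from a Test Manager file
% and produce a documented pass/fail report.
tf = sltest.testmanager.load('fuel_sys_tests.mldatx');  % authored test cases
results = sltest.testmanager.run;                       % execute loaded tests

% Generate the test report used as verification evidence.
sltest.testmanager.report(results, 'fuel_sys_test_report.pdf');
```

Because the test file also carries the links back to the requirements, the generated report documents which requirement each pass/fail verdict traces to.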

And that is coverage. Once we are running our requirement-based test cases on the model, there are a lot of questions in mind, right? First: did I miss any test case? Do I have useless blocks inside the model, which may actually be dead logic?

Or is there something which I missed at the requirements level itself? If any one of these three is true, what you will get is missing coverage. You will find some part of the model which is not being executed by the test cases that you developed. And that is exactly why we measure model coverage while we are running our functional test cases.

If you find missing coverage, that means you have to look into it and understand: why is there missing coverage at my model level? Are my tests not sufficient to test the model? Are the requirements not defined properly? Or does my model have some unintended functionality that I have to fix? Simulink Coverage is a tool which helps you in measuring model coverage.

For those who are not very familiar with the coverage topic, there are multiple types of coverage that we can measure: whether all the statements inside my model and code are covered, whether all the decisions are covered, whether all the conditions are covered, and modified condition/decision coverage (MC/DC). Again, we will cover these topics in detail in the second part: how we can measure coverage and how we can analyze the results once we have the coverage data.

You can, and must, measure the coverage of the different model and code components, whether it is Simulink blocks, MATLAB code running inside Simulink, a state chart, or even the generated or manually written C code. Any code, or any part of the design, has to go through coverage analysis to ensure that your test cases are good enough to test your model. Now, suppose we measure the coverage at the model level and find that there is missing coverage, right? We need to ensure that we have good coverage, and different levels of certification demand different levels of coverage.
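As a sketch of how this coverage collection can be scripted, the cvtest/cvsim/cvhtml functions from Simulink Coverage can be combined as below. The model name is a placeholder, and the settings shown are one possible selection.

```matlab
% Hypothetical sketch: record decision, condition, and MC/DC coverage
% while simulating a model, then write an HTML coverage report.
model = 'fuel_sys';
testObj = cvtest(model);
testObj.settings.decision  = 1;   % decision coverage
testObj.settings.condition = 1;   % condition coverage
testObj.settings.mcdc      = 1;   % MC/DC coverage

covData = cvsim(testObj);               % simulate and collect coverage
cvhtml('fuel_sys_coverage', covData);   % report highlights any gaps
```

The HTML report highlights exactly which blocks and decisions the requirement-based tests never exercised, which is the starting point for the gap analysis described above.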

For example, design assurance level A of DO-178C asks for 100% MC/DC coverage. So if you are at level A, you have to ensure that your model achieves 100% MC/DC coverage. In case you have missing coverage, we need a tool which can help us in identifying those missing test cases. At the same time, we also need to do a bit more robustness testing at the model level.

And that is where a tool like Simulink Design Verifier can help you. These tools use formal-methods analysis in the background. What Design Verifier can do is generate test cases for the missing coverage that you have, so that you can review those test cases and integrate them, either into your requirements or into the functional test cases that you missed initially.

At the same time, Design Verifier also identifies design errors in your model. You can provide your design constraints to Design Verifier and ask it to ensure that the model follows those constraints, right? Another thing Design Verifier can do for you, or that you have to perform on your model to ensure your design is robust, is property proving, OK?

When I say property proving: we have our model, which is expected to work in a certain fashion when it is executed on the final hardware, right? We have to ensure that our model follows those properties, and that there is no scenario at all which can violate those properties.

OK, so this is the type of automated analysis, based on abstract interpretation, that we have to perform at the model level, right? You can provide your own objectives and constraints at the model or design level, and ask Design Verifier to generate the test scenarios accordingly.

And a very important part: it can detect design errors such as overflow, division by zero, array out of bounds, and dead logic. These types of issues can be identified at the model level without providing even a single test scenario. So we have to perform not only functional testing and coverage analysis; we also have to perform formal methods analysis at the model level, to ensure that there are no runtime errors at the model level.
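As a sketch of the defect classes just mentioned, here are hypothetical C fragments (all names invented for illustration) containing exactly the kinds of issues formal analysis can flag without any test vectors:

```c
#include <stdint.h>

/* Division by zero when rpm == 0: the divide is unguarded. */
int32_t duty_per_rev(int32_t duty, int32_t rpm)
{
    return duty / rpm;
}

/* Overflow: the counter wraps when c == INT16_MAX (no saturation). */
int16_t inc_counter(int16_t c)
{
    return (int16_t)(c + 1);
}

/* Array out of bounds when idx >= 8: idx is never range-checked. */
int32_t read_gain(const int32_t gains[8], int32_t idx)
{
    return gains[idx];
}

/* Dead logic: the second branch can never execute, since
 * (x > 10) && (x <= 5) is contradictory. */
int32_t classify(int32_t x)
{
    if (x > 10) {
        return 2;
    }
    if ((x > 10) && (x <= 5)) {   /* unreachable */
        return 1;
    }
    return 0;
}
```

A formal analysis tool reasons over all possible input values, which is why it can report these defects even though ordinary simulation with a handful of test vectors might never hit them.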

Again, this will be covered in more depth in part two of this webinar series. Now, if you look at the complete model-level development and verification activity, you can see that we use multiple tools. We use Simulink for developing the model, and we ensure that the Simulink models are connected to the requirements, with traceability established using Simulink Requirements.

We tested our model against the requirements using Simulink Test and, at the same time, measured the test coverage at the model level using Simulink Coverage. We use Simulink Design Verifier to ensure that our model does not have any runtime or design errors and, at the same time, to generate test cases for the missing coverage. We also ensure that our model follows the modeling guidelines, using Simulink Check.

When we perform all these activities, we generate different types of evidence, and we satisfy several of the objectives mentioned in DO-178C; specifically, these activities follow the objectives defined in Tables A-3 and A-4 of DO-178C. Though the text in this list is small, you will get these slides later, so you can study it in detail.

There are other tables and objective numbers mentioned as well, for which you can claim credit if you perform these activities. The tools that we use throughout this process are, obviously, automating manual activities that you would otherwise have to perform without them. So, as per DO-178C, we have to take these tools through qualification. All the tools I mentioned throughout the cycle have been qualified in the industry, and you can qualify them for your programs as well.

Again, this whole cycle will be covered in part two of our webinar series. Now, a very important part of any certification workflow, which is also very tedious and time-consuming, is generating the evidence: the reports that prove you have performed these activities, so that you can take those reports and claim credit against the objectives I mentioned on the previous slide. Throughout the cycle, we use different tools and perform different activities at the model level, and all these activities generate reports for you. These include the simulation results report (a functional testing pass/fail report), the coverage report, the conformance-to-standards report, the report showing that the design does not have any design errors, as well as the system design description report.

All these reports will be generated automatically for you by these tools, and you can submit them as evidence to claim credit for the different objectives I described. Now, once our model is verified and we have done all these model-level activities asked for by DO-178C, we go to the next stage of the development cycle: converting the model into source code, compiling that source code, and producing the executable object code. But when you are generating the source code from the model, you have to ensure that the source code is traceable to the low-level requirements, which is nothing but the model, and, in turn, traceable to the actual high-level requirements.

We have to ensure that our source code also follows the coding guidelines. We have to ensure that the behavior and functionality of the source code are the same as those of the model, and we have to perform structural verification of the source code against the model. We have to ensure the source code's behavior matches the model as well as the requirements.

And when we generate the object code from this source code, we have to ensure that the behavior of the object code is the same as what is expected from the requirements, because that is our end code, which runs on your microcontroller. So we have to perform this testing again at the source code and object code levels. Now we'll see how we can perform and automate all these activities in the MathWorks environment.

So we developed the model, which is the low-level requirement, and we verified it in the previous section. Now, from that model, we can use Embedded Coder to generate the source code. The source code is ANSI C code, which you can take to any microcontroller you wish. We used Simulink Requirements to establish the traceability between the requirements and the model; we can use the same tool to establish the traceability between the requirements and the source code.

At the same time, we have to ensure that the generated source code is exactly the same as the model, so we have to ensure structural equivalence between the model level and the source code level. For example, if I have an addition operation in my model, I have to ensure that there is an addition operation generated at the C code level. That type of traceable structural equivalence is required to ensure that, while the code is being generated, no unintended functionality or bug is introduced during the code generation process.
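To illustrate the idea, here is a tiny sketch of generated-style C code. The block-to-line trace comment is only a sketch of the concept; it is not Embedded Coder's exact output format, and all names are hypothetical:

```c
/* Illustrative generated-style code for a model with one Add block. */
double Sum_out;   /* hypothetical block output signal */

void model_step(double In1, double In2)
{
    /* Sum: '<Root>/Add'
     * The model's addition block must map to exactly this addition,
     * and this addition must trace back to exactly that block
     * (the bidirectional matching described above). */
    Sum_out = In1 + In2;
}
```

Structural equivalence checking asks, in effect: does every block have a line like this, and does every line have a block like that, with nothing extra on either side?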

And this is what the authorities ask for: assurance that no defect is introduced during code generation. You will learn more about code generation, structural equivalence, and the traceability between code and model during part three of our webinar series, which covers code generation and assuring code quality, on the 27th of October. Now, as I mentioned, Embedded Coder is the tool we can use for generating the code from your model.

As you can see on the right-hand side, whatever you define in the model using Simulink blocks is converted as-is into equivalent C code, which you can take to the cross-compiler, compile, and load onto your microcontroller. But when we generate the C code, again, we are automating this process, so we have to ensure that the tool is not injecting any issue or defect into the C code.

And to do that, we have to show the equivalence of the code to the model, or the low-level requirements. For doing that, we take the model and the C code independently and use the internal representations of these two artifacts of our development cycle. We then compare them, line by line, and ensure that each block inside the model generated its equivalent C code, and that every line of the C code has an equivalent section inside the model. That bidirectional matching has to be performed between the model and the C code.

And while we are doing this, we generate the code inspection report, to show that the C code and the model are exactly the same, and the traceability report, so that we know every part of the model has corresponding C code and every part of the C code has an equivalent part inside the model. These two reports have to be generated to show that the C code and the model are exactly structurally equivalent. For automating this whole process, we can use Simulink Code Inspector.

It's a static verification tool which checks the generated code against the model. This activity is asked for by Table A-5, which is part of the verification activities per DO-178C. Now, once our C code is generated and we have verified that it is structurally equivalent to the model, we have to ensure that the C code is functionally equivalent to the model as well.

There should be no difference between the functionality of your generated C code and the model that you developed. For doing that, we use the same tool we used for model testing, which is nothing but Simulink Test. And we can actually reuse the same test scenarios that we developed at the model level: the same environment, the same test scenarios. All you have to do is replace the model with the equivalent C code, which is, again, an automated process, and the same test scenarios will run at the C code level.

And the results from the C code will be compared with the model outputs, to ensure that the generated C code and the model are functionally equivalent and that no functional bug or other defect was introduced during code generation. Once that activity is done, and our source code is functionally equivalent to the model, we also have to ensure that our source code achieves sufficient coverage.
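The back-to-back comparison just described can be sketched in miniature: the same input vectors are replayed against the generated code, and its outputs are checked against the baseline captured at model level. This is only the idea in plain C, with hypothetical names and tolerance handling; it is not Simulink Test's actual mechanism:

```c
#include <math.h>
#include <stdbool.h>

/* Compare code-level outputs against the model-level baseline,
 * element by element, within a tolerance. Returns false on the
 * first mismatch (the equivalence test fails). */
bool outputs_match(const double *model_out, const double *code_out,
                   int n, double tol)
{
    for (int i = 0; i < n; i++) {
        if (fabs(model_out[i] - code_out[i]) > tol) {
            return false;
        }
    }
    return true;
}
```

A tolerance is typically needed because compiler optimizations and target arithmetic can introduce tiny, acceptable numeric differences between model simulation and compiled code.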

So we use the same Simulink Coverage tool, and the same testing environment that we used for the model, to measure the coverage at the source code level. As a high-level overview, and as I mentioned, level A programs need 100% MC/DC coverage, while level B does not ask for 100% MC/DC but asks for 100% decision coverage, and so on. All these types of coverage can be measured at the C code level, the same as we measured them at the model level.

Now that we have ensured our C code is functionally equivalent to the model, and we are achieving sufficient coverage at the C code level, we can take the C code, pass it to the cross-compiler, and generate the executable object code: the code that is going to run on your actual microcontroller. Once we do this cross-compilation, we have to ensure that the functionality of your object code has not changed, and that it is still the same as at the C code and model levels.

For doing that, we use the same environment, Simulink Test, with the same test scenarios that we developed at the model level and reused at the C code level, to ensure that the functionality of your object code is exactly the same as that of the source code and, in turn, the same as that of the model. And since the model is verified against the requirements, what we are ensuring is that our executable object code is functionally correct per the requirements as well.

So everything is done using the same set of test scenarios that we developed, in the single environment that we use for testing. You do not have to deal with multiple environments and multiple test scenarios to conduct these activities. As I mentioned, for doing this we can use the processor-in-the-loop (PIL) environment, where your code executes in the processor environment, whether on hardware or in an equivalent hardware simulation environment.

And the results of those executions can be seen at the Simulink level. So your code is running on your hardware, but it can be controlled from Simulink, so that we can ensure the behavior of your code and algorithm remains the same even at the hardware execution level. So, what we have seen is that we verified that our generated C code, and then the generated object code, is functionally equivalent to the model.

Now we have to perform some checks at the code level. We have to ensure that our generated code follows the coding guidelines. When I say coding guidelines, we can use MISRA C, which is the industry-standard, industry-adopted guideline. We can automate this whole check using Polyspace Bug Finder, which can check your generated code, or even manually written code, against the coding guidelines.
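As a sketch of what a guideline checker flags, here is a hypothetical fragment containing typical MISRA C:2012 findings. The code compiles and runs; the comments mark the constructs the guidelines object to (rule numbers are from MISRA C:2012):

```c
#include <stdint.h>

uint16_t scale(uint16_t raw)
{
    uint16_t out = raw << 2;   /* implicit narrowing of the promoted
                                  int result back to uint16_t: the kind
                                  of essential-type issue Rule 10.3
                                  targets */
    return out;
}

int32_t mode_to_gain(int32_t mode)
{
    int32_t gain = 0;
    switch (mode) {
    case 0:
        gain = 1;
        break;
    case 1:
        gain = 2;              /* unterminated case falls through
                                  to case 2: flagged under Rule 16.3 */
    case 2:
        gain = 4;
        break;
    }                          /* no default clause: Rule 16.4 */
    return gain;
}
```

Note that `mode_to_gain(1)` returns 4, not 2, because of the fall-through; such code may be "working" yet still non-compliant, which is exactly why guideline checking is a separate activity from functional testing.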

Not only that: once we ensure that our generated C code follows the coding guidelines, we also have to ensure that our generated C code has no runtime errors; we have to prove the absence of errors. That is something we can do using Polyspace Code Prover, which in turn uses formal methods analysis to identify bugs inside your generated or manually written C code.

So, as I mentioned, the two tools we are going to use for this activity are Polyspace Code Prover and Polyspace Bug Finder. I will not go in depth into each and every activity; we have a different session for that. But at a high level, Bug Finder identifies defects and vulnerabilities inside the code, checks your code against the coding standard, and computes code metrics that you can use when reviewing your code, while Code Prover does a more detailed analysis of your C code to identify issues like division by zero, unreachable code or dead logic, assertion failures, buffer overruns, data overflow, et cetera.

This type of detailed analysis, using formal methods, can be performed at the C code level as well, just as we did at the model level. To learn more about how these tools work and what type of analysis they can do at the C code level, we have a session on the 27th of October, which is part three of this webinar series. Please do attend; it will give you the details about this section of the workflow.

So, as you have seen, overall we generated the C code from the model using Embedded Coder. We ensured that the source code is traceable to the model as well as to the requirements. We tested the source code against the model using Simulink Test, measured the coverage at the C code level, generated the object code from the generated C code, and then tested the object code against the model and, in turn, against the requirements.

We used Polyspace Bug Finder to ensure that our generated C code follows the coding guidelines, and Code Prover to ensure the absence of bugs in the generated C code. Now, in doing this, we used multiple tools. And, again, these tools are qualifiable, and have been qualified on different programs globally.

And you can qualify them for your programs as well. Again, when we perform all these activities, evidence is generated. This evidence is generated for objectives across Tables A-5 and A-7: the code inspection report, the conformance-to-standards report, the runtime error report, the executable object code functional test report (a pass/fail criteria report), as well as your C code coverage report.

All these reports and evidence will be generated automatically for you, and you can submit them for certification activities. Now, this complete workflow that I was talking about is divided into two sections: model development, verification, and validation; and C code generation, with verification and validation at the C code and object code levels. All these activities are consolidated into a single chart.

It is available on our website, and you will get a link to download it, too. This chart has two sections, again, as I mentioned, covering model development and verification, and C code development and verification. So you can go into detail about all these activities and the objectives for which you can claim credit if you follow these activities and perform these steps.

Again, it's a busy slide; we don't have to go through all of it. But this slide will give you an idea of all the tools that we used. The subsequent sessions in this webinar series will let you learn more about these tools, their uses, and their functionalities. All the tools shown in green are qualifiable for DO-178C and have already been qualified on different programs, so you can use the reports generated by these tools directly as evidence.

Now, I have talked a lot about the qualification of these tools. So what exactly do I mean by qualification? Any tool that you are using to automate a manual process that would otherwise be done by humans has to go through qualification, to ensure that the tool is not injecting any error into your program, and is not failing to identify any error, issue, or defect in your program or code.

You may perform this qualification activity because you are working on a safety-critical application and the authorities ask for it, or simply because you want more confidence in the tool for internal quality reasons. If you attain tool qualification approval, as I mentioned, you can claim credit for all the activities that you performed and automated.

There is a bit more to it. Overall, tool qualification is divided into two categories, or criteria: whether your tool falls under the development tool criteria or the verification tool criteria. Based on these tool criteria, DO-330 derives five different tool qualification levels (TQLs). So, depending on whether you are using a development tool or a verification tool, and which design assurance level you are working at, you have to follow the process for one of these five tool qualification levels.

We will talk more about this during the last webinar of our series. But at a very high level, MathWorks does provide a DO Qualification Kit. This kit has documents and templates which help you qualify the tools: plan templates such as the tool qualification plan and the tool operational requirements. All these templates, which you will use for qualifying the tools, are pre-built for you, so you can use them as they are.

It has test cases available which help you test the tool itself (not your algorithm, but the tool), along with the expected results. So if you use this kit, you just have to click a button, and all these reports will be generated automatically for you to submit for tool qualification purposes. Once you qualify the tool, you can use the reports generated by the tool for your design to claim credit against the objectives.

Again, you will learn more about tool qualification during part four of our webinar series, on the 10th of November. So if you are working on tool qualification activities, audits, et cetera, please do attend that webinar. Overall, our workflow, starting from high-level requirements all the way to building the executable object code, looks something like this.

This slide has all the tools and objectives mentioned, for which we can claim credit if we follow the workflow exactly as I described. Now, if you are new to Model-Based Design or the MathWorks toolchain, we do have training services which can help you ramp up quickly on the tools and processes that I spoke about. We also have consulting services, which can help you identify the gaps in your current workflows and help you adopt Model-Based Design for your programs, adhering to DO-178C or any other standards that you follow.

Along with that, we can also work with you on more detailed sessions on topics like ARP4754, DO-178C, or DO-254, where we talk in depth about tool functionality using a good worked example. We can have live, interactive demos where different engineering personas, such as safety, quality, and development engineers, can join and learn about the DO-178C workflow.

Now, before we move to the question and answer, I would like to understand what you want to do next: whether you want to talk with our MathWorks certification experts, to review your workflow and suggest how we can fill the gaps and make it quick and easy, or you want to connect with our trainers to upskill your teams on certification-related topics.

Or, if you are new to Model-Based Design and the MathWorks toolchain, do you want to try and evaluate the workflow that I, or Satish, spoke about on your program, or on a demo program? Please do answer; it will help us support you better going forward. Now, while you are answering this poll, we can go to the question and answer part.

Satish, do we see any good question that we can answer?

Yes, go to-- can you hear me?

Yes, I can hear you, Satish.

Yeah, so we have lots of interesting questions. I see questions pouring in in private mode as well, to me and to all the panelists directly.

OK.

We, you know, ran over time. We are already a minute over. But let's just take one quick question, right?

Right.

There are a few questions around hand-code integration: how can I bring in hand code I have already written in C or C++? How can I bring it into the Simulink environment? And an associated question is: how do you see the mapping between Embedded Coder and Code Inspector? Right?

OK. OK.

So, yeah, if we can combine those two, and then I'll take that part, I think that should be good.

Sure. Thanks, Satish. So, the first question: how can you bring your already-developed manual code, or legacy code, into the Simulink environment? We have multiple ways of doing it. We have S-functions, which you can use to bring your existing code in as a block inside Simulink, and nowadays C Caller blocks are also available, which you can use to call your C code inside the Simulink environment.

You just call the function and specify the C file you want to call the function from, and then your block will behave like the C code that you have already developed. The same block can be used further for code generation when you use Embedded Coder. Now, the second part of the question that Satish asked: what is the integration, or relation, between Embedded Coder and Code Inspector?
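The kind of legacy function this works for is any plain C function with value arguments and a return value; the Simulink side is not shown here, and the function below is hypothetical, just to show the shape of code that can be wired into a block this way:

```c
#include <stdint.h>

/* Hypothetical hand-written legacy function. With a signature
 * like this, a block calling it maps its inputs to the arguments
 * and the return value to the block output. */
int16_t saturate_s16(int32_t x)
{
    if (x > INT16_MAX) {
        return INT16_MAX;
    }
    if (x < INT16_MIN) {
        return INT16_MIN;
    }
    return (int16_t)x;
}
```

Because the function has no hidden state and no pointers to manage, it integrates cleanly; functions with internal state or pointer arguments typically need a wrapper or an S-function instead.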

That's a very good question. We use Embedded Coder for generating the C code from the model, while Code Inspector reviews the generated C code for structural equivalence between the model and the C code. Why are we doing that? Because the certification authorities ask that any tool you use for automation be verified: the output of that tool needs to be verified.

So the output generated by Embedded Coder is verified by Code Inspector. Now, we could have qualified Embedded Coder itself, so that we would not have to verify its output. But that puts Embedded Coder in the development tool qualification category, which takes us to TQL-1 or TQL-2, and that is a very difficult activity to perform.

So what we did instead is use Code Inspector to review the Embedded Coder output, and qualify Code Inspector, which falls under the verification tool category. That is why we can qualify it at the TQL-4 or TQL-5 level, which is a much quicker and less time-consuming activity than development tool qualification.

Sure. Thanks, Gaurav. There are still a lot of questions; I think we will need to follow up and provide answers to all of them. But at the outset, there are two or three questions related to FPGAs, SoCs, and ASICs. In today's session we are not covering how to address workflows that have FPGAs or SoCs in the loop.

Especially if your question is more about HDL code generation, that is something we can discuss offline, and then walk you through that process if you are talking specifically about DO-254. So I think we will stop here, Gaurav.

Thank you, Satish. Thank you, everyone, for joining the webinar. If we were not able to answer your questions during this session, we will get back to you over email with answers. Please do write to me or Satish if you have any questions related to DO-178 or any other aerospace standard workflows. We'll be happy to connect with you.

Thanks. Thanks, everyone.

Thank you.

Enjoy the rest of your day.
