    Model-Based Design with Automatic Code Generation

    Overview

    Embedded code generation is fundamentally changing the way engineers work. Most embedded controller development applications demand high product confidence. Automatic code generation is an essential step in the efficient adoption of Model-Based Design: it gives users access to advanced verification and validation techniques that produce high-quality code, reducing the number of iterations in a typical industrial product development cycle and eliminating errors introduced by manual coding.

    After designing the algorithm in closed loop in simulation, you need to perform rigorous verification and validation activities to test the algorithm's robustness before automatically generating code from it. These features of the Model-Based Design framework help engineers increase productivity, improve quality, and foster innovation in their projects.

    You will learn how to use the different features of Model-Based Design with automatic code generation to produce robust code that meets industry standards.

    Highlights

    • Preparing the model for automatic C code generation
    • Applying customizations based on software architecture for code generation
    • Performing back-to-back testing of the generated code and the model for consistency
    • Generating code that complies with industry guidelines and standards for production code

    About the Presenter

    Gaurav Ahuja is an experienced application engineer at MathWorks India, where he has been helping customers successfully adopt MATLAB and Simulink products for over five years. With a focus on assisting customers to develop highly safety-critical systems across all industries and adhere to their industry-specific safety standards, Gaurav has worked with customers in a variety of industries, including automotive, aerospace, railways, medical devices, industrial automation, and consumer electronics. He is skilled in model verification and validation, auto C code generation and certification, and is always eager to tackle new technical challenges. In addition to his extensive experience, Gaurav holds a master's degree in embedded systems from BITS Pilani and is strongly dedicated to helping customers succeed.

    Recorded: 13 Apr 2023

    Good afternoon to all, and welcome to this webinar on model-based design with automatic code generation. This is the second part of a two-part webinar series. We had a successful session yesterday, and today we'll be focusing on model-based design with automatic code generation as the topic. As for the agenda, this will be a 60-minute session, and in the interest of time we will have the Q&A towards the end.

    So here's our speaker for today. Gaurav Ahuja is an experienced application engineer at MathWorks India, where he has been helping customers successfully adopt MATLAB and Simulink products for over five years. With a focus on assisting customers in developing highly safety-critical systems across all industries and adhering to their industry-specific safety standards, Gaurav has worked with customers in a variety of industries, including automotive, aerospace, railways, medical devices, and consumer electronics. I would now like to hand it over to Gaurav.

    Thank you, Prashant. And hello everyone, good afternoon. Let me just start by sharing my screen. All right, I guess you can see the screen now. Just give me one more minute. All right, let me-- I guess that's it. Thanks, Prashant, for setting up the stage. And as now we know that this is the second part of the two part series. The series was about developing the embedded controller for the power control-- power converter application.

    And the second part is about understanding what we mean by model-based design for developing the controller application, using automatic code generation, understanding the complete context and concept, how we can go about it, and how this helps us and eases our process.

    So in yesterday's session, if you attended it, we talked mostly about modeling the controller: the challenges that you would face when you are planning to model and design a power converter application, and also designing the embedded controller for it. Today, we'll focus on the code generation. Yesterday, my colleague, Ramanuja, gave a great in-depth view of the MathWorks solution for that. But don't worry if you haven't attended that; we will assume that, at this point in time, we have made the model, and we are trying to apply the concepts of model-based design for code generation, to make this a robust, safe, and reliable product.

    So I will do a quick recap wherever necessary, but do not worry if you have not attended yesterday's webinar. To start, I would say power electronics can now be found everywhere. With the increasing demand for more and more electricity, whether the reason is more features and advancements or decreasing emissions, we find ourselves in a situation where more and more things are driven by electricity. In that case, there have to be systems that are controlling this, and that's where we want to develop and make our product. And you do find that this product goes into places where, if it fails, that can in many cases be a catastrophic event.

    At the very least, it could do damage to a lot of equipment, with a lot of loss in equipment cost. But in certain cases, it could prove to be a fatal and catastrophic condition. So we want to make a power electronic system which is safe and reliable. Yesterday we discussed how to model this, but now we'll discuss more on how to make it reliable. And what is model-based design? How can it help us make this more reliable?

    So with more and more electricity, or electrification, as the overall trend, you'll find yourself surrounded by power electronics applications if you are working in any of these areas: electricity generation, transportation, automation, or consumer electronics. So we see the use case of energy generation.

    There are a lot of power converters for electronic applications, and a lot of controllers for them. The same goes for mobility, electric vehicles and transportation, and then obviously the automation side and different kinds of consumer electronics. So if you feel you are working in these areas, in these industries, and you are making software for the safe operation of these power converters, these power electronics, you will find this webinar very helpful.

    So in every scenario, power electronics needs a controller, and this is what we want to design and then implement on an embedded target. We get electricity from somewhere, and we want to control the output electricity. The switching would be done by the physical components, the power electronic components, and the control of those components would be done with our digital controller. And since it's digital, we want to write software for it. But why is that needed?

    To understand that, let's look at certain challenges: what are the major things that power electronics engineers are trying to achieve, and why do we need complex software? Why do we feel this is not a simple task? If you look at these points, all the red highlights I have given are the key motivators for an engineer to develop more and more robust software. So we need to understand the power quality impact, variable power sources, and the different kinds of loads that we might be attaching to our system, our product.

    We also want to compensate for the degradation of certain products. Let's say I'm trying to control a motor's speed. If the motor gets degraded, how should my controller operate? I want to develop and tune my controller, and write the software, so that it can work in different kinds of operating conditions. Again, it's important for me to write and test my software for faults, and also look at the supervisory logic: am I always working within the safe operating areas? So I need to verify my supervisory logic and understand what kinds of states my controller can go into.

    And I do need to be able to correct my software. This is also very important; this is probably, in the traditional sense, the most important thing. When I start to control the hardware from my software, and when I do this integration testing, if there is a failure, I need to understand how to correct it. So this is where the engineer needs in-depth knowledge about what kind of software he wants to write and how he wants to control the converter. And this is where we want to bring in the concepts of safety and reliability for the operating product.

    Now, in the traditional sense, validating this hand-written software code on the hardware is definitely important. But as I said, if we find the bugs at this stage, when we have everything complete at hand during integration, then those software issues are the most expensive to fix. In many cases, a defect found in the software at the end might already have led to a failure of the hardware. And hardware, as we know, is very, very costly.

    Also, in many cases, since it's a power electronics application, you're working with very high voltages and very high currents. So if it fails at integration, it can, in many scenarios, prove catastrophic. So what's the way out? We definitely want to move away from hand-written code, and we want to see and understand how we can use the concept of automatic code generation and do early verification.

    So we'll try to go in depth into these concepts right now. But at this time, I would like to start a poll. You will see the poll coming up on the right-hand side of the window, and you will see there are a few questions there. Feel free to answer whether the applications that you are building are more or less safety critical, or whether you feel it's OK if they fail.

    Based on that, we'll try to shape the session. Feel free to leave question number two unanswered for now; that is fine if you've already answered it. And as and when we touch upon these concepts, you can start filling in your answer for question number two. Question number two is not a matter of correct or wrong, and you can give a multi-choice answer for it.

    So to do this study today, we'll take an application. But before that, we want to understand how Simulink and model-based design help us, and how it is a better way. Model-based design can be understood as a CAE approach. CAE is a common term for computer-aided engineering, or computer-assisted engineering: an approach where I want to take the help of a computer, or a computational device, to first of all simulate, let's say, a system-level or behavior-level simulation of my system, and then take advantage of that to develop my algorithm.

    And since I would be in a position where I might have already made the behavior, in whatever way, in the computer or computational device, is it possible for me to take that behavior and convert it into embedded software, which I can target to my embedded hardware? So this is a classical compiler application, where you are building something in one kind of language, in this case a behavioral language; Simulink is a graphical language. So you define your behavior in the graphical language, and then you want to convert it into a different language, which for us is C code, to put it onto our embedded target.

    So it's a classical compiler application. But this is where model based design comes up in the picture. And now with model based design, you would be able to design, prototype, and generate code. So these are the three key concepts I would like you to remember. And in model based design, since we do have a behavioral model even before we have the code, we can start to test our critical software and critical algorithms even before we go to the hardware phase. So these are the key concepts. This is where we say model based design can help you develop in a better way.

    And it's being used in the industry. Just to give you an anecdote from GE Grid: they used model-based design to develop complex control systems. And if you read the bold text, it says, we've eliminated months of hand coding by generating code from our models, and we use simulations to enable early design verification. So definitely it is being used in the industry for the production workflow, and it's being used by the pioneers among power electronics application companies. So let's see how they use this concept to their advantage.

    So today, we'll take the example of a DC/DC LED lamp application, for the headlight of an automobile. This might sound like a purely automotive application, but in truth I chose it to bring out the safety concerns around the failure of such an application. The intention is that we want to switch the LEDs of the headlamp off in such a way that they do not blind the person on the opposite side. So there is definitely a safety-critical aspect to it. It should not fail, and it should work in the right way. We need to get more and more confidence in what we are trying to do. So we'll see how model-based design helps us do all of this.

    So this is just a depiction of how this works. Of course, we might not be using such a complex model, but this is the kind of control we'll try to develop: a controller for switching these LEDs on and off in this manner. Now, Simulink is a graphical language for controls. We can add controls into Simulink using different kinds of blocks. There are PID blocks, different out-of-the-box controllers, model predictive controllers. You can use a state machine where you can write different states for your supervisory logic, and then you can bring in your hand-written C code if you want.

    So this feature came up in R2018b, and from R2020a, we have what we call the C Caller block. So if you want, you can bring in your C code. If you want, you can design in Simulink, or even in MATLAB, or in Stateflow. Now, we discussed yesterday that using these, we were able to develop this. So this is a recap: this is the closed-loop controls application that you can see for this purpose. And yesterday, we discussed how we would go about designing this.

    We are just going to look inside. This is the application software, the end logic that I want to generate the code for, so that it goes onto my hardware. And there are different loops, control loops and voltage loops, that you can see. So this is just a recap. We'll use these models and see how we can achieve the goal of ensuring the safety of this.

    So we can simulate this. This is, again, a closed-loop scenario where, with Simscape, we have made our plant model. And this is the closed-loop voltage control software that we want to generate the code for; we want it to go onto our embedded target. I'll just skip forward. So when you simulate this closed loop, you can see the output from this scope. Now let's move forward.

    So model-based design says that each and every step in your development cycle should be such that, as you move forward, you are more and more confident in what you're building. What do we mean by that? It says that at every step, there should be a mechanism to go back and check closely that I am on the right track. And how does my development generally start? It starts from requirements.

    So it's important that model-based design helps us keep track of the requirements during our development. We'll look at that. The first step is having clear traceability between what I am building, what I have built, and whether it matches the requirements that I originally had. Did I lose track somewhere? If I did, and I realize it only later on, then it might be a very costly affair to fix. So in a nutshell, we want to place the requirements, then, based on the requirements, simulate the design and do a closed-loop simulation. And in doing that, we want to test our algorithm in closed loop, continuously, against the requirements. So we'll see how we do this.

    So first of all, in Simulink, if you are familiar with the Simulink Canvas, we have what we call as a requirements perspective. So if you see these three gray dots over here, when you click on it, you'll go to a requirements-- or the perspective view. Based on the workflow that you're using, there are any number of perspective views available. And since we want to work with the requirements-- so there is a specific view called requirements view, or requirements perspective.

    So let me just show you. Once you are working on code, there is a code perspective, and there's a perspective for testing. So as and when you're working on different workflows, there are different perspectives that change the Simulink canvas in such a way that it brings out the most important options for that particular workflow.

    So let's go to the requirements perspective. Now, if I pause over here, what I have at the bottom is my requirements set. This is nothing but my textual written requirements that were guiding my design, that were defining the specifications of my design: what and how I should be building. And these could be available with a team in, let's say, a Word document, a PDF document, or an Excel sheet. Or there are certain requirements management tools where enterprises generally hold and manage their requirements. What we can do is import the requirements from these sources into the Simulink environment. And once I have that, this is how I see it.

    The next part over here is this bar graph labeled Implemented, and the next one is labeled Verified. The Implemented bar graph gives me a measure of how much of these requirements I have implemented in my design, in my canvas. Now, how does the tool know about the implementation? We need to link these requirements to our blocks, to our graphical Simulink design blocks. And as and when I do that, the tool starts to take that into consideration, and I can see that it's been linked.

    Once I link it to a block, you see this document icon. This new icon starts to appear on those blocks. So visually, I can see what all blocks are linked to the requirement. And visually, I can see what all requirements are linked to the blocks. Let's just play and see all here. And one more thing before I begin-- so as and when I click the requirement, the right hand side, this window shows me the details of that requirement. So this is the text you can see. Then there is this bottom area called links. So as and when I'll be linking, it will start to populate information over here. So let's just continue to see what's happening.

    So if I open this, these are different Stateflow blocks and transitions which are already linked to the requirements. So I can visually see in this perspective how many of my model elements are linked. This gives me confidence, again, that I'm actually working on my design in accordance with my requirements, and that I have not deviated. Now let me pause at this point once again.

    And now, for a few requirements, you will see that this Verified column is also showing some progress, which is where it's showing that they have been verified by certain test cases. So not only am I saying that I am making this block as per a certain requirement, but I'm also saying I will be testing this block according to a requirement and a test specification, which would be this particular test case.

    So not only I can link my requirements to the model, I can also link my requirements to the test cases, which is required when I'm doing the functional testing of my design. So that if I am confident that I've checked and tested all of my test cases, I can very well say that I've completed my functional testing. But let's look at that.

    There is also another view of the requirements, called the Requirements Editor. This is where you can see the whole requirements set. You can even link a low-level requirement to a high-level requirement, so that if you are deriving a few detailed requirements out of a high-level requirement, you can link a requirement back to a requirement or to a design, and you can make a complete hierarchy and trace it. So you see that we can import the requirements from Word, Excel, or requirements management tools like DOORS and Polarion. Generally, these tools use a requirements interchange format called ReqIF; that's a standard format, and from whatever tool uses it, we can also import requirements.
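    As a rough sketch of what this looks like from the MATLAB command line (the document name, model name, block path, and requirement ID below are hypothetical placeholders, and the exact import options can vary by release and toolbox setup):

        % Import a Word/Excel/ReqIF requirements document into a requirement set.
        % ReqIF files exported from tools such as DOORS or Polarion can be
        % imported the same way.
        slreq.import('LED_Controller_Reqs.docx', 'AsReference', true);

        % Load the model and the imported requirement set.
        load_system('LEDLampController');            % hypothetical model name
        reqSet = slreq.load('LED_Controller_Reqs');

        % Find one requirement by its ID and link it to the block implementing it.
        req = find(reqSet, 'Id', 'REQ_012');
        blk = getSimulinkBlockHandle('LEDLampController/Voltage Loop/PID Controller');
        slreq.createLink(req, blk);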

    The next thing I want to do is make sure that I'm also following the right guidelines in building this. Is the model readable? If I'm not following certain, let's say, industry standards, I want to understand how good my model is. What's the quality of the model? That determines how maintainable it is. If it's not readable, then it's less maintainable: if the person who developed this model leaves and a new person joins my team, and they're not able to understand what the other person made, then it's very difficult to maintain and very difficult to test.

    So there are certain standard checks I can run on my model. And then I can also check the metrics in a graphical way to see the health of the model. How many of the safety-critical checks am I already following? How much of the MAB modeling guidelines is the model following? So the guidelines are about the readability and quality of the model. And once I can say that I have statically analyzed the model and I'm following all the right practices, I then want to catch the defects in my controller.
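    A minimal sketch of running such standard checks programmatically, assuming a model named LEDLampController and a saved Model Advisor check configuration (both placeholders); a cell array of individual check IDs can be passed instead of a configuration file:

        % Run Model Advisor checks on the model and open a summary report.
        model = 'LEDLampController';
        load_system(model);

        results = ModelAdvisor.run({model}, 'Configuration', 'mabChecks.json');
        ModelAdvisor.summaryReport(results);   % pass/warning/fail summary per system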

    So I can use the formal-methods-based tool, Simulink Design Verifier, and do design error detection. Being a formal-methods-based tool, it mathematically tries to understand the controller logic, the algorithm. And then, based on the different input ranges, or the inputs available, it takes the ranges and does a complete state-space analysis to see whether or not there are defects like division by zero.

    So if I've given an input which is an int8, then there could be a value which is 0, and if it goes to a division block, that could be a division by zero. I can find defects like addition overflow, and different other kinds of overflows in my design, which, if I do not have a requirement against them, I will not have a test for. Generally, these are the defects that go into the end application, or sometimes into the field, and we encounter them as runtime errors.

    The other thing that I want to do at this point is, now that I have checked for defects, consider that some of my requirements could be very safety critical. I can model these requirements, give them to the tool, and ask it: will my design always satisfy this safety-critical requirement? If not, the tool can give me a counterexample, or a counter test case, showing that under this input scenario the safety-critical property would be falsified. This is, again, very important for the supervisory logic, because if the states are not being reached properly, or if the controller gets stuck in a particular state, it can sometimes lead to unintended behavior, which, again, could be catastrophic. So definitely check your supervisory logic, or the mode logic which governs the operation of your power electronics, to make sure it is free from defects and always compliant with the safety-critical considerations and the requirements that you need.

    So, all the checks that we discussed: this is Simulink Check, which shows all these checks, and you can run them. And the formal-methods-based analysis, the static analysis on the model, can be done by Simulink Design Verifier, and you can use both of them. This is how they look, and they can create different reports. Since we also want to generate code from this model, you can find whether there are any MISRA violations that we can detect at the modeling level, so that they can be identified and corrected and the generated code will be more and more compliant with MISRA.
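    A hedged sketch of running design error detection from the command line with Simulink Design Verifier (the model name is a placeholder; switching the mode to property proving covers the safety-property check described above):

        % Design error detection: look for division by zero, overflow, and
        % similar run-time defects in the model.
        model = 'LEDLampController';
        load_system(model);

        opts = sldvoptions;
        opts.Mode = 'DesignErrorDetection';   % or 'PropertyProving' for modeled
                                              % safety-critical requirements
        opts.DetectDivisionByZero = 'on';
        opts.DetectIntegerOverflow = 'on';

        [status, files] = sldvrun(model, opts);
        if status == 1
            sldvhighlight(model);   % highlight detected issues on the model
        end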

    Now the next task is testing functionally. So far we have tested my model statically for defects. Feel free to keep putting questions in the Q&A panel; I will take a look at them and we will answer them. But now, having done my static testing, I want to dynamically test my model against my requirements. From the way I've written my requirements, I've derived, let's say, a test specification sheet in which each and every test case, or set of test cases, corresponds to checking a particular requirement.

    To do that, I want to isolate my component of the algorithm, which could be one unit or a complete component, into a separate environment, so that I can give inputs to just that particular component and log only the outputs of that particular component. And then, when I do the testing, I can say whether this component of my algorithm is working fine or not. So let's see how we can do that.

    We can do that using Simulink Test. There's a feature of Simulink Test called the Test Manager, and another feature called the Test Harness. A test harness creates an isolated environment around a component where I can do my testing. And the Test Manager is a framework that helps me do my testing in bulk. So let's say I've made a number of test harnesses that test different kinds of requirements, with different test cases. If I want to run all of these as one suite, I can use the Test Manager framework of Simulink Test. And of course it does a lot more. We'll just show, on the face of it, how it looks, and then we will do a deep dive into these things.
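    As a sketch of the same workflow from the command line (the component path, harness name, and test file name are hypothetical), creating a harness around a component and running a test file through the Test Manager might look like this:

        % Create an isolated test harness around the controller component,
        % driving its inputs from a Signal Builder source.
        sltest.harness.create('LEDLampController/Voltage Loop', ...
            'Name', 'VoltageLoop_Harness', 'Source', 'Signal Builder');
        sltest.harness.open('LEDLampController/Voltage Loop', 'VoltageLoop_Harness');

        % Load a test file authored in the Test Manager, run all enabled test
        % cases, and export a consolidated report.
        sltest.testmanager.load('LEDLamp_FunctionalTests.mldatx');
        results = sltest.testmanager.run;
        sltest.testmanager.report(results, 'FunctionalTestReport.pdf');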

    Using Simulink Test, I can create the test cases, group them into suites, execute everything as a batch or individually, look at the summary, analyze the results, and then also generate a report. At this time, let me go to a short video. As I said, we saw this earlier: this is the Requirements Editor, showing all of my requirements. Now I have linked a few of my requirements to test cases. Let us look at how this works with the Test Manager.

    So I can see under this Links area that I have linked this to a test case, and I can go from this hyperlink directly to that test case; this is where I am now. Let's pause over here to understand. This is my complete test suite. I can make a hierarchy, I can have different tests, and I can define those things. And this is the area where I actually design how a particular test should run.

    So this is the type of the test; this one is a simulation test. The other kinds of tests are baseline tests and equivalence tests; we'll look at some of these later on. This is the Enable or Disable check mark, so I can run everything as a suite and then disable a few things and only run the tests that I'm interested in. I can add certain tags, say, whether this is a functional test case, and things like that. And then I can write a textual description. And under Requirements is where I've linked my requirement. We've not expanded this, but this is the place where I've linked my requirement to this particular test case, and if I expand it, I can go back to the particular requirement.

    Then there is this section called System Under Test. This is where I've specified my model, and on that model, I've specified which test harness, or, for us to understand, which test vectors or scenarios I'm trying to test on this model. So I've made a test harness; we can open the harness and see its contents. It can have one test case or multiple test cases, one test scenario or several, and that is what I'm trying to link to a requirement. There are some more options, but let's just move forward and look at this.

    As I said, I can open up my test harness, and this is what a harness looks like. This is the component under test, which you can see from the corners is a model reference block; it's referring to our model, the component under test. And at the inport, it gets all the inputs from the Signal Builder block. So I've designed my inputs, and there could be different ways to do this: I can import my inputs from an Excel sheet and make a Signal Builder block out of it, or I can write my sequence of tests in a Test Sequence block. We'll see a few things like that, and the different ways I can provide these inputs.

    And then I've provided a placeholder for how I want to test the outputs. Do I want to just observe them? Do I want to write clear verification criteria that say whether the test passed or failed? Those can be done; we'll also look at how this appears further along in the project. But to continue, this is what a test harness is, and now it is linked to, and testing, this particular model. When I run it, I can get a complete pass/fail verdict, and I can analyze the results.

    And not only can I write some verification criteria, I can do a detailed temporal verification analysis as well. If you look at the test case that I've made, it reads in a very intuitive, English-like way. It says: at any point in time, if this complete statement becomes true, then after a delay within the time interval between 0.1 seconds and 0.2 seconds, the high temperature fault, which is an output that we want to observe, must be true.

    So whenever this statement becomes true, only within this interval should this output become true. If that happens, it's a pass; otherwise it's a fail. It also shows the expected behavior. So it says that whenever this trigger happens, which is the statement, then after this delay, within the interval between 0.1 seconds and 0.2 seconds, I want my response, the high temperature fault, to become true. But what has happened is that it became true after that. So it gives an explanation, and it gives me results of what has happened, and this can be expanded to go into detail and understand the issues. So let's look at this.

    I can visualize the different signals of interest and see what has happened, and then I can take these insights and correct the design, and get results which are complete, or all correct. The other thing I want to bring up at this point is these blue areas. This is the coverage that I got on my controller from these test cases. Coverage is an important metric that tells me how good my testing is. Am I testing the same block again and again? Let's say it's an Add block, and I'm writing test vectors like 2 plus 3, and 4 plus 5, and 6 plus 7; then I'm just testing the same block again and again.

    Or let's say it's an "if" condition, and I'm writing test vectors where the if branch is always true and I'm never able to trigger the else branch. Then those test vectors, and my quality of testing, are not that good. So coverage metrics give me an idea of how good my testing is. Based on the testing that I did, I got results of 87%, 90%, 73%. What we can also do-- just give us a minute.

    Can you bring the charger from somewhere? This will die. Can you open the presentation over there? I gave you the link, right? So sorry about this. If it continues, I'll just reset it from a different machine. But let me continue while the battery supports us. So we can look at this on the model. The red parts are what my test vectors have not been able to touch, and the green parts are what my test vectors have been able to touch.

    So this is how I can understand not only that my testing is as per my requirements, but also whether the testing I'm doing is of good quality. And of course, we can generate reports, including reports for the coverage. So this is what we have seen: from all these workflows, from design and requirements, you can do a lot of testing, a lot of mapping and traceability, and then gain a certain amount of confidence. This is what we call model verification.
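    A minimal sketch of collecting these coverage metrics programmatically with Simulink Coverage (the model name is a placeholder; in practice coverage is often simply enabled inside the Test Manager instead):

        % Instrument the model for coverage, simulate, and report.
        model = 'LEDLampController';
        load_system(model);

        cvt = cvtest(model);
        cvt.settings.decision  = 1;   % decision coverage
        cvt.settings.condition = 1;   % condition coverage
        cvt.settings.mcdc      = 1;   % modified condition/decision coverage

        covData = cvsim(cvt);                 % run the simulation with coverage on
        cvhtml('CoverageReport', covData);    % HTML coverage report
        cvmodelview(covData);                 % color covered/uncovered parts on the model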

    Here I'd like to quote a customer once again; this is from Stem. With model-based design, they said, we could see exactly how our controller would work with the hardware even while the hardware was being developed. Since we were working in closed loop with the simulation models, we really didn't need the physical hardware with us. Once we had the hardware, refinements were easy because the simulation matched what we saw on the scope, and that gave us tremendous confidence in the design.

    Next comes verifying with the code. Now we want to generate the code; we are absolutely sure that this design is the golden reference for us. Just give me one more moment. You start. So it's slide 36. I'll share my screen from the laptop. Just give me a moment. Sorry for that. So we were able to figure this out, so let's continue.

    So yes, at this point in time, we have made absolutely sure that this model is my golden reference, and I am confident about generating the code. I want to generate this code so that I can also do rapid control prototyping; we'll see what this means. I can do my hardware-in-the-loop testing, and then I want to test my design with formal methods. So definitely the first step is to simulate the code. Just to show you how that looks, let's go to MATLAB.

    For the sake of time, I might not be generating the code in this case. But let's try to see. OK. So let me show you a few things over here. So I went to the code perspective, and I see a lot of options over here. So we have what is called as a Quick Start Guide. So when I opened the Quick Start Guide, this is where I can start to make a draft code of my system. So this is like a seven step guide.

    I can go Next, and say that I want to generate the code for my model, this closed-loop controlled voltage model, using Quick Start. So this is the name of the model, and I can choose the code output, which here is a single instance. It asks me certain questions about my model and tries to capture those answers. It sees what rate this works at, and how many sample rates are present in my system, and it captures that information.

    Then once I get that, this is where I can define the word sizes, or in other words, which embedded target I want to generate this code for. Because when I generate this code, and I want to flash it onto, or execute it on, a target which is, let's say, 16-bit, or 32-bit, or 64-bit, I need to tell the tool the different word sizes the compiler will understand for that target. What does an integer mean for that target? So I get a complete list of the different supported targets. For this, let's say it's a TI C2000 device, and it takes those values.

    Let me do a Next. I can say, at a very high level, what my objective is. So I can say I want execution efficiency. And when I press Next, it shows me a lot of values: the new values it suggests for the configuration parameters, alongside the old values. There are a lot of options there, but based on the objective I just selected, it suggests new values and shows me the old ones. I'm not going to press Next now, but when I do, it will generate the code. I'll just minimize; I already have the code with me.
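    The same Quick Start choices can be captured as a hedged command-line sketch (the model name is a placeholder; the hardware device string and objective follow the selections described above):

        % Configure the model for embedded C code generation and build it.
        model = 'LEDLampController';
        load_system(model);

        set_param(model, 'SystemTargetFile', 'ert.tlc');                    % Embedded Coder target
        set_param(model, 'ProdHWDeviceType', 'Texas Instruments->C2000');   % target word sizes
        set_param(model, 'ObjectivePriorities', {'Execution efficiency'});  % high-level objective
        set_param(model, 'GenerateReport', 'on');                           % code generation report

        slbuild(model);   % generate the C code (and compile it if a toolchain is set up)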

    OK, so sorry, we had to restart. The code generation was interrupted, so let us leave this and come back to it in a moment. Let's go to our presentation; we'll come back to that. This is also another example where I can actually target a hardware device and see my output. So in this case, it is plugged into a TI board.

    And this is the model which we saw yesterday that we were developing. And now, with the help of automatic code generation, the application software can be built and flashed onto the target, and we can capture the results from the target in real time. This is what we call monitoring and tuning: when I do that, whatever is happening on the target, I can see the outputs from the target and look at them in my scope.

    So you can generate ANSI C code, which is independent of a target, or you can generate target-specific code. And if you have a target, and it's a supported target, you can also flash it and make it work on the target. OK, so let's go next. We have seen this; let's go next. Now the next concept that comes up is hardware-in-the-loop. We, as power electronics engineers, are very familiar with this term. But what I want to convey is: the environment, or the component that I want to control, might be very costly, or it might be unsafe.

    So if I'm trying to switch a very high voltage, then instead of playing with that voltage right away for my testing, I want to emulate it. This is nothing but emulating my environment, or my plant, in the closed loop. What I'm doing is putting the controls algorithm on my embedded device, and then I want to work in closed loop and connect my embedded device to a plant simulator. The plant simulator is nothing but what we call a real-time machine.

    Speedgoat is one of our partners who provides such high-performance computing devices that can simulate a plant. And since this plant was developed in Simulink, from the same plant you can generate the code for this machine, a real-time system. And for the application software, which you want to go onto the controller, you can generate and deploy the code in the same way.
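    As a hedged sketch, assuming a Speedgoat target configured with Simulink Real-Time (R2020b-or-later API) and a plant model named LEDLampPlant (both placeholders), deploying the plant to the real-time machine might look like this:

        % Build the Simulink plant model as a real-time application and run it
        % on the Speedgoat machine for HIL testing.
        plant = 'LEDLampPlant';
        load_system(plant);
        set_param(plant, 'SystemTargetFile', 'slrealtime.tlc');   % Simulink Real-Time target
        slbuild(plant);

        tg = slrealtime('TargetPC1');   % named Speedgoat target, as configured locally
        connect(tg);
        load(tg, plant);                % download the real-time application
        start(tg);                      % run the plant emulation in real time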

    So this is what I explained: the plant, which was made in Simulink, goes onto the real-time machine, and the controller goes onto the embedded target. And when they interact through the physical I/Os of the machines, this is called hardware-in-the-loop. Again, just to bring this out: when we were testing all of this from the requirements, we can get traceability of the requirements at each and every step. We saw how we can get it onto the model components, onto the blocks; we can get it on the test vectors, so we know which tests are linked to which requirements.

    And then we can also get the requirements in the generated code as comments. So from requirements to the generated code, we have digital end-to-end traceability. We can also generate reports. This is called a traceability matrix, which gives, for each requirement, the corresponding line numbers of the code and the blocks, to give us this confidence.
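    One hedged way to carry the requirements into the generated code as comments (the model name is a placeholder; the parameter corresponds to the "Include requirements in block comments" option in the code generation settings):

        % Embed linked requirement descriptions as comments in the generated C code.
        model = 'LEDLampController';
        load_system(model);
        set_param(model, 'ReqsInCode', 'on');
        slbuild(model);   % regenerate the code with requirement comments included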

    Again, a study, this time from SuperGrid Institute: the transition from design model to real-time software was very fast thanks to the complete compatibility between MATLAB and Simulink and the Speedgoat. The Speedgoat target machine provides fast and robust control of the switching semiconductors in a difficult electromagnetic environment. So there are definitely scenarios where we want a device simulating the Simulink plant, so that we don't need to work with the real hardware, because if something goes wrong, we might damage it or cause more issues.

    The next thing that comes up is that I want to verify the whole system. Even at the system level, I want to do my hardware-in-the-loop testing, but I also want to make sure that this toolchain by MathWorks, the Embedded Coder, is not injecting any bugs in the process of generating code. How do I get confidence in that? To get that confidence, what I would do is this: I already have my test vectors ready, right? I would give the same test vectors to the software, the C code that I've generated, and look at the output. Is that output the same as the output of my model? In that case, I would be able to say that equivalence has been established. This is what we call software-in-the-loop. If I do it with an embedded target, we call it processor-in-the-loop.

    Again, we can do that using the Test Manager and the test harness. We can use the same test harness and the same test vectors; we just need to configure them in a different way. So instead of giving the inputs to the model, I'll give the same inputs to the generated code and then verify it. This is my software-in-the-loop. And if I want to do processor-in-the-loop, or FPGA-in-the-loop, that means I put the algorithm onto the target, give the same inputs to the target, collect the outputs from the target, and then do the verification to see whether it is equivalent to the model or not.

    As I said, to do this we need to make an equivalence kind of test case, which runs two simulations. I'm using the same test cases. I can go to Simulation 1: same harness. Then I'll change one more setting over there; this is what specifies whether my testing should be software-in-the-loop or processor-in-the-loop. So it will use the same test harness and same test vectors, given to the model in Simulation 1. And in Simulation 2, it will run either the C code or the executable on the processor, then collect the outputs, compare, and we can see the result.

    When you see the output, it has plotted both outputs; since they were the same, they are plotted on top of each other. And what it also did, since visually we might not be able to tell, is take the difference. We saw that the difference between these outputs is zero throughout, so that means our criterion passed, and no defect or bug was injected either by the code generator or even by the compiler for that particular target, because to produce the object code, another piece of software, the compiler, comes into the picture to compile it and make it run on the hardware.
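    A simplified sketch of the same back-to-back idea, run by hand rather than through the Test Manager (the harness, block, and logging names are placeholders; it assumes the component under test is a Model block and that the root outputs are logged to yout in Dataset format):

        % Run the harness once in normal mode and once in SIL mode, then
        % compare the logged outputs.
        harness = 'VoltageLoop_Harness';
        cut = [harness '/LEDLampController'];   % component under test (Model block)
        open_system(harness);

        set_param(cut, 'SimulationMode', 'Normal');
        outNormal = sim(harness);

        set_param(cut, 'SimulationMode', 'Software-in-the-loop (SIL)');
        outSIL = sim(harness);   % generated C code runs in place of the model

        % Equivalence criterion: the difference should be (numerically) zero.
        d = outNormal.yout{1}.Values.Data - outSIL.yout{1}.Values.Data;
        fprintf('Max difference between model and SIL outputs: %g\n', max(abs(d(:))));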

    So now I know that this is absolutely reliable; this is as per my requirements. And again, at the system level, I can put the plant model on the Speedgoat machine, put the controller algorithm on the embedded target, make them talk to each other through the physical I/O cards and physical signals, and then see the output. Just to show you an example, this is how the hardware-in-the-loop simulation looks for our product today. All the outputs that you see are actually the outputs coming from the interaction of the code, which is executing on the processor, and the plant, which is being executed on the Speedgoat machine.

    We can make different changes and see the results. So you have seen what hardware-in-the-loop and processor-in-the-loop are. Again, to give a story from ABB: they say, with model-based design, our developers' productivity easily increased tenfold. Simulation and code generation enabled us to turn changes around quickly and eliminate human errors in the coding. These are some very good statements; if we can adopt this, we would want to be confident and say these things about our products as well.

    Now, the code that we want to put on the end hardware might not only be coming from the models; it might not only be auto-generated code. There will be a component of hand-written code that goes in as well. For that, we can use Polyspace to check the correctness of this code. To see how Polyspace helps us, just look at this snippet of code, at this particular line, which is trying to calculate x divided by the difference of x and y.

    If you look at what can go wrong here, there are a number of things. There could be a condition where we might have forgotten to initialize x or y, or maybe both. There could be a condition where this difference creates an overflow. There could be a condition where they might be equal, creating a zero, so that a division by zero would occur. So there could be different scenarios, in the combination of our auto-generated code and hand-written code, where such defects go into the field and prove catastrophic, again, because these are what we call runtime errors.

    This is where Polyspace helps us. Polyspace, in a nutshell, helps us make robust software and improve software quality, because we can detect a lot of bugs beforehand, we can prove that the code is safe and sound, and we can comply with the safety standards. It can find defects like data flow issues, numerical issues, and concurrency issues, and it can help you comply with standards like MISRA and other coding guidelines such as CERT C, and find defects like division by zero, dead code, and illegal pointer dereferencing.

    So certain things can be proved, and this is all done mathematically, because this is a formal-methods-based tool. It understands the algorithm of the code mathematically and tries to prove or disprove the issue. And as I said, it can help us comply with different standards. Since it's a mathematical engine, it also tries to prove the absence of errors. So it doesn't just say that the tool is not able to find an issue; if it says that there is no issue, it is proving the absence of the error, because that's a mathematical proof it establishes.

    This is how the different reports from the Polyspace tools would look. These are just a few snippets, let's say on the guidelines, where you can see how much of my code is compliant with them; there are different dashboards if you are interested. This also relates to poll question two: keep answering that, and let us know, and we'll get in touch with you.
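    A hedged sketch of launching such an analysis on the code generated from the model, directly from MATLAB (the model name is a placeholder, and it assumes Polyspace and its Simulink integration are installed and linked):

        % Analyze the generated code with Polyspace from the Simulink model.
        model = 'LEDLampController';
        load_system(model);

        psOpts = pslinkoptions(model);
        psOpts.VerificationMode = 'CodeProver';   % 'BugFinder' finds defects;
                                                  % 'CodeProver' proves the absence
                                                  % of run-time errors
        pslinkrun(model, psOpts);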

    So, all in all, in a nutshell, if I have to bring out the benefits of model-based design specifically for power electronics control, I can say I can do all of this, with requirements traceability throughout, and for the product that I get, I'll be 100% confident that it is as per my design intention and not outside it. So, three key takeaways from today's session. We can use the graphical programming language, because it is intuitive and powerful, to design our algorithms, the supervisory logic, and the controllers for power electronics devices. We can use state-of-the-art toolsets to facilitate design and verification. And we can work with different teams who are interested in different areas.

    And then, in closed loop, we can see whether my controller works the way I was thinking and intending. And the whole reason we want to adopt this is to find the errors very early, do the verification very early, cut down the cost of development and of production, and deliver good-quality software. You can definitely visit and join the online community on power electronics. Please share these slides.

    I hope there are no open questions; if you have anything, please feel free to reach out to me. And we do have a lot of trainings for you, which can be self-paced or instructor-led. This is where I would like to leave you right now: you can go into the different areas that you are interested in. So if you're interested in, let's say, modeling and simulation, these colored areas would be useful to get started. It doesn't mean you need to go into all areas, but for the different areas you feel are important for your workflows, we have a lot of advanced trainings, with an instructor, by video, or project-based, showing the different capabilities of the tools for that workflow.

    Feel free to reach out to us to get to know more. I would like to say goodbye to everyone. Thank you for being with me today throughout the journey, thank you for listening with patience, and thank you for asking questions. I hope this session was helpful, and I hope everybody got to learn something new, or at least my three takeaways: that you can design using a graphical language, you can generate code, and you can do your verification early in the process. And there are any number of workflows, features, and tools to help you do that. If you want any help in that regard, feel free to reach out to the point of contact at MathWorks through whichever channel you joined from. Thank you, everyone. Have a nice day. Bye.
