Generate Generic C/C++ Code for Deep Learning Networks in Simulink

As of R2021a, you can use Simulink Coder and Embedded Coder to generate generic ANSI/ISO-compliant C and C++ code, free from dependence on third-party deep learning libraries, for Simulink models containing deep learning networks. Incorporate convolutional and recurrent neural networks into Simulink with blocks from the Deep Learning Toolbox, including the Stateful Predict and Stateful Classify blocks. Apply pre- and post-processing to time-series data for use within Simulink models. Verify the numerical equivalence of the generated code with software-in-the-loop (SIL) and processor-in-the-loop (PIL) testing support from Embedded Coder. Deploy the generated code to any processor that can compile C and C++, including ARM Cortex-M processors, digital signal processors, microcontrollers, and more. For example, deploy a long short-term memory (LSTM) network to an STM32F7 Discovery Board with the Embedded Coder Hardware Support Package for STMicroelectronics Discovery Boards.

As of R2021a, you can use Simulink Coder and Embedded Coder to generate generic C and C++ code from Simulink models that contain deep neural networks. The generic C and C++ code does not depend on third-party libraries and, as a result, can be deployed to any embedded processor that compiles C and C++ code, including ARM Cortex-M series processors, DSPs, and microcontrollers from a range of device vendors.

Generic C and C++ code can be generated for both convolutional and recurrent deep learning networks in Simulink. You can incorporate network objects into your Simulink models with blocks from Deep Learning Toolbox, including the Stateful Classify and Stateful Predict blocks introduced in R2021a, or with the MATLAB Function block. A variety of networks and layers are supported for generic C and C++ code generation; the full list can be found in our documentation.
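
As an illustration of the MATLAB Function block route mentioned above, here is a minimal sketch of a block body. It assumes the pretrained LSTM network is stored in a MAT-file named rulNetwork.mat and keeps the network state across time steps with predictAndUpdateState; the file name and function name are assumptions for illustration, not taken from the video.

function rul = predictRUL(sensors)
% Hypothetical MATLAB Function block body that runs an LSTM at each time step.
persistent net
if isempty(net)
    % Load the pretrained network once; supported for generic C/C++ code generation.
    net = coder.loadDeepLearningNetwork('rulNetwork.mat');
end
% Keep the LSTM state between calls so the block behaves statefully.
[net, rul] = predictAndUpdateState(net, sensors);
end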

Now let's deploy a deep learning network from Simulink to an ARM Cortex-M processor. In this example, we'll use an LSTM network to predict the remaining useful life of turbofan engines. The pre-trained network accepts sequence data from 17 engine sensors and outputs a prediction of the engines' remaining useful life, measured in cycles. The time-series input data is fed to the model with a From Workspace block and is then passed to the Predict block that runs the neural network.
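
The data-feeding step above can be scripted; the sketch below packages one engine's 17-channel sensor sequence as a timeseries for the From Workspace block to replay. The file name, variable names, and sample time are assumptions for illustration.

% Hypothetical workspace setup for the From Workspace block.
load('turbofanTestData.mat', 'sensorData');    % [numSteps x 17] sensor matrix (file/variable names assumed)
t     = (0:size(sensorData,1)-1)';             % one sample per engine cycle
simin = timeseries(sensorData, t);             % the From Workspace block reads this variable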

Using the Deep Network Designer app, we can take a deeper look into the network and see that it contains six layers, including an LSTM layer. Let's simulate the model in Simulink. From the simulation, we can see that the pre-trained neural network forecasts the remaining useful life of the turbofan engines reasonably well for each of the observations shown here. The predicted values closely follow the actual values, with a root mean square error of 20.37.
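
The inspection and accuracy check can also be done from the command line. The sketch below assumes the network is in a workspace variable net, the model logs its prediction to yout in Dataset format, and the test targets are in YTest; the model and variable names are assumptions.

deepNetworkDesigner(net)                    % inspect the six layers, including the LSTM layer
simOut = sim('rulPredictionModel');         % model name assumed
YPred  = simOut.yout{1}.Values.Data;        % predicted remaining useful life per cycle
rmse   = sqrt(mean((YPred - YTest).^2))     % the video reports about 20.37 cycles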

Now let's generate generic C code from the model and deploy it to an ARM Cortex-M processor. In configuration parameters, we'll select the STM32F746G-Discovery board as our hardware board. Under Code Generation, we've set the system target file to use Embedded Coder and the target language to C. We'll use GNU Tools for ARM Embedded Processors as our toolchain. Finally, we'll ensure the deep learning target library is set to None to remove any dependencies on third-party libraries. With these settings in place, let's generate code for the subsystem containing the neural network.
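
The same configuration can be applied programmatically. The sketch below uses standard configuration parameter names; the exact board, toolchain, and subsystem strings are assumed to match the dialog text in this example and may differ on your installation.

mdl = 'rulPredictionModel';                                        % model name assumed
set_param(mdl, 'HardwareBoard',    'STM32F746G-Discovery');        % requires the STMicroelectronics support package
set_param(mdl, 'SystemTargetFile', 'ert.tlc');                     % Embedded Coder target
set_param(mdl, 'TargetLang',       'C');
set_param(mdl, 'Toolchain',        'GNU Tools for ARM Embedded Processors');
set_param(mdl, 'DLTargetLibrary',  'None');                        % no third-party deep learning library
slbuild([mdl '/Predict RUL']);                                     % generate code for the subsystem (name assumed)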

In the code-generation report, we can see the files generated do not include any external deep learning libraries. Let's search for the model step function in the generated code. Looking at the step function, we can see that it contains a predict method. Inside the predict method, we can see a set of weights and biases defined for use in the neural network. Scrolling down, we can see the code used to calculate the outputs of the neural net for the time series inputs passed in at each time step.

With the code successfully generated, let's deploy it to our target. We'll use processor-in-the-loop (PIL) execution to numerically verify the generated code's output. In the second model, we've placed the same Predict block which we saw previously inside of a model reference and set its simulation mode to PIL. In the SIL/PIL Manager app, we'll set our simulation mode to SIL/PIL only to collect results from running the generated code with PIL execution and choose Model blocks in SIL/PIL mode as our system under test.
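
Switching the model reference to PIL can also be done from the command line; the model and block names below are assumed for this example.

% Run the referenced model's generated code on the target instead of simulating it.
set_param('rulVerificationModel/RUL Network', 'SimulationMode', 'Processor-in-the-loop (PIL)');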

Post-simulation, we'll compare the prediction outputs from running the generic C code on hardware to those from Simulink. Now let's run the generated code. Code is first generated for the model reference block in PIL mode. A connection is established with our Cortex-M Discovery board. And finally, the source code is built and downloaded to it.

The left set of plots shows that the results from running the generated code on hardware closely follow the actual test data. To verify the code's numerical accuracy, we created a second set of plots on the right to show the differences between the predictions from the Simulink simulation and from PIL execution on our target hardware. All 10 observations show the deviation to be within a tolerance band of about five hundred-thousandths, a negligible effect on the neural network's accuracy.
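
A command-line version of this check would run the model once with the Model block in normal mode and once in PIL mode, then compare the logged predictions; the model, block, and signal names are assumptions.

blk = 'rulVerificationModel/RUL Network';                           % Model block path assumed
set_param(blk, 'SimulationMode', 'Normal');
normalOut = sim('rulVerificationModel');                            % reference predictions from simulation
set_param(blk, 'SimulationMode', 'Processor-in-the-loop (PIL)');
pilOut    = sim('rulVerificationModel');                            % predictions from the generated code on the board
maxDev    = max(abs(pilOut.yout{1}.Values.Data - normalOut.yout{1}.Values.Data))
% Expect only tiny deviations, with a negligible effect on prediction accuracy.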

Overall, we've seen how it's now possible to generate generic C and C++ code from deep learning networks in Simulink that does not depend on optimized third-party libraries, effectively extending deployment to nearly any embedded processor. Additionally, with processor-in-the-loop testing, we verified that the output from the generated code matches that from the Simulink simulation. To learn more about deep learning code generation in MATLAB and Simulink, please click one of the links below or refer to our documentation.

Related Products

  • Deep Learning Toolbox
  • Embedded Coder
  • MATLAB Coder
  • Simulink
  • Simulink Coder

Related Information

  • Generate generic C/C++ code for sequence-to-sequence regression that uses deep learning
  • Generate generic C/C++ code for deep learning networks
