Code Generation

Generate C/C++, CUDA®, or HDL code and deploy deep learning networks

Generate code for pretrained deep neural networks. You can accelerate the simulation of your algorithms in MATLAB® or Simulink® by using different execution environments. By using support packages, you can also generate and deploy C/C++, CUDA, and HDL code on target hardware.
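
For example, code generation typically starts from an entry-point function that loads the pretrained network into a persistent variable and calls predict. This is a minimal sketch; the function name predictLabel and the network file myNetwork.mat are placeholders for your own entry point and saved network.

    function out = predictLabel(in)
    % Entry-point function for deep learning code generation.
    % Load the pretrained network once and reuse it across calls.
    persistent net;
    if isempty(net)
        net = coder.loadDeepLearningNetwork('myNetwork.mat'); % placeholder file name
    end
    out = predict(net,in);
    end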

Use Deep Learning Toolbox™ together with the Deep Learning Toolbox Model Quantization Library support package to reduce the memory footprint and computational requirements of a deep neural network by quantizing the weights, biases, and activations of layers to reduced-precision, scaled integer data types. You can then generate C/C++, CUDA, or HDL code from these quantized networks.
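
A minimal sketch of the quantization workflow, assuming net is a pretrained network supported by the quantizer and calData and valData are datastores of representative inputs:

    % Create a quantization object for the pretrained network
    quantObj = dlquantizer(net,'ExecutionEnvironment','GPU');

    % Exercise the network with calibration data to collect the
    % dynamic ranges of weights, biases, and activations
    calResults = calibrate(quantObj,calData);

    % Compare the quantized network against the original on validation data
    valResults = validate(quantObj,valData);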

Use MATLAB Coder™ or Simulink Coder together with Deep Learning Toolbox to generate MEX or standalone CPU code that runs on desktop or embedded targets. You can deploy the generated standalone code that uses the Intel® MKL-DNN library or the ARM® Compute library. Alternatively, you can generate generic CPU code that does not call third-party library functions.
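
For example, a minimal sketch of generating MEX code that calls the Intel MKL-DNN library, assuming predictLabel is an entry-point function like the one above and the network expects a 224-by-224-by-3 single input:

    cfg = coder.config('mex');
    cfg.TargetLang = 'C++';
    % Use 'mkldnn' here, 'arm-compute' for the ARM Compute library,
    % or 'none' for generic CPU code with no third-party calls
    cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');
    codegen -config cfg predictLabel -args {ones(224,224,3,'single')}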

Use GPU Coder™ together with Deep Learning Toolbox to generate CUDA MEX or standalone CUDA code that runs on desktop or embedded targets. You can deploy the generated standalone CUDA code that uses the CUDA deep neural network library (cuDNN), the TensorRT™ high-performance inference library, or the ARM Compute library for Mali GPU.
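
The corresponding sketch for CUDA MEX generation with GPU Coder, again assuming the predictLabel entry-point function and input size used above:

    cfg = coder.gpuConfig('mex');
    cfg.TargetLang = 'C++';
    % Target the cuDNN library; use 'tensorrt' for the TensorRT library
    cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
    codegen -config cfg predictLabel -args {ones(224,224,3,'single')}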

Use Deep Learning HDL Toolbox™ together with Deep Learning Toolbox to generate HDL code for pretrained networks. You can deploy the generated HDL code on Intel and Xilinx® FPGA and SoC devices.
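
A minimal sketch of deploying a pretrained network to a Xilinx board over Ethernet with Deep Learning HDL Toolbox, assuming net is a supported pretrained network, inputImg is a preprocessed input, and zcu102_single is one of the shipped bitstreams for the target board:

    hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');
    hW = dlhdl.Workflow('Network',net,'Bitstream','zcu102_single','Target',hTarget);
    compile(hW);   % generate weights, instructions, and memory map
    deploy(hW);    % program the FPGA bitstream and load the network
    prediction = predict(hW,inputImg);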

Figure: Workflow diagram for code generation from deep neural networks.
