Target library

Description

Specify the target deep learning library used during code generation.

Category: Code Generation > Interface

Settings

Default: None

None

Use this option for generating code that does not use any third-party deep learning library.

MKL-DNN

Use this option for generating code that uses the Intel® Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).

ARM Compute

Use this option for generating code that uses the ARM® Compute Library.

cuDNN

Use this option for generating code that uses the CUDA® Deep Neural Network library (cuDNN).

TensorRT

Use this option for generating code that takes advantage of NVIDIA® TensorRT, a high-performance deep learning inference optimizer and runtime library.

Dependencies

  • To enable this parameter, set Language to C++ and set System target file to grt.tlc or ert.tlc (see the configuration sketch after this list).

  • MKL-DNN or ARM Compute is available when GPU acceleration on the Code Generation pane is disabled.

  • cuDNN or TensorRT requires a GPU Coder™ license.

  • cuDNN or TensorRT is available when GPU acceleration on the Code Generation pane is enabled.
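
The sketch below illustrates these dependencies by setting the prerequisite configuration parameters programmatically. The model name myModel is a placeholder, and the GenerateGPUCode parameter name used here for the GPU acceleration setting is an assumption; TargetLang, SystemTargetFile, and DLTargetLib are the standard parameter names.

    % Prerequisites: C++ language and a GRT- or ERT-based system target file.
    set_param('myModel', 'TargetLang', 'C++');
    set_param('myModel', 'SystemTargetFile', 'ert.tlc');   % or 'grt.tlc'

    % With GPU acceleration disabled, MKL-DNN or ARM Compute can be selected.
    set_param('myModel', 'GenerateGPUCode', 'None');        % assumed parameter name
    set_param('myModel', 'DLTargetLib', 'MKL-DNN');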

Command-Line Information

Parameter: DLTargetLib
Type: character vector
Value: 'None' | 'MKL-DNN' | 'ARM Compute' | 'cuDNN' | 'TensorRT'
Default: 'None'
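
As a usage sketch, you can query and set the parameter with get_param and set_param; the model name myModel is a placeholder.

    % Query the current deep learning target library.
    get_param('myModel', 'DLTargetLib')            % returns, e.g., 'None'

    % Generate code that uses the cuDNN library
    % (requires a GPU Coder license and GPU acceleration enabled).
    set_param('myModel', 'DLTargetLib', 'cuDNN');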
