GPU Coder™ Support Package for NVIDIA® GPUs supports the following development platforms:
NVIDIA Jetson AGX Xavier platform.
NVIDIA Jetson TX2 embedded platform.
NVIDIA Jetson TX1 embedded platform.
NVIDIA DRIVE PX2 platform.
The GPU Coder Support Package for NVIDIA GPUs uses an SSH connection over TCP/IP to execute commands while building and running the generated CUDA® code on the DRIVE or Jetson platforms. Connect the target platform to the same network as the host computer. Alternatively, you can use an Ethernet crossover cable to connect the board directly to the host computer.
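Before attempting a build, you can sanity-check the network setup described above from the host computer. The following is a sketch only; the IP address and user name are placeholders for your board's values (the default user on stock Jetson images is often `ubuntu` or `nvidia`).

```
# Run on the host. Replace 192.168.1.15 and ubuntu with your board's
# IP address and user name.
ping -c 1 192.168.1.15               # board reachable over TCP/IP?
ssh ubuntu@192.168.1.15 'uname -m'   # SSH login works; expect an ARM
                                     # architecture such as aarch64
```

If the SSH step prompts for a password and then prints the architecture, the support package should be able to execute its build and run commands on the board.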
Use the JetPack or DriveInstall software to install the OS image, developer tools, and libraries required for developing applications on the Jetson or DRIVE platforms. You can use the Component Manager in the DriveInstall software to select the components to install on the target hardware. For installation instructions, refer to the NVIDIA board documentation. At a minimum, install the OS image, the CUDA toolkit, and the cuDNN and TensorRT libraries on the target.
Install the Simple DirectMedia Layer (SDL v1.2) library, V4L2 library, and V4L2 utilities for running the webcam examples.
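On the Ubuntu-based Jetson and DRIVE images, these libraries can typically be installed with apt. The package names below follow standard Ubuntu naming and are assumptions; they may differ on your L4T or DRIVE OS release.

```
# Run on the target hardware.
sudo apt-get update
sudo apt-get install libsdl1.2-dev v4l-utils libv4l-dev
```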
GPU Coder Support Package for NVIDIA GPUs uses environment variables to locate the necessary tools, compilers, and libraries required for code generation. Ensure that the following environment variables are set.
| Variable Name | Default Value | Description |
| --- | --- | --- |
|   |   | Path to the CUDA toolkit executables on the Jetson or DRIVE platform. This path must also be accessible in non-interactive SSH logins, for example by exporting it from the shell startup file on the target. |
|   |   | Path to the CUDA library folder on the Jetson or DRIVE platform. |
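A minimal sketch of the corresponding additions to the shell startup file on the target. The `/usr/local/cuda` prefix is an assumption; adjust it to your CUDA install location. Note that Ubuntu's default `~/.bashrc` returns early for non-interactive shells, so place these lines near the top of the file, before the "If not running interactively" guard, or non-interactive SSH logins will not pick them up.

```shell
# Assumed CUDA install prefix; adjust to match the target.
# Put these lines before the interactive-shell guard in ~/.bashrc so
# that non-interactive SSH logins also see them.
export PATH=/usr/local/cuda/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
```

You can verify the non-interactive case from the host with `ssh <user>@<target> 'which nvcc'`, which should print the nvcc path rather than nothing.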
The webcam examples also require a webcam connected to the USB host port of the target hardware.
This support package requires the base product, GPU Coder. GPU Coder requires the following MathWorks® and third-party products.
MATLAB Coder™ (required).
Parallel Computing Toolbox™ (required).
Deep Learning Toolbox™ (required for deep learning).
Image Processing Toolbox™ (recommended).
Embedded Coder® (recommended).
NVIDIA GPU enabled for CUDA.
CUDA toolkit and driver.
CUDA Deep Neural Network library (cuDNN).
NVIDIA TensorRT, a high-performance deep learning inference optimizer and runtime library.
For information on the version numbers for the compiler tools and libraries, see Installing Prerequisite Products (GPU Coder). For information on setting up the environment variables on the host development computer, see Setting Up the Prerequisite Products (GPU Coder).
It is recommended to use the same versions of cuDNN and TensorRT libraries on the target board and the host computer.
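One way to compare versions is to query the installed packages on both machines; `dpkg` is available on the Ubuntu-based Jetson and DRIVE images, though the exact package names vary by release (the grep patterns below are assumptions).

```
# Run on both the host and the target, then compare the reported versions.
dpkg -l | grep -i cudnn      # cuDNN package version
dpkg -l | grep -i tensorrt   # TensorRT package version
nvcc --version               # CUDA toolkit version
```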