GPU Coder™ Support Package for NVIDIA® GPUs supports the following development platforms:
NVIDIA Jetson AGX Xavier platform.
NVIDIA Jetson Nano platform.
NVIDIA Jetson TX2 embedded platform.
NVIDIA Jetson TX1 embedded platform.
NVIDIA DRIVE PX2 platform.
The GPU Coder Support Package for NVIDIA GPUs uses an SSH connection over TCP/IP to execute commands while building and running the generated CUDA® code on the DRIVE or Jetson platforms. Connect the target platform to the same network as the host computer. Alternatively, you can use an Ethernet crossover cable to connect the board directly to the host computer.
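For example, before building, you can confirm from the host computer that the board accepts non-interactive SSH commands. The IP address and user name below are placeholders; replace them with the values for your board.

# Placeholder address and user name; replace with the values for your board.
ssh nvidia@192.168.1.15 "echo connection ok"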
Use the JetPack or the DriveInstall software to install the OS image, developer tools, and the libraries required for developing applications on the Jetson or DRIVE platforms. You can use the Component Manager in the JetPack or DriveInstall software to select the components to install on the target hardware. For installation instructions, refer to the NVIDIA board documentation. At a minimum, you must install the following components (a quick verification sketch follows this list):
CUDA toolkit.
cuDNN library.
TensorRT library.
OpenCV library.
GStreamer library (v1.0 or higher) for deployment of the videoReader function.
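As a quick check (a minimal sketch; exact package names vary between JetPack and DriveInstall releases), you can confirm from a shell on the target hardware that these components are present:

# Check the CUDA toolkit compiler.
nvcc --version
# Check the GStreamer runtime (v1.0 or higher).
gst-launch-1.0 --version
# Look for the cuDNN, TensorRT, and OpenCV packages; names differ between releases.
dpkg -l | grep -i -E 'cudnn|tensorrt|opencv'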
The GPU Coder Support Package for NVIDIA GPUs has been tested with the following JetPack and DRIVE SDK versions:
| Hardware Platform | Software Version |
| --- | --- |
| Jetson AGX Xavier, TX2/TX1, Nano | JetPack 4.3 |
| DRIVE | DRIVE SDK 5.0.10.3-12606092 |
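To confirm which release is installed on a Jetson board, you can read the L4T release file on the target. The file path below is standard on recent JetPack images, but treat it as an assumption for your specific image.

# Prints the L4T (Linux for Tegra) release corresponding to the installed JetPack version.
cat /etc/nv_tegra_release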
Install the Simple DirectMedia Layer (SDL v1.2) library, V4L2 library, and V4L2 utilities for running the webcam examples. You must also install the development packages for these libraries.
For example, on Ubuntu, use the apt-get command to install these libraries:
sudo apt-get install libsdl1.2-dev
sudo apt-get install v4l-utils
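After installing the V4L2 utilities, you can confirm on the target that the webcam is detected, for example:

# Lists the video capture devices (such as /dev/video0) visible to V4L2.
v4l2-ctl --list-devices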
The GPU Coder Support Package for NVIDIA GPUs uses environment variables to locate the tools, compilers, and libraries required for code generation. Ensure that the following environment variables are set.
| Variable Name | Default Value | Description |
| --- | --- | --- |
| PATH | /usr/local/cuda/bin | Path to the CUDA toolkit executables on the Jetson or DRIVE platform. |
| LD_LIBRARY_PATH | /usr/local/cuda/lib64 | Path to the CUDA library folder on the Jetson or DRIVE platform. |
Ensure that the required environment variables are accessible from non-interactive SSH logins. For example, you can use the export command at the beginning of the $HOME/.bashrc shell configuration file to add the environment variables, as shown below.
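A minimal sketch of the export commands, assuming the default CUDA installation paths listed in the table above:

# Add near the top of $HOME/.bashrc on the target hardware.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH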
Alternatively, you can set system-wide environment variables in the /etc/environment file. You must have sudo privileges to edit this file.
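For example, the corresponding entries in /etc/environment might look like the following sketch, again assuming the default CUDA paths. Note that /etc/environment does not expand other variables, so spell out the full PATH value.

# /etc/environment entries; values are plain strings, not shell expressions.
PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/cuda/bin"
LD_LIBRARY_PATH="/usr/local/cuda/lib64"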
The webcam examples also require a webcam connected to the USB host port of the target hardware.
This support package requires the base product, GPU Coder. GPU Coder requires the following MathWorks® and third-party products.
MATLAB® (required).
MATLAB Coder™ (required).
Parallel Computing Toolbox™ (required).
Simulink® (required for generating code from Simulink models).
Computer Vision Toolbox™ (recommended).
Deep Learning Toolbox™ (required for deep learning).
Embedded Coder® (recommended).
Image Processing Toolbox™ (recommended).
Simulink Coder (required for generating code from Simulink models).
GPU Coder Interface for Deep Learning Libraries support package (required for deep learning).
NVIDIA GPU enabled for CUDA.
CUDA toolkit and driver.
C/C++ Compiler.
CUDA Deep Neural Network library (cuDNN).
NVIDIA TensorRT, a high-performance deep learning inference optimizer and runtime library.
For information on the version numbers for the compiler tools and libraries, see Installing Prerequisite Products. For information on setting up the environment variables on the host development computer, see Setting Up the Prerequisite Products.
Note
If possible, use the same versions of the cuDNN and TensorRT libraries on the target board and the host computer.