
Install and Setup Prerequisites for NVIDIA Boards

Target Requirements

Hardware

GPU Coder™ Support Package for NVIDIA® GPUs supports the following development platforms:

  • NVIDIA Jetson AGX Xavier platform.

  • NVIDIA Jetson TX2 embedded platform.

  • NVIDIA Jetson TX1 embedded platform.

  • NVIDIA DRIVE PX2 platform.

The GPU Coder Support Package for NVIDIA GPUs uses an SSH connection over TCP/IP to execute commands while building and running the generated CUDA® code on the DRIVE or Jetson platforms. Connect the target platform to the same network as the host computer. Alternatively, you can use an Ethernet crossover cable to connect the board directly to the host computer.
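For example, after connecting the board you can confirm that the host can open a non-interactive SSH session to it. The user name and address below are placeholders; substitute the values for your board (ubuntu/ubuntu is a common default login on Jetson images, but verify it for your setup).

# Replace the user name and IP address with the values for your board.
ssh ubuntu@192.168.1.15 uname -a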

Software

Use the JetPack or the DriveInstall software to install the OS image, developer tools, and the libraries required for developing applications on the Jetson or DRIVE platforms. You can use the Component Manager in the JetPack or the DriveInstall software to select the components to be installed on the target hardware. For installation instructions, refer to the NVIDIA board documentation. At a minimum, you must install:

  • CUDA Toolkit

  • cuDNN Library

  • TensorRT Library

  • OpenCV Library

Install the Simple DirectMedia Layer (SDL v1.2) library, V4L2 library, and V4L2 utilities for running the webcam examples.
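After running the JetPack or DriveInstall installer, a quick sanity check on the target can confirm that the required components are present and install the webcam prerequisites. The commands below are a sketch for an Ubuntu-based target image; package names can differ between releases.

# Confirm the CUDA toolkit and the deep learning and vision libraries.
# If nvcc is not found, check the PATH setting described in the next section.
nvcc --version
dpkg -l | grep -E 'libcudnn|tensorrt|libopencv'

# Install the SDL v1.2 and V4L2 packages used by the webcam examples.
sudo apt-get install libsdl1.2-dev libv4l-dev v4l-utils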

Environment Variables on the Target

GPU Coder Support Package for NVIDIA GPUs uses environment variables to locate the tools, compilers, and libraries required for code generation. Ensure that the following environment variables are set on the target.

Variable Name: PATH
Default Value: /usr/local/cuda/bin
Description: Path to the CUDA toolkit executables on the Jetson or DRIVE platform.

Ensure that the path to the CUDA toolkit executables is accessible in non-interactive SSH logins. For example, you can do this by adding the following export command at the very beginning of the $HOME/.bashrc file (for bash shells):

export PATH=$PATH:/usr/local/cuda/bin

Variable Name: LD_LIBRARY_PATH
Default Value: /usr/local/cuda/lib64
Description: Path to the CUDA library folder on the Jetson or DRIVE platform.
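For example, a $HOME/.bashrc on the target that sets both variables might begin with these lines. The paths assume the default CUDA installation location; adjust them if your toolkit is installed elsewhere.

# Place at the very top of $HOME/.bashrc so that non-interactive SSH
# logins (used by the support package) also pick up these settings.
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64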

Input Devices

A webcam connected to the USB host port of the target hardware.
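To confirm that the target detects the webcam, you can list the video capture devices on the board. This sketch assumes the V4L2 utilities described earlier are installed.

# List the V4L2 capture devices and device nodes on the target.
v4l2-ctl --list-devices
ls /dev/video*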

Development Host Requirements

This support package requires the base product, GPU Coder. GPU Coder requires the following MathWorks® and third-party products.

MathWorks Products

  • MATLAB® (required).

  • MATLAB Coder™ (required).

  • Parallel Computing Toolbox™ (required).

  • Deep Learning Toolbox™ (required for deep learning).

  • Image Processing Toolbox™ (recommended).

  • Embedded Coder® (recommended).

  • Simulink® (recommended).

Third-Party Products

  • NVIDIA GPU enabled for CUDA.

  • CUDA toolkit and driver.

  • C/C++ Compiler.

  • CUDA Deep Neural Network library (cuDNN).

  • NVIDIA TensorRT, a high-performance deep learning inference optimizer and runtime library.

For information on the version numbers for the compiler tools and libraries, see Installing Prerequisite Products (GPU Coder). For information on setting up the environment variables on the host development computer, see Setting Up the Prerequisite Products (GPU Coder).
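On a Linux development host, a quick check of the GPU, driver, CUDA toolkit, and compiler can look like the sketch below; the required version numbers themselves are listed in the GPU Coder documentation referenced above. On a Windows host, use the corresponding MSVC tools instead.

# Show the GPU and driver visible to the host.
nvidia-smi
# Show the CUDA toolkit version used for compilation.
nvcc --version
# Show the host C/C++ compiler version.
gcc --version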

Note

It is recommended that you use the same versions of the cuDNN and TensorRT libraries on the target board and the host computer.
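One way to compare the library versions, assuming Debian-based installations on both machines, is to query the installed packages on the host and on the target and check that the reported versions match. The cuDNN header location below is an assumption and can differ between releases (for example, cudnn_version.h on newer versions).

# Run on both the host and the target; the reported versions should match.
dpkg -l | grep -E 'libcudnn|tensorrt'
# Alternatively, read the version macros from the cuDNN header.
grep -E '#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' /usr/include/cudnn.h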
