Supported Networks, Layers, Boards, and Tools
Supported Pretrained Networks
Deep Learning HDL Toolbox™ supports code generation for series convolutional neural networks (CNNs or ConvNets). You can generate code for any trained CNN whose computational layers are supported for code generation. For a full list, see Supported Layers. You can use one of the pretrained networks listed in the table to generate code for your target Intel® or Xilinx® FPGA boards.
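As an illustration, here is a minimal end-to-end deployment sketch for one of these networks. It assumes the Deep Learning Toolbox Model for ResNet-18 Network support package is installed, a ZCU102 board is reachable at the example address, and inputImg is a preprocessed 224-by-224-by-3 image; adapt the bitstream and interface to your board.

```matlab
% Load a supported pretrained network.
net = resnet18;

% Connect to the target board; the interface and address are examples.
hTarget = dlhdl.Target('Xilinx', Interface='Ethernet', IPAddress='192.168.1.101');

% Pair the network with a shipping bitstream for the board.
hW = dlhdl.Workflow(Network=net, Bitstream='zcu102_single', Target=hTarget);

% Compile the network, program the FPGA, and run a profiled prediction.
hW.compile;
hW.deploy;
[prediction, speed] = hW.predict(inputImg, Profile='on');
```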
The Single and INT8 columns indicate data type support with the shipping bitstream for each board (ZCU102, ZC706, Arria10 SoC). Footnotes below the table explain each qualified No entry.

| Network | Network Description | Type | Single: ZCU102 | Single: ZC706 | Single: Arria10 SoC | INT8: ZCU102 | INT8: ZC706 | INT8: Arria10 SoC | Application Area |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AlexNet | AlexNet convolutional neural network. | Series network | No¹ | No¹ | No¹ | No¹ | No¹ | No¹ | Classification |
| LogoNet | Logo recognition network (LogoNet) is a MATLAB®-developed logo identification network. For more information, see Logo Recognition Network. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Classification |
| DigitsNet | Digit classification network. See Create Simple Deep Learning Neural Network for Classification. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Classification |
| Lane detection | LaneNet convolutional neural network. For more information, see Deploy Transfer Learning Network for Lane Detection. | Series network | No¹ | No¹ | No¹ | No¹ | No¹ | No¹ | Classification |
| VGG-16 | VGG-16 convolutional neural network. For the pretrained VGG-16 model, see vgg16. | Series network | No² | No³ | Yes | Yes | No³ | Yes | Classification |
| VGG-19 | VGG-19 convolutional neural network. For the pretrained VGG-19 model, see vgg19. | Series network | No² | No³ | Yes | Yes | No³ | Yes | Classification |
| Darknet-19 | Darknet-19 convolutional neural network. For the pretrained Darknet-19 model, see darknet19. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Classification |
| Radar Classification | Convolutional neural network that uses micro-Doppler signatures to identify and classify the object. For more information, see Bicyclist and Pedestrian Classification by Using FPGA. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Classification and Software Defined Radio (SDR) |
| Defect Detection snet_defnet | snet_defnet is a custom AlexNet network used to identify and classify defects. For more information, see Defect Detection. | Series network | No¹ | No¹ | No¹ | No¹ | No¹ | No¹ | Classification |
| Defect Detection snet_blemdetnet | snet_blemdetnet is a custom convolutional neural network used to identify and classify defects. For more information, see Defect Detection. | Series network | No¹ | No¹ | No¹ | No¹ | No¹ | No¹ | Classification |
| DarkNet-53 | DarkNet-53 convolutional neural network. For the pretrained DarkNet-53 model, see darknet53. | Directed acyclic graph (DAG) network | Yes | Yes | Yes | Yes | Yes | No | Classification |
| ResNet-18 | ResNet-18 convolutional neural network. For the pretrained ResNet-18 model, see resnet18. | DAG network | Yes | Yes | Yes | Yes | Yes | Yes | Classification |
| ResNet-50 | ResNet-50 convolutional neural network. For the pretrained ResNet-50 model, see resnet50. | DAG network | No² | No² | Yes | Yes | Yes | Yes | Classification |
| ResNet-based YOLO v2 | You only look once (YOLO) is an object detector that decodes the predictions from a convolutional neural network and generates bounding boxes around the objects. For more information, see Vehicle Detection Using ResNet-18 Based YOLO v2 Deployed to FPGA. | DAG network | Yes | Yes | Yes | Yes | Yes | Yes | Object detection |
| MobileNetV2 | MobileNet-v2 convolutional neural network. For the pretrained MobileNet-v2 model, see mobilenetv2. | DAG network | Yes | Yes | Yes | Yes | Yes | Yes | Classification |
| GoogLeNet | GoogLeNet convolutional neural network. For the pretrained GoogLeNet model, see googlenet. | DAG network | No¹ | No¹ | No¹ | No¹ | No¹ | No¹ | Classification |
| PoseNet | Human pose estimation network. | DAG network | Yes | Yes | Yes | Yes | Yes | Yes | Segmentation |
| U-Net | U-Net convolutional neural network designed for semantic image segmentation. | DAG network | No² | No² | No² | No² | No² | Yes | Segmentation |
| SqueezeNet-based YOLO v3 | The you-only-look-once (YOLO) v3 object detector is a multi-scale object detection network that uses a feature extraction network and multiple detection heads to make predictions at multiple scales. | dlnetwork object | Yes | Yes | No | No | No | No | Object detection |
| Sequence-to-sequence classification | Classify each time step of sequence data by using a long short-term memory (LSTM) network. See Run Sequence-to-Sequence Classification on FPGAs by Using Deep Learning HDL Toolbox. | Long short-term memory (LSTM) network | Yes | Yes | Yes | No | No | No | Sequence data classification |
| Time series forecasting | Forecast time series data by using an LSTM network. See Run Sequence Forecasting on FPGA by Using Deep Learning HDL Toolbox. | LSTM network | Yes | Yes | Yes | No | No | No | Forecast time series data |
| Word-by-word text generation | Generate text word by word by using an LSTM network. See Generate Word-By-Word Text on FPGAs by Using Deep Learning HDL Toolbox. | LSTM network | Yes | Yes | Yes | No | No | No | Sequence data prediction |
| YAMNet | Pretrained audio classification network. See yamnet (Audio Toolbox) and Deploy YAMNet Networks to FPGAs with and without Cross-Layer Equalization. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Audio data classification |
| Semantic segmentation using dilated convolutions | Semantic segmentation network that uses dilated convolution layers to increase coverage area without increasing the number of computational parameters. See Deploy Semantic Segmentation Network Using Dilated Convolutions on FPGA. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Segmentation |
| Time series forecasting (GRU) | Forecast time series data by using a network with a gated recurrent unit (GRU) layer. See Run Sequence Forecasting Using a GRU Layer on an FPGA. | Gated recurrent unit (GRU) layer network | Yes | Yes | Yes | No | No | No | Forecast time series data |
| Pruned image classification network | Pruned image classification network. See Deploy Image Recognition Network on FPGA with and Without Pruning. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Image classification |
| Very-deep super-resolution (VDSR) network | Create high-resolution images from low-resolution images by using VDSR networks. See Increase Image Resolution Using VDSR Network Running on FPGA. | Series network | Yes | Yes | Yes | Yes | Yes | Yes | Image processing |
| YOLO v4 tiny | The you only look once version 4 (YOLO v4) tiny object detection network is a one-stage object detection network composed of three parts: backbone, neck, and head. See Detect Objects Using YOLOv4-tiny Network Deployed to FPGA. | dlnetwork object | Yes | Yes | Yes | Yes | Yes | Yes | Object detection |

¹ To use the shipping bitstream with this network, enable the LRNBlockGeneration property of the processor configuration for the bitstream and generate the bitstream again.
² The network exceeds the PL DDR memory size.
³ The network exceeds the FC module memory size.
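For networks marked No¹, you can rebuild the bitstream with LRN support. A minimal sketch of that rebuild, assuming the ZCU102 single-data-type configuration (regenerating a bitstream requires a supported synthesis tool and can take several hours):

```matlab
% Load the processor configuration that the shipping bitstream uses.
hPC = dlhdl.ProcessorConfig('Bitstream', 'zcu102_single');

% Enable LRN block generation in the conv module of the processor.
hPC.setModuleProperty('conv', 'LRNBlockGeneration', 'on');

% Regenerate the bitstream with the updated configuration.
dlhdl.buildProcessor(hPC);
```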
Supported Layers
Deep Learning HDL Toolbox supports the layers listed in these tables.
Input Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- |
| imageInputLayer | SW | An image input layer inputs 2-D images to a network and applies data normalization. For hardware support of the normalization options, see Image Input Layer Normalization Hardware Implementation. | Yes. Runs as single data type in SW. |
| featureInputLayer | SW | A feature input layer inputs feature data to a network and applies data normalization. | Yes |
| sequenceInputLayer | SW | A sequence input layer inputs sequence data to a network. | Yes |
| wordEmbeddingLayer (Text Analytics Toolbox) | SW | A word embedding layer maps word indices to vectors. | No |
Convolution and Fully Connected Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| convolution2dLayer | HW | Convolution (Conv) | A 2-D convolutional layer applies sliding convolutional filters to the input. When generating code for a network that uses this layer, some limitations apply. | Yes |
| groupedConvolution2dLayer | HW | Convolution (Conv) | A 2-D grouped convolutional layer separates the input channels into groups and applies sliding convolutional filters. Use grouped convolutional layers for channel-wise separable (also known as depth-wise separable) convolution. Code generation is supported for a 2-D grouped convolution layer that has the NumGroups property set to 'channel-wise'. When generating code for a network that uses this layer, some limitations apply. | Yes |
| convolution1dLayer | HW | Inherit from input | A 1-D convolutional layer applies sliding convolutional filters to 1-D input. The layer convolves the input by moving the filters along the input, computing the dot product of the weights and the input, and then adding a bias term. When generating code for a network that uses this layer, some limitations apply. | Yes |
| transposedConv2dLayer | HW | Convolution (Conv) | A transposed 2-D convolution layer upsamples feature maps. When generating code for a network that uses this layer, some limitations apply. | Yes |
| fullyConnectedLayer | HW | Fully Connected (FC) | A fully connected layer multiplies the input by a weight matrix and then adds a bias vector. When generating code for a network that uses this layer, some limitations apply. | Yes |
Activation Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| reluLayer | HW | Layer is fused. | A ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero. A ReLU layer is supported only when it is preceded by certain layers. | Yes |
| leakyReluLayer | HW | Layer is fused. | A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar. A leaky ReLU layer is supported only when it is preceded by certain layers. | Yes |
| clippedReluLayer | HW | Layer is fused. | A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to the clipping ceiling value. A clipped ReLU layer is supported only when it is preceded by certain layers. | Yes |
| tanhLayer | HW | Inherit from input | A hyperbolic tangent (tanh) activation layer applies the tanh function to the layer inputs. | Yes. Runs as single data type in HW. |
| swishLayer | HW | Inherit from input | A swish layer applies the swish activation function to the layer inputs. | No |
| dlhdl.layer.mishLayer | HW | Inherit from input | A mish layer applies the mish activation function to the layer inputs. | No |
Normalization, Dropout, and Cropping Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| batchNormalizationLayer | HW | Layer is fused. | A batch normalization layer normalizes each input channel across a mini-batch. A batch normalization layer is supported when it is preceded by an image input layer or a convolution layer, or when it is a standalone layer. | Yes |
| crossChannelNormalizationLayer | HW | Convolution (Conv) | A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization. | Yes. Runs as single data type in HW. |
| dropoutLayer | NoOP on inference | NoOP on inference | A dropout layer randomly sets input elements to zero with a given probability. | Yes |
| resize2dLayer | HW | Inherit from input | A 2-D resize layer resizes 2-D input by a scale factor, to a specified height and width, or to the size of a reference input feature map. When generating code for a network that uses this layer, some limitations apply. | Yes |
| crop2dLayer | HW | Inherit from input | A 2-D crop layer applies 2-D cropping to the input. When generating code for a network that uses this layer, some limitations apply. | Yes |
| dlhdl.layer.reshapeLayer | HW | Inherit from input | A reshape layer changes the shape of the layer activation data. | Yes |
| dlhdl.layer.sliceLayer | HW | Inherit from input | A slice layer divides the layer input into an equal number of groups along the channel dimension of the image. When generating code for a network that uses this layer, some limitations apply. | Yes |
| upsample2DLayer | HW | Inherit from input | During compiler optimization, Deep Learning HDL Toolbox replaces this layer with a 2-D resize layer. | Yes |
Pooling and Unpooling Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| maxPooling2dLayer | HW | Convolution (Conv) | A max pooling layer performs downsampling by dividing the layer input into rectangular pooling regions and computing the maximum of each region. When generating code for a network that uses this layer, some limitations apply. | Yes. No when the HasUnpoolingOutputs property is enabled. |
| maxUnpooling2dLayer | HW | Convolution (Conv) | A max unpooling layer unpools the output of a max pooling layer. | No |
| averagePooling2dLayer | HW | Convolution (Conv) | An average pooling layer performs downsampling by dividing the layer input into rectangular pooling regions and computing the average value of each region. When generating code for a network that uses this layer, some limitations apply. | Yes |
| globalAveragePooling2dLayer | HW | Convolution (Conv) | A global average pooling layer performs downsampling by computing the mean over the height and width dimensions of the input. When generating code for a network that uses this layer, some limitations apply. | Yes |
Combination Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| additionLayer | HW | Inherit from input. | An addition layer adds inputs from multiple neural network layers element-wise. You can generate code for this layer with the int8 data type. When generating code for a network that uses this layer, some limitations apply. | Yes |
| depthConcatenationLayer | HW | Inherit from input. | A depth concatenation layer takes inputs that have the same height and width and concatenates them along the third (channel) dimension. When generating code for a network that uses this layer, some limitations apply. | Yes |
| multiplicationLayer | HW | Inherit from input | A multiplication layer multiplies inputs from multiple neural network layers element-wise. | Yes |
Sequence Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- |
| lstmLayer | HW | An LSTM layer learns long-term dependencies between time steps in time-series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training. When generating code for a network that uses this layer, some limitations apply. | No |
| gruLayer | HW | A GRU layer is an RNN layer that learns dependencies between time steps in time-series and sequence data. When generating code for a network that uses this layer, some limitations apply. | No |
| lstmProjectedLayer | HW | A projected LSTM layer is a deep learning layer that enables compression by reducing the number of stored learnable parameters. When generating code for a network that uses this layer, some limitations apply. | No |
| gruProjectedLayer | HW | A projected GRU layer is a deep learning layer that enables compression by reducing the number of stored learnable parameters. When generating code for a network that uses this layer, some limitations apply. | No |
Output Layer
| Layer | Layer Type: Hardware (HW) or Software (SW) | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- |
| softmaxLayer | SW and HW | A softmax layer applies a softmax function to the input. When the softmax layer is implemented in hardware, some limitations apply. | Yes. Runs as single data type in SW. |
| classificationLayer | SW | A classification layer computes the cross-entropy loss for multiclass classification problems that have mutually exclusive classes. | Yes |
| regressionLayer | SW | A regression layer computes the half-mean-squared-error loss for regression problems. | Yes |
| sigmoidLayer | HW | A sigmoid layer applies a sigmoid function to the input. The sigmoid layer is implemented in the custom module of the deep learning processor configuration and runs as single data type in HW. | Yes. Runs as single data type in HW. |
Keras and ONNX Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| nnet.keras.layer.FlattenCStyleLayer | HW | Layer will be fused. | Flattens activations into 1-D, assuming C-style (row-major) order. This layer is supported only when it is followed by a fully connected layer. | Yes |
| nnet.keras.layer.ZeroPadding2dLayer | HW | Layer will be fused. | Zero-padding layer for 2-D input. This layer is supported only when it is followed by a convolution layer or a max pooling layer. | Yes |
| nnet.onnx.layer.FlattenInto2dLayer | HW | Layer will be fused. | Flattens a MATLAB 2-D image batch in the way ONNX does, producing a 2-D output array. This layer is supported only when it is followed by a fully connected layer. | Yes |
| nnet.onnx.layer.FlattenLayer | HW | Layer will be fused. | Flatten layer for an ONNX™ network. This layer is supported only when it is followed by a fully connected layer. | Yes |
| flattenLayer | HW/SW | Layer will be fused. | A flatten layer collapses the spatial dimensions of the input into the channel dimension. A flatten layer should be followed by a fully connected layer. Starting in R2024b, you can use a flatten layer as the last layer in a network when you implement it as a SW layer. | Yes |
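These nnet.keras.* and nnet.onnx.* classes typically appear after you import a model. A brief sketch, assuming the Deep Learning Toolbox Converter for ONNX Model Format support package is installed and model.onnx is a placeholder file name:

```matlab
% Import an ONNX model. Flatten and padding layers in the source model
% surface as nnet.onnx.layer.* classes in the imported network.
net = importONNXNetwork('model.onnx');
analyzeNetwork(net)  % inspect which imported layer classes are present
```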
Custom Layers
| Layer | Layer Type: Hardware (HW) or Software (SW) | Layer Output Format | Description and Limitations | INT8 Compatible |
| --- | --- | --- | --- | --- |
| Custom layers | HW | Inherit from input | Custom layers, with or without learnable parameters, that you define for your problem. To learn how to define your custom deep learning layers, see Create Deep Learning Processor Configuration for Custom Layers. | No |
Supported Boards
These boards are supported by Deep Learning HDL Toolbox:

- Xilinx Zynq®-7000 ZC706
- Intel Arria® 10 SoC
- Xilinx Zynq UltraScale+™ MPSoC ZCU102
- Custom boards. For more information, see Deep Learning Processor IP Core Generation for Custom Board.
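To target one of these boards in MATLAB, create a dlhdl.Target object with the board vendor and connection interface. A short sketch (the Ethernet address is an example placeholder):

```matlab
% Xilinx board reached over Ethernet (address is an example).
hTargetEth = dlhdl.Target('Xilinx', Interface='Ethernet', IPAddress='192.168.1.101');

% Intel board reached over JTAG.
hTargetJtag = dlhdl.Target('Intel', Interface='JTAG');
```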
Third-Party Synthesis Tools and Version Support
Deep Learning HDL Toolbox has been tested with:

- Xilinx Vivado® Design Suite 2023.1
- Intel Quartus® Prime Standard 22.1.1
Image Input Layer Normalization Hardware Implementation
To enable hardware implementation of the normalization functions for the image input layer, set the HardwareNormalization argument of the compile method to auto or on. When HardwareNormalization is set to auto, the compile method looks for the presence of addition and multiplication layers to implement the normalization function on hardware. The normalization is implemented on hardware by:

- Creating a new constant layer. This layer holds the value that is to be subtracted.
- Using existing addition and multiplication layers. Which layers are used depends on the normalization function being implemented.
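For example, a minimal call, assuming hW is a dlhdl.Workflow object whose network starts with an image input layer:

```matlab
% Let the compiler decide whether the input-layer normalization can be
% implemented with hardware addition and multiplication layers.
dn = hW.compile(HardwareNormalization='auto');
```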
Constant Layer Buffer Content
This table describes the value stored in the constant layer buffer.
| Normalization Function | Number of Constants | Constant Layer Buffer Value |
| --- | --- | --- |
| zerocenter | 1 | -Mean |
| zscore | 2 | The first constant value is -Mean. The second constant value is 1/StandardDeviation. |
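As an illustration of how the table maps onto layers, a zscore image input layer can be declared as below (meanImg and stdImg are placeholder statistics you compute from your training data); per the table, the compiler stores -Mean in one constant and 1/StandardDeviation in the other:

```matlab
% Image input layer with z-score normalization. On hardware, the compiler
% realizes x_norm = (x - Mean) .* (1/StandardDeviation) with a constant
% layer holding -Mean, an addition layer, and a multiplication layer.
inputLayer = imageInputLayer([224 224 3], ...
    Normalization='zscore', ...
    Mean=meanImg, ...               % placeholder mean statistics
    StandardDeviation=stdImg);      % placeholder standard deviation
```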