Design and Deploy Workflow

Computer Vision Toolbox™ Support Package for Xilinx® Zynq®-Based Hardware provides progressive features depending on which products you have installed. Each design goal in this workflow builds on the previous one, so to achieve a given goal you must also have the products required for the preceding goals.

At a minimum, you must have Simulink® and Computer Vision Toolbox.
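To confirm that these products are installed, you can query them from the MATLAB command line; a minimal sketch, using the standard short names that ver accepts for each product:

    % Check for the two minimum required products. ver returns an empty
    % array when a product with the given short name is not installed.
    assert(~isempty(ver('simulink')), 'Simulink is required.')
    assert(~isempty(ver('vision')),   'Computer Vision Toolbox is required.')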

For a complete workflow example, see Developing Vision Algorithms for Zynq-Based Hardware.

Live Video Capture

Using this support package, you can capture live video from your Zynq device and import it into Simulink. The video source can be the HDMI input on a camera board, or an on-chip test pattern generator included with the reference design. You can select the color space and resolution of the input frames. The capture resolution must match that of your input camera.

The hardware data path runs at full frame rate from the HDMI input to the HDMI output. The Simulink capture port works at a slower, best-effort rate: the model captures and processes one frame before requesting the next frame from the board. When the model includes minimal image processing logic, the capture throughput for 1080p60 YCbCr 4:2:2 video is typically about 20 MB/s, or roughly 5 frames per second.
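That throughput follows directly from the frame size, since YCbCr 4:2:2 encodes an average of 2 bytes per pixel; a quick check of the arithmetic in MATLAB:

    bytesPerPixel = 2;                          % YCbCr 4:2:2: 2 bytes/pixel
    frameBytes = 1920*1080*bytesPerPixel;       % about 4.15 MB per frame
    framesPerSecond = 5;                        % best-effort capture rate
    throughput = frameBytes*framesPerSecond/1e6 % about 20.7 MB/s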

For an example of live video capture and display in a Simulink model, see Getting Started with Vision Zynq Hardware.

Frame-Based Design

Once you have video frames in Simulink, design frame-based video processing algorithms that operate on the live data input. Use blocks from the Computer Vision Toolbox libraries to develop frame-based, floating-point algorithms. After you are satisfied with the results of your design, in preparation for hardware targeting, convert the algorithm to use fixed-point data types and pixel-streaming video.
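As an illustrative sketch only (the file name is hypothetical and does not come from the support package, and this assumes Image Processing Toolbox is available), a frame-based, floating-point prototype in MATLAB operates on a whole frame at once:

    % Frame-based prototype: read one frame, run floating-point Sobel
    % edge detection on the entire image, and display the result.
    frameIn = im2double(imread('inputFrame.png'));  % hypothetical frame file
    grayIn  = rgb2gray(frameIn);
    edges   = edge(grayIn, 'sobel');                % frame-at-once processing
    imshow(edges)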

To get started with frame-based design, use the template described in Design Frame-Based Algorithms.

Pixel-Streaming Design

When you first develop your vision algorithm, using frame-based video data enables you to develop and debug your algorithm faster than using pixel-streaming data. You can verify your algorithm quickly without the constraints of hardware. However, due to resource constraints, hardware video processing designs operate on pixel-streaming data. Use blocks from the Vision HDL Toolbox™ libraries to build a pixel-streaming algorithm that you can target to the FPGA user logic section of the reference design. These blocks have a standard interface that includes streaming pixel data and a control signal bus. Vision HDL Toolbox also provides blocks to convert between framed and streaming video data.
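The MATLAB System object equivalents of those conversion blocks show the pattern. This sketch follows the standard Vision HDL Toolbox usage, serializing a frame into a pixel stream, filtering it one pixel per step, and reassembling the output frame; the input image and video format are chosen only for illustration:

    frameIn = imresize(imread('rice.png'), [240 320]); % sample image, sized to 240p
    frm2pix = visionhdl.FrameToPixels('NumComponents',1,'VideoFormat','240p');
    pix2frm = visionhdl.PixelsToFrame('NumComponents',1,'VideoFormat','240p');
    sobel   = visionhdl.EdgeDetector;                  % pixel-streaming operator

    [pixInVec,ctrlInVec] = frm2pix(frameIn);           % frame -> pixel stream
    pixOutVec  = false(size(pixInVec));
    ctrlOutVec = ctrlInVec;
    for p = 1:numel(pixInVec)                          % one pixel per iteration
        [pixOutVec(p),ctrlOutVec(p)] = sobel(pixInVec(p),ctrlInVec(p));
    end
    [frameOut,validOut] = pix2frm(pixOutVec,ctrlOutVec); % stream -> frame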

Your pixel-streaming design can include an interface to external memory for a frame buffer or random read and write access using AXI Master.

To get started with pixel-streaming design, use the template described in Design Pixel-Streaming Algorithms for Hardware Targeting.

FPGA Targeting

Once you have a pixel-streaming model that meets your requirements, you can generate HDL code for the design and prototype it on the Zynq device. Running all or part of your design on the board speeds up simulation of your video processing system and lets you verify its behavior on real hardware. The targeted design must not modify the frame size or color format of the video stream, because the reference design expects output data in the same format as the input data. The targeting step maps the external memory interface model to the physical memory interface on the board.
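The complete Zynq targeting flow runs through HDL Workflow Advisor, which handles the reference-design integration; as a sketch of the underlying code generation step only, with hypothetical model and subsystem names:

    % Configure the model for HDL code generation, then generate HDL for
    % the pixel-streaming subsystem. Both names here are hypothetical;
    % targeting the Zynq reference design itself is done in HDL Workflow
    % Advisor rather than from the command line.
    hdlsetup('visionZynqModel')
    makehdl('visionZynqModel/PixelAlgorithm')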

After FPGA targeting, you can capture the live output frames from the FPGA user logic back to Simulink for further processing and analysis. You can also view the output on an HDMI device connected to your board. Using the generated hardware interface model, you can control the video capture options and read and write AXI-Lite ports on the FPGA user logic from Simulink during simulation.

This step requires HDL Coder™ and HDL Coder Support Package for Xilinx Zynq Platform, as well as Xilinx Vivado®.

See Target an FPGA on Zynq Hardware and Models Generated from FPGA Targeting.

ARM Processor Targeting

You can create a model for software targeting using the default FPGA design loaded at setup, or you can customize the FPGA logic and modify the generated software interface model. In either case, use the Video Capture (software interface) block to route the video from the FPGA into the ARM® processor, or to control the data path in the FPGA. You can target software algorithms to the Zynq hardware in several ways, including external mode, processor-in-the-loop, and full deployment.

The software interface model is generated in HDL Workflow Advisor after you load custom logic to the FPGA. It provides data path control and an interface to any AXI-Lite ports you defined on your FPGA-targeted subsystem. You can generate ARM code from this model that drives or responds to the AXI-Lite ports on the FPGA user logic, and deploy this code on the board to run along with the FPGA user logic.
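As a sketch of this final step (the model name is hypothetical; HDL Workflow Advisor generates the actual software interface model for you), you can build the generated model from the MATLAB command line once Embedded Coder is installed:

    % Open the generated software interface model and build it, which
    % generates ARM code and compiles it for deployment to the board.
    swModel = 'vision_zynq_sw_interface';   % hypothetical generated model name
    open_system(swModel)
    slbuild(swModel)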

This step requires Embedded Coder® and Embedded Coder Support Package for Xilinx Zynq Platform.

See Target an ARM Processor on Zynq Hardware and Models Generated from FPGA Targeting.
