Lidar Toolbox

Design, analyze, and test lidar processing systems

Lidar Toolbox™ provides algorithms, functions, and apps for designing, analyzing, and testing lidar processing systems. You can perform object detection and tracking, semantic segmentation, shape fitting, lidar registration, and obstacle detection. Lidar Toolbox supports lidar-camera cross calibration for workflows that combine computer vision and lidar processing.

You can train custom object detection and semantic segmentation models using deep learning networks such as PointSeg, PointPillars, and SqueezeSegV2. The Lidar Labeler app supports manual and semi-automated labeling of lidar point clouds for training these models. The toolbox lets you stream live data from Velodyne® lidar sensors and read data recorded by Velodyne and Ibeo lidar sensors.

Lidar Toolbox provides reference examples illustrating the use of lidar processing for perception and navigation workflows. Most toolbox algorithms support C/C++ code generation for integrating with existing code, desktop prototyping, and deployment.

Get Started:

Deep Learning for Lidar

Apply deep learning algorithms for object detection and semantic segmentation on lidar data.

Semantic segmentation using SqueezeSegV2.

Object Detection on Lidar Point Clouds

Detect and fit oriented bounding boxes around objects in lidar point clouds. Design, train, and evaluate robust detectors such as PointPillars networks.
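As a sketch, a PointPillars detector can be configured and applied along these lines. The input file, point cloud range, class name, and anchor boxes below are illustrative assumptions, not values from this page:

```matlab
% Illustrative sketch: configure and run a PointPillars detector.
% The range, class, and anchor boxes are placeholder assumptions
% for a single "Car" class.
ptCloud = pcread("input.pcd");           % point cloud to test (assumed file)

pcRange    = [0 70 -40 40 -3 1];         % [xmin xmax ymin ymax zmin zmax]
classNames = "Car";
anchorBoxes = {[1.9 4.5 1.7 -1.8 0; 1.9 4.5 1.7 -1.8 pi/2]};

detector = pointPillarsObjectDetector(pcRange, classNames, anchorBoxes);
% Training on labeled data would use trainPointPillarsObjectDetector.

% Detect oriented 3-D bounding boxes in the point cloud
[bboxes, scores, labels] = detect(detector, ptCloud);
```

An untrained detector returns no meaningful detections; in practice you would train it on labeled point clouds first, then evaluate on held-out data.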

Lidar Labeling

Apply built-in or custom algorithms to automate lidar point cloud labeling with the Lidar Labeler app, and evaluate automation algorithm performance.

Lidar Labeler App.

Lidar-Camera Calibration

Cross-calibrate lidar and camera sensors to estimate lidar-camera transforms for fusing camera and lidar data.

Lidar and Camera Calibration

Estimate the rigid transformation matrix between a lidar and a camera using the Lidar Camera Calibrator app.

Lidar Camera Calibrator App
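The same calibration can be scripted with checkerboard data. This is a rough sketch only: the folder names and square size are assumptions, and the exact function signatures vary by toolbox release:

```matlab
% Illustrative sketch of programmatic lidar-camera calibration from
% checkerboard images and matching point clouds. Paths, intrinsics,
% and the square size are placeholders; check the release-specific
% documentation for exact signatures.
imageFiles = "calibImages";        % checkerboard images (assumed folder)
pcFiles    = "calibPointClouds";   % matching point clouds (assumed folder)
squareSize = 0.081;                % checkerboard square size in meters (assumed)

% Locate the checkerboard in the images and in the point clouds
[imageCorners3d, boardDim] = ...
    estimateCheckerboardCorners3d(imageFiles, intrinsics, squareSize);
lidarPlanes = detectRectangularPlanePoints(pcFiles, boardDim);

% Estimate the rigid lidar-to-camera transform
tform = estimateLidarCameraTransform(lidarPlanes, imageCorners3d);
```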

Lidar-Camera Integration

Fuse lidar and camera data to project lidar points on images, fuse color information in lidar point clouds, and estimate 3D bounding boxes in lidar using 2D bounding boxes from a co-located camera.

Bounding box transformation from image to lidar point clouds.
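These fusion steps might look like the following sketch, assuming a calibration is already available. The file names, intrinsics, transform, and 2-D box are placeholder assumptions:

```matlab
% Illustrative sketch: project lidar points into an image and lift 2-D
% boxes to 3-D. All inputs below are placeholders.
ptCloud = pcread("scan.pcd");
im      = imread("frame.png");
intrinsics = cameraIntrinsics([1100 1100], [640 360], [720 1280]); % assumed
tform = rigidtform3d;  % lidar-to-camera transform from calibration
                       % (identity here; use rigid3d on older releases)

% Project the 3-D lidar points onto the image plane
imPts = projectLidarPointsOnImage(ptCloud, intrinsics, tform);

% Fuse color information from the camera into the point cloud
fusedCloud = fuseCameraToLidar(im, ptCloud, intrinsics, tform);

% Estimate 3-D bounding boxes from 2-D detections in the image
bboxes2d = [100 150 200 120];    % [x y w h], placeholder detection
bboxes3d = bboxCameraToLidar(bboxes2d, ptCloud, intrinsics, tform);
```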

Lidar Data Processing

Apply preprocessing to improve the quality of lidar point cloud data and extract basic information from it.

Lidar Processing Algorithms

Convert unorganized point clouds to organized point clouds. Apply functions and algorithms for ground segmentation, downsampling, median filtering, normal estimation, transforming point clouds, and extracting point cloud features.

Ground Segmentation from Lidar Point Clouds
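A minimal preprocessing sketch, assuming an organized point cloud from a recorded Velodyne PCAP file (the recording and grid size are placeholders):

```matlab
% Illustrative sketch of common preprocessing steps. The PCAP file is a
% placeholder; segmentGroundFromLidarData expects an organized cloud.
reader  = velodyneFileReader("drive.pcap", "VLP16");  % assumed recording
ptCloud = readFrame(reader, 1);                       % organized point cloud

% Separate ground returns from obstacles
groundIdx = segmentGroundFromLidarData(ptCloud);
ground    = select(ptCloud, groundIdx,  "OutputSize", "full");
nonGround = select(ptCloud, ~groundIdx, "OutputSize", "full");

% Downsample the remaining points before further processing
nonGroundDs = pcdownsample(removeInvalidPoints(nonGround), ...
    "gridAverage", 0.2);                              % 0.2 m grid (assumed)
```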

2D Lidar Processing

Estimate positions and create occupancy maps using 2D lidar scans.

2D Lidar SLAM

Implement Simultaneous Localization and Mapping (SLAM) algorithms from 2D lidar scans. Estimate positions and create binary or probabilistic occupancy grids using real or simulated sensor readings.
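A typical 2D SLAM loop can be sketched as below, assuming `scans` is a cell array of `lidarScan` objects from a real or simulated sensor; the range, resolution, and loop-closure settings are placeholder tuning values:

```matlab
% Illustrative sketch of 2-D lidar SLAM over a sequence of scans.
maxRange   = 8;     % sensor range in meters (assumed)
resolution = 20;    % map cells per meter (assumed)
slamAlg = lidarSLAM(resolution, maxRange);
slamAlg.LoopClosureThreshold    = 200;  % placeholder tuning values
slamAlg.LoopClosureSearchRadius = 8;

for i = 1:numel(scans)
    addScan(slamAlg, scans{i});   % incrementally build the pose graph
end

% Build an occupancy map from the optimized scans and poses
[optScans, optPoses] = scansAndPoses(slamAlg);
map = buildMap(optScans, optPoses, resolution, maxRange);
show(map)
```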

Streaming, Reading, and Writing Lidar Data

Read and write lidar point cloud data and stream live data from sensors.

Velodyne Lidar Sensor Acquisition

Acquire live lidar point clouds from Velodyne Lidar sensors, visualize them in MATLAB, and develop lidar sensing applications.

Getting started with lidar acquisition in MATLAB.
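Live acquisition might look like this sketch, which assumes the Lidar Toolbox Support Package for Velodyne LiDAR Sensors is installed and a VLP-16 is reachable on the network (the model name is an assumption):

```matlab
% Illustrative sketch of streaming point clouds from a Velodyne sensor.
v = velodynelidar("VLP16");            % connect to a VLP-16 (assumed model)

start(v)                               % begin streaming
[ptClouds, timestamps] = read(v, 10);  % read 10 point clouds
stop(v)

pcshow(ptClouds(1).Location)           % visualize the first frame
```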

Reading and Writing Lidar Point Cloud Data

Read lidar data in different file formats, including PCAP, LAS, Ibeo, PCD, and PLY. Write lidar data to PLY and PCD files.

Reading lidar point cloud data in LAS format.
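For example, reading a LAS file and writing the points back out as PLY takes only a few lines (the file names here are placeholders):

```matlab
% Illustrative sketch: read a LAS file and re-export the points as PLY.
lasReader = lasFileReader("terrain.las");  % aerial lidar in LAS format
ptCloud   = readPointCloud(lasReader);     % returns a pointCloud object

pcshow(ptCloud.Location)                   % quick visual check
pcwrite(ptCloud, "terrain.ply")            % write to PLY
```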

Feature Extraction and Registration

Register lidar point clouds and build 3D maps using Simultaneous Localization and Mapping (SLAM).

Feature Extraction from Lidar Point Clouds

Extract fast point feature histogram (FPFH) descriptors from lidar point clouds.

Extracting and matching features from lidar point clouds.
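Extraction and matching can be sketched as follows, with placeholder input files and an assumed downsampling grid size:

```matlab
% Illustrative sketch: extract and match FPFH descriptors between two
% point clouds. Input files and grid sizes are placeholders.
fixed  = pcread("scan1.pcd");
moving = pcread("scan2.pcd");

% Downsampling keeps the descriptor count manageable
fixedDs  = pcdownsample(fixed,  "gridAverage", 0.3);
movingDs = pcdownsample(moving, "gridAverage", 0.3);

% One 33-dimensional FPFH descriptor per point
fixedFeat  = extractFPFHFeatures(fixedDs);
movingFeat = extractFPFHFeatures(movingDs);

% Match descriptors to get candidate correspondences for registration
matchedPairs = pcmatchfeatures(movingFeat, fixedFeat, movingDs, fixedDs);
```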

Lidar Point Cloud Registration

Implement 3D SLAM algorithms by stitching together lidar point cloud sequences from ground and aerial lidar data.

Map building from a lidar point cloud sequence.
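A map-building loop of this kind can be sketched with NDT registration; here `frames` is a placeholder cell array of `pointCloud` objects, and the grid and voxel sizes are assumed tuning values:

```matlab
% Illustrative sketch: stitch a point cloud sequence into a map by
% registering consecutive frames and accumulating the pose.
map   = frames{1};
tform = rigidtform3d;          % accumulated pose (rigid3d on older releases)

for i = 2:numel(frames)
    moving = pcdownsample(frames{i},   "gridAverage", 0.3);
    fixedD = pcdownsample(frames{i-1}, "gridAverage", 0.3);

    % Register the current frame to the previous one
    stepTform = pcregisterndt(moving, fixedD, 1.0);  % 1 m voxel size
    tform = rigidtform3d(tform.A * stepTform.A);     % compose poses

    % Transform into the map frame and merge
    aligned = pctransform(frames{i}, tform);
    map = pcmerge(map, aligned, 0.1);                % 0.1 m merge grid
end
pcshow(map)
```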