
lidarObjectDetectorTrainingData

Create training data for lidar object detection

Since R2022a

    Description

    trainingData = lidarObjectDetectorTrainingData(gTruth) creates a table of training data from the specified ground truth label data. Use this training data to train the deep learning networks in Lidar Toolbox™ for lidar object detection.

    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth) creates a file datastore and a box label datastore of training data from the specified ground truth label data. To create a datastore for training the network, combine the file datastore and the box label datastore by using combine(ptds,blds). Use the combined datastore to train the deep learning networks in Lidar Toolbox for lidar object detection.


    ___ = lidarObjectDetectorTrainingData(gTruth,Name=Value) uses additional options specified by one or more name-value arguments.
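
    For instance, a minimal sketch of the datastore workflow described above (the variable names are illustrative):

    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth);
    cds = combine(ptds,blds);   % combined datastore to pass to a training function such as trainPointPillarsObjectDetector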

    Examples


    This example shows how to generate training data to train a deep learning network for point cloud object detection.

    Step 1: Create Ground Truth from Data Source

    Specify the name of the file containing the point cloud data. The input file is a Velodyne® packet capture (PCAP) file.

    sourceName = fullfile(toolboxdir("vision"),"visiondata",...
        "lidarData_ConstructionRoad.pcap");

    Specify the parameters for loading the point cloud sequence from the data source.

    sourceParams = struct();
    sourceParams.DeviceModel = "HDL32E";
    sourceParams.CalibrationFile = fullfile(matlabroot,"toolbox","shared",...
        "pointclouds","utilities","velodyneFileReaderConfiguration",...
        "HDL32E.xml");

    Load the point cloud data from the specified source file by using a vision.labeler.loading.VelodyneLidarSource object.

    dataSource = vision.labeler.loading.VelodyneLidarSource();
    dataSource.loadSource(sourceName,sourceParams);

    Define class labels to specify the names of the objects in the input point cloud.

    ldc = labelDefinitionCreatorLidar();
    addLabel(ldc,"Car","Cuboid");
    labelDefs = ldc.create();

    Define bounding boxes to specify the location of each object in the point cloud sequence at each timestamp. Store the bounding boxes and timestamps in a timetable.

    numPCFrames = numel(dataSource.Timestamp{1});
    carData = cell(numPCFrames,1);
    carData{1} = [1.0223 13.2884 1.1456 8.3114 3.8382 3.1460 0 0 0];
    lidarData = timetable(dataSource.Timestamp{1},carData,...
        VariableNames="Car");

    Create a ground truth object.

    gTruth = groundTruthLidar(dataSource,labelDefs,lidarData);

    Step 2: Generate Training Data

    Create point cloud and box label datastores from the labeled ground truth by using the lidarObjectDetectorTrainingData function.

    [pcds,bxds] = lidarObjectDetectorTrainingData(gTruth);
    Write point cloud extracted for training to folder: 
        /tmp/Bdoc24b_2725827_1669037/tpa4cc45e3/lidar-ex45787688
    
    Writing 1 point clouds extracted from dataSource1...Completed.
    

    Generate training data by combining the point cloud and box label datastores.

    trainingData = combine(pcds,bxds);

    Step 3: Configure Object Detector

    Specify the class names, anchor boxes, point cloud range, and the voxel size. Configure the PointPillars object detector for training and inference.

    classNames = "Car";
    anchorBoxes = {[1.9,4.5,1.7,-1.78,0; 1.9,4.5,1.7,-1.78,1.57]};
    pcRange = [0,69.12,-39.68,39.68,-5,5];
    voxSize = [0.16,0.16];
    detector = pointPillarsObjectDetector(pcRange,classNames,anchorBoxes,...
        VoxelSize=voxSize);

    Step 4: Train Object Detector

    Specify training options.

    options = trainingOptions("adam",...
        Plots="none",...
        MaxEpochs=2,...
        MiniBatchSize=1,...
        GradientDecayFactor=0.9,...
        SquaredGradientDecayFactor=0.999,...
        InitialLearnRate=0.0002,...
        LearnRateDropPeriod=15,...
        LearnRateDropFactor=0.8,...
        ExecutionEnvironment="cpu",...
        DispatchInBackground=false,...
        BatchNormalizationStatistics="moving",...
        ResetInputNormalization=false);

    Train the PointPillars object detector to detect classes specified in the input training data. You can use the trained detector to detect objects in a test point cloud by using the detect function.

    [detector,info] = trainPointPillarsObjectDetector(trainingData,detector,options);
    *************************************************************************
    Processing data in minibatchqueue....
    
    *************************************************************************
    Data processing complete.
    
    *************************************************************************
    Training a PointPillars Object Detector for the following object classes:
    
    * Car
    
     
        Epoch    Iteration    TimeElapsed    LearnRate    TrainingLoss
        _____    _________    ___________    _________    ____________
    
    *************************************************************************
    Detector training complete.
    *************************************************************************
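
    As a hedged sketch, you can then run inference with the trained detector (the test file name here is hypothetical):

    ptCloud = pcread("testPointCloud.pcd");             % hypothetical test point cloud
    [bboxes,scores,labels] = detect(detector,ptCloud);  % cuboid detections, confidence scores, and class labels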
    

    Input Arguments


    Lidar ground truth label data, specified as a groundTruthLidar object or an array of groundTruthLidar objects. To create a ground truth object from existing ground truth data, use the groundTruthLidar function. You can also use the Lidar Labeler app to label a point cloud and generate ground truth data.

    Note

    The lidarObjectDetectorTrainingData function imports only ground truth data with cuboid ROI labels. The function ignores ground truth data with any other label type.

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

    Example: trainingData = lidarObjectDetectorTrainingData(gTruth,PointCloudFormat='ply') writes the extracted point clouds in the PLY format.

    Factor for subsampling point clouds in the ground truth data source, specified as one of these values:

    • "auto" — Use this value when the input is a groundTruthLidar object or an array of groundTruthLidar objects. The function samples data sources with timestamps, such as a point cloud sequence, with a factor of 5. This is the default value.

    • Positive integer — Use this value when the input is a groundTruthLidar object. The function applies the same sampling factor to all the point cloud samples in the data source.

    • Vector of positive integers — Use this value when the input is an array of groundTruthLidar objects. The function applies the kth element of the vector as the sampling factor for the data sources in the kth ground truth object in the array.

    For a sampling factor of N, the returned training data includes every Nth point cloud sample in the ground truth data source. The function ignores ground truth samples with empty label data.

    Use sampled data to reduce repeated data, such as a sequence of point clouds of the same scene with the same labels. Sampling can also reduce training time.

    Note

    For a sequence of point clouds, set the sampling factor to 1 to create training data with all the point clouds in the input sequence.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
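
    For example, a minimal sketch that keeps every point cloud of a sequence, assuming the sampling factor is set through a SamplingFactor name-value argument:

    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth,SamplingFactor=1);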

    Folder name to write extracted point cloud samples to, specified as a string scalar or character vector. The specified folder must exist and have write permissions.

    Use this name-value argument only if the data source in the groundTruthLidar object is a VelodyneLidarSource, LasFileSequenceSource, CustomPointCloudSource, or RosbagSource object. You can determine the type of the data source from the DataSource property of the groundTruthLidar object. For other data sources, the lidarObjectDetectorTrainingData function ignores this argument.

    Data Types: char | string
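
    For example, a minimal sketch that writes the extracted point clouds to a temporary folder, assuming the folder is set through a WriteLocation name-value argument (the folder name is illustrative):

    outDir = fullfile(tempdir,"lidarTrainingData");   % illustrative folder name
    if ~isfolder(outDir)
        mkdir(outDir);                                % the folder must exist and be writable
    end
    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth,WriteLocation=outDir);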

    Point cloud file format, specified as a character vector. The file format must be supported by the pcwrite function. By default, the function writes the point clouds in the PCD format.

    Use this name-value argument only if the data source in the groundTruthLidar object is a VelodyneLidarSource, LasFileSequenceSource, CustomPointCloudSource, or RosbagSource object. You can determine the type of the data source from the DataSource property of the groundTruthLidar object. For other data sources, the lidarObjectDetectorTrainingData function ignores this argument.

    Data Types: char
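
    For example, a minimal sketch that writes the extracted point clouds in the PLY format and inspects the result (the inspection step is illustrative):

    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth,PointCloudFormat="ply");
    ptds.Files(1)   % extracted file names now end in .ply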

    Prefix for output point cloud file names, specified as a string scalar or character vector. The point cloud files are named as:

    <source_name><source_number>_<pointcloud_number>.<pointcloud_format>

    The NamePrefix argument sets the value of <source_name>. By default, <source_name> is the name of the data source from which the point clouds are extracted.

    Use this name-value argument only if the data source in the groundTruthLidar object is a VelodyneLidarSource, LasFileSequenceSource, CustomPointCloudSource, or RosbagSource object. You can determine the type of the data source from the DataSource property of the groundTruthLidar object. For other data sources, the lidarObjectDetectorTrainingData function ignores this argument.

    Data Types: char | string
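
    For example, a minimal sketch that sets an illustrative prefix by using the NamePrefix argument:

    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth,NamePrefix="roadScene");
    % Extracted files then follow the pattern roadScene<source_number>_<pointcloud_number>.<pointcloud_format>.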

    Flag to display writing progress in the MATLAB® command window, specified as one of these values:

    • true or 1 — Displays information about the write progress.

    • false or 0 — Does not display information about the write progress.

    Use this name-value argument only if the data source in the groundTruthLidar object is a VelodyneLidarSource, LasFileSequenceSource, CustomPointCloudSource, or RosbagSource object. You can determine the type of the data source from the DataSource property of the groundTruthLidar object. For other data sources, the lidarObjectDetectorTrainingData function ignores this argument.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
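
    For example, a minimal sketch that suppresses the write progress messages, assuming the flag is set through a Verbose name-value argument:

    [ptds,blds] = lidarObjectDetectorTrainingData(gTruth,Verbose=false);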

    Output Arguments


    Labeled data for training the network, returned as a table with two or more columns. The first column of the table contains point cloud file names with paths. Each of the remaining columns corresponds to a cuboid ROI label and contains the locations of the bounding boxes for that label in the point cloud sample specified in the first column. The bounding boxes are specified as an M-by-9 numeric matrix with rows of the form [xctr, yctr, zctr, xlen, ylen, zlen, xrot, yrot, zrot], where:

    • M is the number of labels in the frame.

    • xctr, yctr, and zctr specify the center of the cuboid.

    • xlen, ylen, and zlen specify the length of the cuboid along the x-axis, y-axis, and z-axis, respectively, before rotation has been applied.

    • xrot, yrot, and zrot specify the rotation angles for the cuboid along the x-axis, y-axis, and z-axis, respectively. These angles are clockwise-positive when looking in the forward direction of their corresponding axes.

    The following figure shows how these values determine the position of a cuboid.

    Figure: Cuboid with center point, lengths, and rotation angles labeled

    Data Types: table
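
    For example, a minimal sketch of reading one sample from the table (the Car column name depends on the label definitions in gTruth, and the sketch assumes the file names are stored as a cell array of character vectors):

    fileNames = trainingData{:,1};     % point cloud file names with paths
    ptCloud   = pcread(fileNames{1});  % load the first point cloud sample
    carBoxes  = trainingData.Car{1};   % M-by-9 cuboid matrix for the Car label in that sample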

    Extracted point cloud data, returned as a fileDatastore object. The point cloud data must contain at least one class label. The function ignores unlabeled point cloud data.
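
    For example, a minimal sketch that previews the file datastore, assuming the datastore reads each file into a pointCloud object:

    ptCloud = preview(ptds);   % first extracted point cloud sample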

    Extracted ROI labels, returned as a boxLabelDatastore object. The datastore contains M-by-9 matrices of M bounding boxes and categorical vectors of cuboid ROI label names.

    The bounding boxes are specified as an M-by-9 numeric matrix with rows of the form [xctr, yctr, zctr, xlen, ylen, zlen, xrot, yrot, zrot], where:

    • M is the number of labels in the frame.

    • xctr, yctr, and zctr specify the center of the cuboid.

    • xlen, ylen, and zlen specify the length of the cuboid along the x-axis, y-axis, and z-axis, respectively, before rotation has been applied.

    • xrot, yrot, and zrot specify the rotation angles for the cuboid along the x-axis, y-axis, and z-axis, respectively. These angles are clockwise-positive when looking in the forward direction of their corresponding axes.

    The following figure shows how these values determine the position of a cuboid.

    Figure: Cuboid with center point, lengths, and rotation angles labeled
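
    For example, a minimal sketch of reading one sample from the box label datastore:

    data   = read(blds);   % cell array: {M-by-9 cuboid matrix, M-by-1 categorical label vector}
    boxes  = data{1};
    labels = data{2};
    reset(blds);           % rewind the datastore after previewing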

    Version History

    Introduced in R2022a