Extract Lane Information from Recorded Camera Data for Scene Generation
This example shows how to extract the lane information required for generating high-definition scenes from raw camera data.
Lane boundaries are crucial for interpreting the position and motion of moving vehicles. They are also useful for localizing vehicles on a map.
In this example, you:
- Detect lane boundaries from recorded forward-facing monocular camera images using a pretrained deep learning model.
- Transform detected lane boundaries from image coordinates to real-world vehicle coordinates using monocular camera sensor parameters.
- Track noisy lane boundaries by using laneBoundaryTracker to reduce lane detection errors.
You can use the filtered lane information to generate a high-definition road scene. For more information about creating a scene from lane information, see the Generate High Definition Scene from Lane Detections and OpenStreetMap example.
Load Camera Sensor Data
Download a ZIP file containing the camera sensor data and camera parameters, and then unzip the file. This data set was collected using a forward-facing camera mounted on an ego vehicle.
dataFolder = tempdir;
dataFilename = "PolysyncSensorData_23a.zip";
url = "https://ssd.mathworks.com/supportfiles/driving/data/"+dataFilename;
filePath = fullfile(dataFolder, dataFilename);
if ~isfile(filePath)
    websave(filePath,url);
end
unzip(filePath, dataFolder);
dataset = fullfile(dataFolder,"PolysyncSensorData");
data = load(fullfile(dataset,"sensorData.mat"));
monocamData = data.CameraData;
monocamData
monocamData is a table with two columns:
- timeStamp — Time, in microseconds, at which the image data was captured.
- fileName — Filenames of the images in the data set.
The images are located in the Camera folder in the dataset directory. Create a table that contains the file paths of these images with their relative timestamps by using the helperUpdateTable function.
imageFolder = "Camera";
monocamData = helperUpdateTable(monocamData,dataset,imageFolder);
Display the first image from monocamData.
img = imread(monocamData.filePath{1});
imshow(img)
Detect Lane Boundaries
In this example, you use a pretrained deep neural network model to detect lane boundaries. Download the pretrained model, available as a ZIP file, and then unzip the file. This model requires the Deep Learning Toolbox™ Converter for ONNX™ Model Format support package, which you can install from the Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons. The downloaded model is 90 MB.
modelFilename = "CameraLaneDetectorRESA.zip";
modelUrl = "https://ssd.mathworks.com/supportfiles/driving/data/"+modelFilename;
filePath = fullfile(dataFolder,modelFilename);
if ~isfile(filePath)
    websave(filePath,modelUrl);
end
unzip(filePath,dataFolder);
modelFolder = fullfile(dataFolder,"CameraLaneDetectorRESA");
model = load(fullfile(modelFolder,"cameraLaneDetectorRESA.mat"));
laneBoundaryDetector = model.net;
Specify these parameters for the lane boundary detection model.
- Threshold — Confidence score threshold for boundary detection. The model ignores detections with confidence scores below this value. If you observe false detections, try increasing this value.
- CropHeight — Crop height of the detection image. The model crops each image to the specified height, removing everything above that line and processing the image from the bottom up to that height. Specify this value to remove the parts of the image above the road.
- ExecutionEnvironment — Execution environment, specified as "cpu", "gpu", or "auto".
params.threshold = 0.5;
params.cropHeight = 190;
params.executionEnvironment = "auto";
The helperDetectLaneBoundaries function detects lane boundaries for each image in monocamData. Note that, depending on your hardware configuration, this function can take a significant amount of time to run.
laneBoundaries = helperDetectLaneBoundaries(laneBoundaryDetector,monocamData,params);
laneBoundaries is an M-by-6 cell array containing lane boundary points in image coordinates, where M is the number of images. The six columns represent the lanes detected in the image, from left to right. An empty cell indicates that no lane was detected for that timestamp.
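To make this layout concrete, here is a small sketch in Python (with hypothetical pixel values, not part of the MATLAB example) of an M-by-6 structure like this one, showing how to count the detected boundaries per image by skipping the empty cells:

```python
# Hypothetical stand-in for the M-by-6 detection output: each row is one
# image, and each of the 6 columns is one candidate lane boundary, from
# left to right. An empty list plays the role of an empty cell.
lane_boundaries = [
    [[], [(120, 400), (130, 380)], [(300, 400), (302, 380)],
     [(480, 400), (470, 380)], [], []],             # image 1: 3 boundaries
    [[], [], [(295, 400), (298, 380)], [], [], []]  # image 2: 1 boundary
]

# Count detected boundaries per image, skipping the empty cells.
counts = [sum(1 for cell in row if cell) for row in lane_boundaries]
print(counts)  # [3, 1]
```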
Note: The deep learning model detects only the lane boundaries. It does not classify the lane boundaries into classes such as solid and dashed.
Read an image from the camera data.
imgIdx = 10;
I = imread(monocamData.filePath{imgIdx});
Overlay the lane boundaries on the image, and display the overlaid image by using the helperViewLaneOnImage function.
helperViewLaneOnImage(laneBoundaries(imgIdx,:),I)
Transform Lane Boundary Coordinates to Vehicle Coordinates
To generate a real-world scene, you must transform the detected lane boundaries from image coordinates to vehicle coordinates using the camera parameters. If you do not know the camera parameters, you can estimate them. For more information about estimating camera parameters, see Calibrate a Monocular Camera. You can also use the estimateMonoCameraFromScene function to estimate approximate camera parameters directly from a camera image.
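For intuition, under a flat-road, zero-pitch assumption this transformation reduces to intersecting the back-projected pixel ray with the ground plane. The following Python sketch illustrates that geometry only; it is not the actual imageToVehicle implementation, and the helper name is made up. The numeric values match the intrinsics, camera height, and sensor location used later in this example.

```python
def image_to_vehicle(u, v, fx, fy, cx, cy, cam_height, cam_x_offset=0.0):
    """Map an image pixel to road-plane vehicle coordinates, assuming a
    flat road and zero camera pitch/yaw/roll.
    Camera frame: X right, Y down, Z forward. Returns (x_forward, y_left)."""
    if v <= cy:
        raise ValueError("pixel at or above the horizon cannot lie on the road")
    # Back-projected ray: (X, Y, Z) = t * ((u - cx)/fx, (v - cy)/fy, 1).
    # Intersect it with the ground plane Y = cam_height (Y points down).
    t = cam_height * fy / (v - cy)
    z_forward = t                  # distance ahead of the camera, in meters
    x_right = t * (u - cx) / fx    # lateral offset, in meters, right positive
    # Vehicle frame: x forward from the vehicle origin, y to the left.
    return z_forward + cam_x_offset, -x_right

# A pixel near the bottom of the 640-by-480 image, slightly right of center,
# with fx = fy = 800, cx = 320, cy = 240, a 1.1 m camera height, and a
# sensor mounted 2.1 m forward of the vehicle origin.
x, y = image_to_vehicle(340, 460, fx=800, fy=800, cx=320, cy=240,
                        cam_height=1.1, cam_x_offset=2.1)
print(round(x, 2), round(y, 2))  # 6.1 -0.1
```

Pixels closer to the horizon row (v near cy) map to points much farther ahead, which is why distant lane detections are noisier in vehicle coordinates.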
Specify the camera intrinsic parameters of focal length (fx, fy), principal point (cx, cy), and image size.
intrinsics = data.Intrinsics
intrinsics = struct with fields:
fx: 800
fy: 800
cx: 320
cy: 240
imageSize: [480 640]
Create a cameraIntrinsics object.
focalLength = [intrinsics.fx intrinsics.fy];
principalPoint = [intrinsics.cx intrinsics.cy];
imageSize = intrinsics.imageSize;
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
Create a monoCamera object using the camera intrinsic parameters, height, and location. Display the object properties.
camHeight = data.cameraHeight;
camLocation = data.cameraLocation;
sensorParams = monoCamera(intrinsics,camHeight,"SensorLocation",camLocation)
sensorParams = 
  monoCamera with properties:
        Intrinsics: [1×1 cameraIntrinsics]
        WorldUnits: 'meters'
            Height: 1.1000
             Pitch: 0
               Yaw: 0
              Roll: 0
    SensorLocation: [2.1000 0]
The helperDetectionsToVehicle function transforms the lane boundary points from image coordinates to vehicle coordinates. The function also fits a parabolic lane boundary model of the form y = ax^2 + bx + c to the transformed boundary points by using the fitPolynomialRANSAC function.
transformedDetections = helperDetectionsToVehicle(laneBoundaries,sensorParams);
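For intuition about the RANSAC step, this simplified Python sketch (a conceptual stand-in, not the actual fitPolynomialRANSAC implementation) fits exact parabolas through random 3-point samples and keeps the model with the most inliers within the 0.2 m tolerance used in this example:

```python
import random

def parabola_through(p1, p2, p3):
    """Exact parabola y = a*x^2 + b*x + c through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / d
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / d
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / d
    return a, b, c

def ransac_parabola(points, inlier_tol=0.2, iterations=200, seed=100):
    """Simplified RANSAC: fit parabolas through random 3-point samples and
    keep the model with the most inliers within inlier_tol (meters)."""
    rng = random.Random(seed)
    best, best_count = None, -1
    for _ in range(iterations):
        sample = rng.sample(points, 3)
        if len({p[0] for p in sample}) < 3:
            continue  # degenerate sample: repeated x-coordinates
        a, b, c = parabola_through(*sample)
        count = sum(abs(a*x*x + b*x + c - y) <= inlier_tol for x, y in points)
        if count > best_count:
            best, best_count = (a, b, c), count
    return best

# Lane boundary points sampled from a known parabola over the 3 m to 30 m
# extent, plus one spurious detection that RANSAC rejects as an outlier.
pts = [(x, 0.01*x*x - 0.05*x + 1.8) for x in range(3, 31)]
pts.append((15.0, 5.0))
a, b, c = ransac_parabola(pts)
```

Because the winning model is fit only to consensus samples, the single spurious point does not pull the recovered coefficients away from the underlying curve.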
Visualize the transformed points in a bird's-eye view.
currentFigure = figure(Position=[0 0 1400 600],Name="Lane Detections");
hPlot = axes(uipanel(currentFigure,Position=[0 0 0.5 1],Title="Transformed Lanes"));
bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlot);
cam = axes(uipanel(currentFigure,Position=[0.5 0 0.5 1],Title="Camera View"));
helperPlotDetectedLanesBEV(bep,cam,transformedDetections,monocamData)
Track Lane Boundaries Using laneBoundaryTracker
If your lane boundary detections contain noise or are not consistent, you can track them by using laneBoundaryTracker to obtain consistent boundaries.
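The tracker's internals are more involved than this, but the benefit of filtering noisy per-frame estimates can be illustrated with a minimal one-dimensional Kalman filter in Python. The offset values are hypothetical, and this is not how laneBoundaryTracker is implemented; the noise values only loosely echo the MeasurementNoise and ProcessNoise settings used in this example.

```python
def kalman_smooth(measurements, process_var=1e-5, meas_var=0.5):
    """One-dimensional Kalman filter with a random-walk state model,
    applied to a noisy sequence of per-frame estimates."""
    x, p = measurements[0], meas_var
    smoothed = [x]
    for z in measurements[1:]:
        p += process_var          # predict: state assumed nearly constant
        k = p / (p + meas_var)    # Kalman gain
        x += k * (z - x)          # correct with the new measurement
        p *= (1 - k)
        smoothed.append(x)
    return smoothed

# Hypothetical noisy per-frame estimates of a lane boundary's lateral
# offset, in meters. The filtered sequence varies much less frame to frame.
noisy = [1.8, 2.1, 1.6, 1.9, 2.2, 1.7, 1.8, 2.0]
smooth = kalman_smooth(noisy)
```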
Create a laneBoundaryTracker object, and specify these properties:
- MeasurementNoise — Specify as 0.5. This value sets the uncertainty in the lane boundary detections.
- PostProcessingFcn — Specify as the helperPostProcessLaneBoundaries function to obtain the tracked lane boundaries as laneData objects.
lbTracker = laneBoundaryTracker( ...
    MeasurementNoise=0.5, ...                           % Measurement noise covariance
    PostProcessingFcn=@helperPostProcessLaneBoundaries) % Function handle to customize the output of the tracker
lbTracker = 
  laneBoundaryTracker with properties:
              LaneBoundaryModel: 'Parabolic'
            AssignmentThreshold: 20
          ConfirmationThreshold: 0.9500
              DeletionThreshold: 0.0500
           DetectionProbability: 0.9000
              FalseAlarmDensity: 1.0000e-06
    InitialExistenceProbability: 0.9000
               MeasurementNoise: 0.5000
                   ProcessNoise: 1.0000e-05
               PreProcessingFcn: @laneBoundaryTracker.preProcessParabolicLaneBoundaries
              PostProcessingFcn: @helperPostProcessLaneBoundaries
Specify the detection timestamps for the tracker. Note that the tracker requires timestamps in seconds, so you must convert the sensor timestamps from microseconds to seconds. Call the tracker with the lane boundary detections and their corresponding timestamps, and specify the ShowProgress name-value argument as true to display a progress bar while tracking the lane boundaries in batch mode.
% Load the timestamps generated by the sensor.
timeStamps = double(monocamData.timeStamp);
% The timestamps of the camera sensor are in microseconds. Convert them to
% seconds, and offset from the first timestamp.
tsecs = timeStamps*(10^-6);
tsecs = tsecs - tsecs(1);
% Track the lane boundary detections.
trackedLaneBoundaries = lbTracker(transformedDetections,tsecs,ShowProgress=true);
Visualize and compare the lane boundaries before and after tracking.
currentFigure = figure(Name="Compare Lane Boundaries",Position=[0 0 1400 600]);
hPlotSmooth = axes(uipanel(currentFigure,Position=[0 0 0.5 1],Title="Tracked Boundaries"));
bepTracked = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlotSmooth);
hPlot = axes(uipanel(currentFigure,Position=[0.5 0 0.5 1],Title="Detected Boundaries"));
bep = birdsEyePlot(XLim=[0 30],YLim=[-20 20],Parent=hPlot);
helperCompareLanes(bepTracked,trackedLaneBoundaries,bep,transformedDetections);
You can use the laneData object trackedLaneBoundaries as the first input argument to the updateLaneSpec function. Using this function, you can map the tracked lane boundaries to a standard-definition road network to create a high-definition road scene. For more information, see the Generate High Definition Scene from Lane Detections and OpenStreetMap example.
Display the tracked lane boundary data.
trackedLaneBoundaries
trackedLaneBoundaries = 
  laneData with properties:
           TimeStamp: [713×1 double]
    LaneBoundaryData: {713×1 cell}
     LaneInformation: [713×3 struct]
           StartTime: 0.0500
             EndTime: 35.6422
          NumSamples: 713
You can use the trackedLaneBoundaries data in the Generate High Definition Scene from Lane Detections and OpenStreetMap example to generate an ASAM OpenDRIVE® or RoadRunner scene from these detections.
Helper Functions
helperDetectLaneBoundaries — Detect lane boundaries in images.
function dets = helperDetectLaneBoundaries(model,images,params)
% Set the network input size to [368 640].
params.networkInputSize = [368 640];
% Initialize the detections.
dets = [];
numImages = size(images,1);
f = waitbar(0,"Detecting lanes...");
% Loop over the images and detect lane boundaries.
for i = 1:numImages
    I = imread(images.filePath{i});
    waitbar(i/numImages,f,"Detecting lanes...")
    detections = helperDetectLaneOnSingleImage(model,I,params,params.executionEnvironment,FitPolyLine=false);
    % Append the detections.
    dets = [dets; detections]; %#ok<AGROW>
end
close(f)
end
helperDetectionsToVehicle — Transform points from image to vehicle coordinates by using the imageToVehicle function, and fit a parabolic model to them by using the fitPolynomialRANSAC function.
function converted = helperDetectionsToVehicle(laneMarkings,sensor)
% Convert detections from the image frame to the vehicle frame, and fit a
% polynomial to the detections.
numFrames = size(laneMarkings,1);
numLanes = size(laneMarkings,2);
converted = cell(1,numFrames);
s = rng(100);
C = onCleanup(@()rng(s));
% Specify the boundary model as parabolicLaneBoundary.
model = @parabolicLaneBoundary;
% Specify the polynomial degree as 2 for parabolic lane boundaries.
degree = 2;
% Consider detections from 3 meters to 30 meters ahead of the vehicle.
extent = [3 30];
% Set the maximum distance for points to count as inliers of the lane
% boundary model to 0.2 meters.
n = 0.2;
% For each frame.
for i = 1:numFrames
    % For each detected lane boundary.
    laneBoundaries = cell(1,numLanes);
    for j = 1:numLanes
        if ~isempty(laneMarkings{i,j})
            imageCoords = laneMarkings{i,j};
            % The imageToVehicle function performs the transformation.
            vehicleCoords = imageToVehicle(sensor,imageCoords);
            % Fit a parabolic model to the points.
            p = fitPolynomialRANSAC(vehicleCoords,degree,n);
            % Create the parabolicLaneBoundary object.
            boundaries = model(p);
            boundaries.XExtent = extent;
            laneBoundaries{j} = boundaries;
        end
    end
    laneBoundaries = laneBoundaries(~cellfun("isempty",laneBoundaries));
    converted{i} = laneBoundaries;
end
end
helperPostProcessLaneBoundaries — Customize the tracker output, and convert the lane boundaries into the laneData format.
function trackedLanes = helperPostProcessLaneBoundaries(tracks,varargin)
% helperPostProcessLaneBoundaries returns a laneData object containing the
% tracked lane boundaries.
% Use the default post-processing function, and customize its output into
% the laneData format.
lbTracks = laneBoundaryTracker.postProcessParabolicLaneBoundaries(tracks,varargin{:});
% Create an empty laneData object.
trackedLanes = laneData;
for i = 1:numel(lbTracks)
    boundaries = lbTracks{i};
    info = struct('TrackID',{});
    for j = 1:numel(boundaries)
        info(j) = struct('TrackID',boundaries{j}.TrackID); %#ok<AGROW>
    end
    if ~isempty(boundaries)
        lbs = [boundaries{:}];
        % Add the boundaries to the laneData object.
        trackedLanes.addData(boundaries{1}.UpdateTime,[lbs.LaneBoundary],LaneInformation=info)
    end
end
end
See Also
monoCamera | cameraIntrinsics | trackerGNN (Sensor Fusion and Tracking Toolbox) | singer (Sensor Fusion and Tracking Toolbox) | getMapROI | roadprops | selectActorRoads
Related Topics
- Overview of Scenario Generation from Recorded Sensor Data
- Smooth GPS Waypoints for Ego Localization
- Generate High Definition Scene from Lane Detections and OpenStreetMap
- Generate RoadRunner Scene from Recorded Lidar Data
- Extract Vehicle Track List from Recorded Camera Data for Scenario Generation
- Generate Scenario from Actor Track List and GPS Data