Lane-Following Control with Monocular Camera Perception

This example shows how to simulate the combined behavior of a lane-following control algorithm and a monocular camera-based perception algorithm in a 3D simulation environment.

In this example, you:

  1. Explore the architecture of the simulation test bench model

  2. Visualize an open-loop test scenario

  3. Simulate the control algorithm with a probabilistic detection sensor

  4. Simulate the control algorithm with a vision processing algorithm

  5. Explore additional scenarios

Introduction

Control algorithms used in automated driving applications are often designed and tested with Simulink models for closed-loop simulation. Closed-loop simulation involves modeling vehicle dynamics and probabilistic sensor models in a simplified virtual environment. For more information on this workflow, see Lane Following Control with Sensor Fusion and Lane Detection. In that example, the controller is partitioned as a model reference that supports simulation and code generation.

Perception algorithms in automated driving applications are often designed and tested with MATLAB code using recorded data. For example, vision-based perception algorithms are often designed and tested against recorded video. For more information on this workflow, see Visual Perception Using Monocular Camera (Automated Driving Toolbox). In that example, recorded video is used to test a vision processing algorithm partitioned as a MATLAB class.

System engineers often want to explore how the combination of perception and controller algorithms affects system performance. Testing the integration of a controller with a vision-based perception algorithm requires a photorealistic simulation environment. In this example, you enable system-level simulation through integration with the Unreal Engine. The 3D simulation environment requires a Windows® 64-bit platform.

if ~ispc
    error("The 3D simulation environment requires a Windows 64-bit platform");
end

To ensure reproducibility of the simulation results, set the random seed.

rng(0)

Explore the Architecture of the Simulation Test Bench Model

In this example, you use a system-level simulation test bench model to explore the behavior of the control and vision processing algorithms for a lane-following system. Open the system-level simulation test bench model.

open_system("LaneFollowingWithMonoCameraTestBench")

The test bench model is partitioned into these components:

  1. Vision Detector Variant specifies the fidelity of the vision detection algorithm

  2. Control Algorithm specifies sensor fusion, lateral control, and longitudinal control

  3. Vehicle Dynamics specifies the dynamics model for the ego vehicle

  4. Simulation 3D Scenario specifies road, vehicles, and sensors

  5. Metrics Assessment assesses system level behavior

The Control Algorithm, Vehicle Dynamics, and Metrics Assessment subsystems are based on the Lane Following Control with Sensor Fusion and Lane Detection example.

The Vision Detector Variant subsystem allows you to select the fidelity of the vision detection algorithm based on the types of tests you want to run. A sketch of selecting a variant programmatically follows the variant descriptions below. Open the Vision Detector Variant subsystem.

open_system("LaneFollowingWithMonoCameraTestBench/Vision Detector Variant")

  • Probabilistic Detection Sensor variant enables you to test the integration of the Control Algorithm with the Simulation 3D Scenario subsystem. This variant uses a Vision Detection Generator block to synthesize vehicle and lane detections based on actor ground-truth positions. This configuration helps you verify interactions with the vehicles and the radar sensor in the 3D simulation environment.

  • Vision Processing Algorithm variant enables you to test the integration of the Control Algorithm with a vision processing algorithm in the 3D simulation environment. This variant uses a MATLAB-based lane boundary and vehicle detection algorithm adapted from the Visual Perception Using Monocular Camera (Automated Driving Toolbox) example. The primary difference from that example is the addition of a wrapper System object, helperMonoSensorWrapper.m, that packs the output data into the buses required by LfMonoCameraRefMdl. This System object also contains a lane tracker that improves lane detection performance in crowded conditions. Because the vision processing algorithm operates on the image returned by the camera sensor, the Vision Processing Algorithm variant takes longer to execute than the Probabilistic Detection Sensor variant.
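
In this example, the laneFollowingWithMonoCameraSetup helper function selects the active variant for you. If you want to switch variants directly, the following sketch shows one way to do so. It assumes that the Vision Detector Variant subsystem uses label-mode variant control and that the choice labels match the variant names; the setup helper normally manages this detail.

% Sketch of direct variant selection. "LabelModeActiveChoice" applies
% only to label-mode variant control, and the choice label below is an
% assumption based on the variant names described above.
set_param("LaneFollowingWithMonoCameraTestBench/Vision Detector Variant", ...
    "LabelModeActiveChoice", "ProbabilisticDetectionSensor");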

The Simulation 3D Scenario subsystem configures the road network, sets vehicle positions, and synthesizes sensors. Open the Simulation 3D Scenario subsystem.

open_system("LaneFollowingWithMonoCameraTestBench/Simulation 3D Scenario")

Notice how the scene and road network are specified:

  • Simulation 3D Scene Configuration block has the Scene description parameter set to Curved road.

  • Scenario Reader block is configured to use a driving scenario that contains a road network closely matching a section of the Curved road scene.

Notice how the positions of the vehicles are specified:

  • Ego inport controls the position of the ego vehicle, modeled by the Simulation 3D Vehicle with Ground Following 1 block. This block has its Name parameter set to SimulinkVehicle1.

  • Pose to Simulation 3D Vehicle block converts the ego pose from the pose coordinate system (with respect to the vehicle rear axle) to the Simulation 3D coordinate system (with respect to the vehicle center). A sketch of this axle-to-center shift follows this list.

  • Scenario Reader block outputs actor poses, which control the positions of the target Simulation 3D Vehicle with Ground Following blocks.

  • Pose to Simulation 3D Vehicle blocks similarly convert the target actor poses to the Simulation 3D coordinate system.
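
Conceptually, the axle-to-center conversion is a shift along the vehicle heading. The blocks perform the equivalent conversion internally; the following is a minimal planar sketch, where the pose layout [x y yaw] and the offset value are illustrative assumptions.

% Shift a planar pose [x y yaw] from the rear axle to the vehicle center.
d = 1.4;                      % m, assumed axle-to-center distance
poseRearAxle = [10 2 0.1];    % [x y yaw], yaw in radians
poseCenter = poseRearAxle + ...
    [d*cos(poseRearAxle(3)), d*sin(poseRearAxle(3)), 0]
% The inverse conversion (vehicle center back to rear axle) uses -d.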

Notice how sensors connected to the ego vehicle are specified:

  • Simulation 3D Camera Forward Facing block is attached to the ego vehicle.

  • Simulation 3D Probabilistic Radar block is attached to the ego vehicle.

  • Measurement Bias Center to Rear Axle block converts the coordinate system of the Simulation 3D Probabilistic Radar (with respect to the vehicle center) back to the pose coordinates (with respect to the vehicle rear axle). This is the inverse of the axle-to-center shift sketched above.

Visualize Open Loop Test Scenario

This example contains the helper function scenario_LFACC_03_Curve_StopnGo, which generates a driving scenario compatible with the LaneFollowingWithMonoCameraTestBench model. This is an open-loop scenario on a curved road that includes multiple target vehicles. The road centers and lane markings closely match a section of the Curved road scene provided with the 3D simulation environment. The scenario has the same number of vehicles as the model, and the vehicles have the same dimensions. In this scenario, a lead vehicle slows down in front of the ego vehicle while other vehicles travel in adjacent lanes.

Plot the scenario in open loop to see the interactions of the ego vehicle and target vehicles.

hFigScenario = plotLFScenario("scenario_LFACC_03_Curve_StopnGo");

Because the ego vehicle is not under closed-loop control, a collision occurs with the slower-moving lead vehicle. The goal of the closed-loop system is to follow the lane and maintain a safe distance from the lead vehicle. In the LaneFollowingWithMonoCameraTestBench model, the ego vehicle has the same initial velocity and initial position as in the open-loop scenario.
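
The plotLFScenario helper function encapsulates this visualization. Conceptually, open-loop playback of a driving scenario looks like the following minimal sketch, which assumes that the scenario function returns a drivingScenario object as its first output.

% Minimal open-loop playback sketch (the return signature of the
% scenario function is an assumption).
scenario = scenario_LFACC_03_Curve_StopnGo;
plot(scenario, "Waypoints", "on")   % top-down plot of roads and actors
while advance(scenario)             % step all actors one sample time
    pause(scenario.SampleTime)      % approximate real-time playback
end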

Simulate the Control Algorithm with a Probabilistic Detection Sensor

In this section, you test the interactions between the control algorithm and the 3D simulation environment. This test helps you verify interactions with the vehicles and the radar sensor. Configure the test bench model to use the scenario with the probabilistic detection sensor variant. Then run the simulation.

laneFollowingWithMonoCameraSetup(...
    "scenario_LFACC_03_Curve_StopnGo",...
    "ProbabilisticDetectionSensor");
sim("LaneFollowingWithMonoCameraTestBench");
   Assuming no disturbance added to measured output channel #3.
-->Assuming output disturbance added to measured output channel #2 is integrated white noise.
   Assuming no disturbance added to measured output channel #1.
-->Assuming output disturbance added to measured output channel #4 is integrated white noise.
-->The "Model.Noise" property of the "mpc" object is empty. Assuming white noise on each measured output channel.

Plot the lateral controller performance results.

hFigLatResults = plotLFLateralResults(logsout);

For this simulation, the:

  • Detected lane boundary lateral offsets plot shows the lateral offsets for the detected left and right lane boundaries. The detected values are close to the ground truth of the lane.

  • Lateral deviation plot shows the lateral deviation of the ego vehicle from the centerline of the lane. The lateral deviation is close to 0, which implies that the ego vehicle closely follows the centerline. Small deviations occur when the vehicle changes velocity to avoid a collision with another vehicle.

  • Relative yaw angle plot shows the relative yaw angle between the ego vehicle and the centerline of the lane. The relative yaw angle is very close to 0, which implies that the heading angle of the ego vehicle closely matches the yaw angle of the centerline.

  • Steering angle plot shows the steering angle of the ego vehicle. The steering angle trajectory is smooth.

Plot the longitudinal controller performance results.

hFigLongResults = plotLFLongitudinalResults(logsout,time_gap,default_spacing);

For this simulation, the:

  • Relative longitudinal distance plot shows the distance between the ego vehicle and the Most Important Object (MIO). The MIO represents the closest vehicle ahead of and in the same lane as the ego vehicle. In this case, the ego vehicle approaches the MIO and, at times, gets close to or exceeds the safe distance (the spacing policy that defines the safe distance is sketched after this list).

  • Relative longitudinal velocity plot shows the relative velocity between the ego vehicle and the MIO. In this example, the detection sensor reports only positions, so the tracker in the Control Algorithm estimates the velocity. The estimated velocity lags the actual (ground truth) MIO relative velocity.

  • Absolute acceleration plot shows that the controller commands the vehicle to decelerate when it gets too close to the MIO.

  • Absolute velocity plot shows that the ego vehicle initially follows the set velocity, but when the MIO slows down, the ego vehicle also slows down to avoid a collision.
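
The safe distance in the relative longitudinal distance plot grows with ego velocity according to a time-gap spacing policy. The following sketch shows the standard form of that policy using the time_gap and default_spacing workspace variables from this example; treating this as the controller's exact policy is an assumption, because the policy is defined in the referenced control example.

% Standard time-gap spacing policy: safe distance is a constant
% standstill spacing plus a term that grows with ego velocity.
egoVelocity  = 15;                                     % m/s, example value
safeDistance = default_spacing + time_gap*egoVelocity  % m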

During the simulation, signals were logged to the base workspace as logsout, and the output of the camera sensor was recorded to forwardFacingCamera.mp4. You can use the plotLFDetectionResults function to visualize the simulated detections, similar to how recorded data is explored in the Forward Collision Warning Using Sensor Fusion (Automated Driving Toolbox) example. You can also record the visualized detections to a video file to enable review by others who may not have access to MATLAB.

Plot the detection results from the logged data, generate a video, and open the video in the Video Viewer app.

hVideoViewer = plotLFDetectionResults(...
    logsout, "forwardFacingCamera.mp4", scenario, camera, radar, scenarioFcnName,...
    "RecordVideo", true,...
    "RecordVideoFileName", scenarioFcnName + "_PDS",...
    "OpenRecordedVideoInVideoViewer", true,...
    "VideoViewerJumpToTime", 10.2);

Play the generated video.

  • Front Facing Camera shows the image returned by the camera sensor. The left lane boundary is plotted in red and the right lane boundary in green. These lane boundaries are returned by the probabilistic detection sensor. Tracked detections are also overlaid on the video.

  • Birds-Eye Plot shows true vehicle positions, sensor coverage areas, probabilistic detections, and track outputs. The plot title includes the simulation time so that you can correlate events between the video and previous static plots.

Close figures.

close(hFigScenario)
close(hFigLatResults)
close(hFigLongResults)
close(hVideoViewer)

Simulate the Control Algorithm with a Vision Processing Algorithm

In this section, you test the control algorithm and the vision processing algorithm with the same scenario in the 3D simulation environment. This enables you to explore the effect of the vision processing algorithm on system performance. Configure the test bench model to use the same scenario with the vision processing algorithm variant.

laneFollowingWithMonoCameraSetup(...
    "scenario_LFACC_03_Curve_StopnGo",...
    "VisionProcessingAlgorithm");
sim("LaneFollowingWithMonoCameraTestBench");
   Assuming no disturbance added to measured output channel #3.
-->Assuming output disturbance added to measured output channel #2 is integrated white noise.
   Assuming no disturbance added to measured output channel #1.
-->Assuming output disturbance added to measured output channel #4 is integrated white noise.
-->The "Model.Noise" property of the "mpc" object is empty. Assuming white noise on each measured output channel.

Plot the lateral controller performance results.

hFigLatResults = plotLFLateralResults(logsout);

The left and right lane boundaries are detected, but the detections are noisier now that the vision processing algorithm is integrated. This noise affects the lateral deviation. The lateral deviation is still small, but it is larger than in the run with the probabilistic detection sensor variant.

Plot the longitudinal controller performance results.

hFigLongResults = plotLFLongitudinalResults(logsout,time_gap,default_spacing);

There are some discontinuities in the relative distance and relative velocity. These discontinuities are due to imperfections in the vision processing algorithm. Even with these discontinuities, the resulting ego acceleration and velocity are similar to the results obtained using the probabilistic detection sensor variant.
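
To quantify the difference between the two variants, you can post-process the logged signals. The following sketch computes an RMS lateral deviation from logsout. The signal name used here is hypothetical; check logsout.getElementNames for the names actually logged by the model.

% Sketch of post-processing the logged data. "Lateral Deviation" is a
% hypothetical signal name.
latDev = logsout.getElement("Lateral Deviation").Values;   % timeseries
rmsLatDev = sqrt(mean(latDev.Data.^2));                    % meters
fprintf("RMS lateral deviation: %.3f m\n", rmsLatDev)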

Plot the detection results from logged data, generate a video, and open the Video Viewer app.

hVideoViewer = plotLFDetectionResults(...
    logsout, "forwardFacingCamera.mp4", scenario, camera, radar, scenarioFcnName,...
    "RecordVideo", true,...
    "RecordVideoFileName", scenarioFcnName + "_VPA",...
    "OpenRecordedVideoInVideoViewer", true,...
    "VideoViewerJumpToTime", 10.2);

Explore Additional Scenarios

In the previous sections, you explored the scenario_LFACC_03_Curve_StopnGo scenario using both the ProbabilisticDetectionSensor and VisionProcessingAlgorithm variants. This example provides the following scenarios, all compatible with the LaneFollowingWithMonoCameraTestBench model:

  scenario_LF_01_Straight_RightLane
  scenario_LF_02_Straight_LeftLane
  scenario_LF_03_Curve_LeftLane
  scenario_LF_04_Curve_RightLane
  scenario_LFACC_01_Curve_DecelTarget
  scenario_LFACC_02_Curve_AutoRetarget
  scenario_LFACC_03_Curve_StopnGo
  scenario_LFACC_04_Curve_CutInOut
  scenario_LFACC_05_Curve_CutInOut_TooClose

These scenarios represent two types of testing.

  • The scenarios with the scenario_LF_ prefix enable you to test lane detection and following algorithms without obstruction by other vehicles. The vehicles still exist in the scenario, but are positioned such that they are not seen by the ego vehicle on the road.

  • The scenarios with the scenario_LFACC_ prefix enable you to test lane detection and lane following algorithms with other vehicles on the road.

Examine the comments in each file for more details about the road and vehicles in each scenario. You can configure the LaneFollowingWithMonoCameraTestBench model and workspace to simulate these scenarios by using the laneFollowingWithMonoCameraSetup function.

For example, while learning about the effects of a camera-based lane detection algorithm on closed-loop control, it can be helpful to begin with a scenario that has a road but no vehicles. You can use the following code to configure the model and workspace for such a scenario.

laneFollowingWithMonoCameraSetup(...
    "scenario_LF_04_Curve_RightLane",...
    "VisionProcessingAlgorithm");

Conclusion

This example showed how to enable system-level simulation of controls and vision-based perception algorithms by integrating a control algorithm with a lane boundary and vehicle detection algorithm. You learned techniques to simulate and analyze the impact of control and perception algorithms on system performance using sample algorithms provided with this example. You can apply these same techniques to test your perception and control algorithms.

close(hFigLatResults)
close(hFigLongResults)
close(hVideoViewer)
bdclose("LaneFollowingWithMonoCameraTestBench")
clear hFigLatResults hFigLongResults hFigScenario hVideoViewer
clear logsout
