Overview of Scenario Generation from Recorded Sensor Data
The Scenario Builder for Automated Driving Toolbox™ support package enables you to create virtual driving scenarios from vehicle data recorded using various sensors, such as a global positioning system (GPS), inertial measurement unit (IMU), camera, and lidar. To create virtual driving scenarios, you can use raw sensor data as well as recorded actor track lists or lane detections. Using these virtual driving scenarios, you can mimic real-world driving conditions and evaluate autonomous driving systems in a simulation environment.
Scenario generation from recorded sensor data involves these steps:
Preprocess input data.
Extract ego vehicle information.
Extract scene information.
Extract non-ego actor information.
Create, simulate, and export scenario.
Preprocess Input Data
Scenario Builder for Automated Driving Toolbox supports a variety of sensor data. You can load recorded data from GPS, IMU, camera, or lidar sensors into MATLAB®. You can also load processed lane detections and actor track list data to create a virtual scenario.
After loading the data into MATLAB, you can perform these preprocessing steps on sensor data:
Align the recorded timestamp range of different sensors by cropping their data into a common timestamp range. The Scenario Builder for Automated Driving Toolbox support package supports timestamp values in the POSIX® format.
Normalize and convert the timestamp values into units of seconds.
Organize the sensor data into formats that Scenario Builder for Automated Driving Toolbox supports. For more information, see Preprocess Lane Detections for Scenario Generation.
Specify the region of interest (ROI) in the GPS data for which you want to create a scenario. Use the getMapROI function to get the coordinates of a geographic bounding box from the GPS data. To visualize geographic data, use the geoplayer object.
Convert geographic coordinates to local Cartesian coordinates using the latlon2local function.
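For example, this minimal sketch shows the preprocessing flow. The variables lat, lon, alt, and gpsTime are placeholders for your recorded GPS data, and the zoom level and choice of local origin are assumptions.
% Normalize the POSIX timestamps to seconds relative to the first sample.
gpsTime = gpsTime - gpsTime(1);
% Get the coordinates of a geographic bounding box that covers the route.
mapStruct = getMapROI(lat,lon);
% Visualize the recorded route on a map.
player = geoplayer(lat(1),lon(1),16);
plotRoute(player,lat,lon);
% Convert geographic coordinates to local Cartesian coordinates,
% using the first GPS sample as the local origin (an assumption).
origin = [lat(1) lon(1) alt(1)];
[xEast,yNorth,zUp] = latlon2local(lat,lon,alt,origin);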
Extract Ego Vehicle Information
The local Cartesian coordinates that you obtain from the latlon2local function specify the ego waypoints. Because these waypoints are directly extracted from raw GPS data, they often suffer from GPS noise due to multipath propagation. You can smooth this data to remove noise and better localize the ego vehicle. For more information on smoothing GPS data, see Smooth GPS Waypoints for Ego Localization. Then, generate the ego trajectory from the waypoints and the corresponding time information using the waypointTrajectory (Sensor Fusion and Tracking Toolbox) System object™.
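For example, this sketch smooths the waypoints and generates the trajectory. The smoothing method and window length are assumptions; the Smooth GPS Waypoints for Ego Localization example shows a more complete workflow.
% Smooth the noisy waypoints. The Gaussian window length is an
% assumption; tune it for your data.
waypoints = smoothdata([xEast yNorth zUp],"gaussian",20);
% Generate the ego trajectory from the waypoints and their timestamps.
egoTrajectory = waypointTrajectory(waypoints,gpsTime);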
To improve road-level localization of the ego vehicle, you can fuse the information from GPS and IMU sensors. For more information, see Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation. To get lane-level localization of the ego vehicle, you can use lane detections and HD map data. For more information, see Ego Localization Using Lane Detections and HD Map for Scenario Generation.
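As a rough sketch of such a fusion, you can run an INS filter from Sensor Fusion and Tracking Toolbox over the recorded readings. The variables accelData, gyroData, and gpsVel are placeholders, the loop assumes one IMU sample per GPS fix, and the sample rate and unit covariances are arbitrary; the linked example shows the complete workflow.
% Fuse IMU and GPS readings with an INS filter.
filt = insfilterNonholonomic;
filt.IMUSampleRate = 100;                        % Hz, assumed IMU rate
filt.ReferenceLocation = [lat(1) lon(1) alt(1)]; % local origin
for i = 1:numel(gpsTime)
    % Predict the state with the IMU sample.
    predict(filt,accelData(i,:),gyroData(i,:));
    % Correct the state with the GPS fix. The unit covariances are
    % placeholders for your sensor specifications.
    fusegps(filt,[lat(i) lon(i) alt(i)],eye(3),gpsVel(i,:),eye(3));
end
% Query the fused position and orientation of the ego vehicle.
[egoPos,egoOrient] = pose(filt);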
Extract Scene Information
To extract scene information, you must have road parameters and lane information. Use the roadprops function to extract road parameters from the desired geographic ROI. You can extract road parameters from these sources:
OpenStreetMap web service
HERE HD Live Map 1 (HDLM) web service
Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0) 2
The function extracts parameters for any road within the ROI. To generate a scenario, you need only the roads on which the ego vehicle is traveling. Use the selectActorRoads function to get the ego-specific roads.
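For example, a minimal sketch, assuming an OpenStreetMap file that covers the ROI. The file name is a placeholder, and the argument forms are assumptions; see the roadprops and selectActorRoads reference pages for the exact syntaxes.
% Extract road parameters from an OpenStreetMap file covering the ROI.
roadData = roadprops("OpenStreetMap","routeMap.osm");
% Keep only the roads on which the ego vehicle travels, based on its
% waypoints.
egoRoads = selectActorRoads(roadData,waypoints);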
Standard definition (SD) map data often lacks detailed lane information, which is essential for navigation in an autonomous system. You can use the updateLaneSpec function to get detailed lane information from recorded lane detections. To extract lane information from raw camera data, see Extract Lane Information from Recorded Camera Data for Scene Generation.
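For example, a one-line sketch, assuming the function accepts the ego-specific road data and the recorded lane detections; see the updateLaneSpec reference page for the exact syntax.
% Enrich the SD map roads with lane specifications derived from the
% recorded lane detections. The argument order is an assumption.
updatedRoads = updateLaneSpec(egoRoads,laneDetections);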
You can also create a scene from recorded lidar data. For more information, see Generate RoadRunner Scene from Recorded Lidar Data.
Extract Non-Ego Actor Information
After extracting ego information and road parameters, you must use non-ego actor information to create a driving scenario. Use the actorprops function to extract non-ego actor parameters from the actor track list data. The function extracts various non-ego parameters, including speed, yaw, and entry and exit times. For more information, see the actorprops function.
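For example, a minimal sketch, assuming a recorded track list and the ego trajectory as inputs; see the actorprops reference page for the exact syntax.
% Extract non-ego actor parameters, such as speed, yaw, and entry and
% exit times, from the recorded actor track list.
actorData = actorprops(tracklist,egoTrajectory);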
To extract an actor track list from camera data, see Extract Vehicle Track List from Recorded Camera Data for Scenario Generation. You can also extract a vehicle track list from recorded lidar data. For more information, see Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation.
Create, Simulate, and Export Scenario
Create a driving scenario using a drivingScenario object. Use this object to add a road network and specify actors and their trajectories from your extracted parameters. For more information on how to create and simulate a scenario, see Generate Scenario from Actor Track List and GPS Data.
You can export the generated scenario to the ASAM OpenSCENARIO® file format using the export function of the drivingScenario object.
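For example, this sketch assembles, simulates, and exports a scenario. The roadCenters, waypoint, and speed variables are placeholders for your extracted parameters.
% Create an empty driving scenario and add a road from extracted centers.
scenario = drivingScenario;
road(scenario,roadCenters);
% Add the ego vehicle and a non-ego actor with their trajectories.
egoVehicle = vehicle(scenario,ClassID=1);
trajectory(egoVehicle,egoWaypoints,egoSpeed);
nonEgoVehicle = vehicle(scenario,ClassID=1);
trajectory(nonEgoVehicle,actorWaypoints,actorSpeed);
% Simulate the scenario to completion.
while advance(scenario)
end
% Export the scenario to the ASAM OpenSCENARIO file format.
export(scenario,"OpenSCENARIO","generatedScenario.xosc");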
Using a roadrunnerHDMap object, you can also create a RoadRunner HD Map from road network data that you have updated using lane detections. The RoadRunner HD Map enables you to build a RoadRunner scene. For more information, see the Generate RoadRunner Scene from Recorded Lidar Data example.
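For example, a minimal sketch using the RoadRunner HD Map MATLAB API; the lane identifier, geometry, and attribute values are placeholders.
% Assemble a RoadRunner HD Map with one lane built from extracted
% lane-center geometry (laneCenters is a placeholder).
rrMap = roadrunnerHDMap;
rrMap.Lanes(1) = roadrunner.hdmap.Lane(ID="Lane1", ...
    Geometry=laneCenters,TravelDirection="Forward",LaneType="Driving");
% Write the map to a file that you can import into RoadRunner.
write(rrMap,"egoRoads.rrhd");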
You can export actor trajectories to CSV files and generate a RoadRunner scenario by importing the CSV trajectories into RoadRunner Scenario. For more information, see Generate RoadRunner Scenario from Recorded Sensor Data.
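For example, this sketch writes one trajectory as time-stamped waypoints to a CSV file; the column layout is an assumed format for import into RoadRunner Scenario.
% Write the time-stamped ego waypoints to a CSV file.
trajTable = table(gpsTime,xEast,yNorth,zUp, ...
    VariableNames=["time","x","y","z"]);
writetable(trajTable,"egoTrajectory.csv");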
You can create multiple variations of a generated scenario to perform additional testing of automated driving functionalities. For more information, see Overview of Scenario Variant Generation.
See Also
Functions
getMapROI | latlon2local | roadprops | selectActorRoads | updateLaneSpec | actorprops
Objects
geoplayer | waypointTrajectory | drivingScenario | roadrunnerHDMap
Related Topics
- Smooth GPS Waypoints for Ego Localization
- Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation
- Ego Localization Using Lane Detections and HD Map for Scenario Generation
- Preprocess Lane Detections for Scenario Generation
- Extract Lane Information from Recorded Camera Data for Scene Generation
- Generate High Definition Scene from Lane Detections and OpenStreetMap
- Generate Scenario from Actor Track List and GPS Data
- Generate RoadRunner Scenario from Recorded Sensor Data
1 You need to enter into a separate agreement with HERE in order to gain access to the HDLM services and to get the required credentials (access_key_id and access_key_secret) for using the HERE Service.
2 To gain access to the Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0) service and get the required credentials (a client ID and secret key), you must enter into a separate agreement with ZENRIN DataCom CO., LTD.