Scenario Harvesting Using Automated Driving Toolbox and RoadRunner Scenario - MATLAB & Simulink
    Scenario Harvesting Using Automated Driving Toolbox and RoadRunner Scenario

    Krishna Koravadi, Aptiv

    Scenario harvesting is a workflow for generating digital twin data from vehicle logs utilizing Automated Driving Toolbox™ and RoadRunner.

    This workflow consists of multiple data processing tasks, including:

    • GPS and IMU sensor fusion to generate ego trajectory

    • Ego localization using lane detections and a high-definition map 

    • Target vehicle trajectory creation from recorded lidar, radar, and vision sensor data

    RoadRunner Scenario exports the created scenarios to OpenSCENARIO, which is used for regression testing of advanced driver-assistance system (ADAS) and autonomous driving (AD) algorithms.

    Published: 26 May 2023

    Hello, everyone. Welcome to this session at the MathWorks Automotive Conference. It's great to see you all in person. I am Seo-Wook Park, an application engineer for [INAUDIBLE] and automated driving at MathWorks.

    Hi, my name is Krishna Koravadi, engineering manager at Aptiv. Today, Seo-Wook and I will be talking about scenario harvesting from recorded sensor data using Automated Driving Toolbox and RoadRunner Scenario. We developed this solution in collaboration with MathWorks.

    Today, we will talk about: motivation; what scenario harvesting is; creating the ego trajectory using sensor fusion and localization; creating target trajectories from onboard sensors; exporting OpenSCENARIO from RoadRunner Scenario; validating RoadRunner Scenario and OpenSCENARIO; key takeaways; and Q&A.

    Motivation: generate digital twins of interesting real-world scenarios; use the digital twins for closed-loop algorithm regression testing; evaluate the performance of features and functions before the software is released; and use them in the CI/CD pipeline to ensure software quality.

    What is scenario harvesting? Here, all the recorded sensor data from the test vehicle drives is fed into the scenario harvesting solution, and the scenario harvesting solution outputs OpenSCENARIO files. These OpenSCENARIO files are utilized for ADAS/AD software validation.

    Here is a block diagram which shows how this data is used for ADAS/AD software validation. The [INAUDIBLE] software under test is integrated into the Simulink Docker container, which also contains the vehicle dynamics and sensor models. The OpenSCENARIO files generated from scenario harvesting, along with scenario variations, are fed into the Docker container, and the tests are controlled using Simulink Test.

    So before we dive deep into the details, I would like to give a high-level overview. All the sensor data from the vehicle, such as GPS, lane data, and the other sensors like lidar and radar, is used in two steps. First, we generate the ego trajectory utilizing the GPS/IMU data and the lane data from the camera, localized using the map data. Second, we generate non-ego trajectories utilizing the other sensor data, such as camera, radar, and lidar.

    This schematic diagram looks too complicated to understand at a glance, so I'd like to explain the overall workflow in four steps: ego trajectory, non-ego trajectory, scenario generation, and scenario validation.

    Let's begin with the first step, generating an ego trajectory. Ego typically means the host vehicle. The scenario consists of a static scene and dynamic actors. First, we create a scene from the GPS route and an HD map using the RoadRunner Scene Builder tool. After removing unwanted roads, we can build and save the scene in the RoadRunner HD Map format.

    We can overlay the ego trajectory onto the scene. We use the GPS and IMU sensors to estimate the ego trajectory more accurately, using the insfilterAsync function from Sensor Fusion and Tracking Toolbox. As you can see, the raw GPS route has a low sample rate and does not provide lane-level accuracy. Through GPS/IMU fusion, we improve the accuracy of the ego trajectory at a higher sample rate.
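
    As a rough illustration of this fusion step, here is a minimal MATLAB sketch using insfilterAsync. The sample rate, variable names, noise covariances, and log layout are placeholder assumptions, not values from the talk.

        % Minimal GPS/IMU fusion sketch using insfilterAsync (Sensor Fusion and
        % Tracking Toolbox). Logged signals, rates, and covariances are illustrative.
        filt = insfilterAsync("ReferenceFrame", "ENU");

        imuRate = 100;                          % Hz (assumed)
        dt      = 1/imuRate;
        fusedPos = zeros(numel(imuTime), 3);    % imuTime, accel, gyro, gpsTime,
                                                % gpsLLA, gpsVel come from the log

        for k = 1:numel(imuTime)
            predict(filt, dt);                  % propagate the filter state
            fuseaccel(filt, accel(k,:), 0.1);   % accelerometer update
            fusegyro(filt,  gyro(k,:),  0.01);  % gyroscope update

            % Fuse a GPS fix whenever one is available (GPS runs slower than IMU)
            gpsIdx = find(gpsTime == imuTime(k), 1);
            if ~isempty(gpsIdx)
                fusegps(filt, gpsLLA(gpsIdx,:), 2, gpsVel(gpsIdx,:), 0.2);
            end

            fusedPos(k,:) = pose(filt);         % fused position estimate
        end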

    We want to improve the ego trajectory even more using lane detections. The camera provides the ground-truth lane number where the ego vehicle is actually located, and the vision camera detects the lane markings for the ego and adjacent lanes. We perform ego localization using the lane detections and the road map to further improve the ego trajectory.

    The HD map provides the lane information. We then overlay the fused ego trajectory on the map. Lane tracking tells us the ground-truth lane number at the current time, so we move the ego trajectory laterally to the ground-truth lane. Finally, we localize the ego trajectory using the lateral offset distance from the lane centers.
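
    The lateral-shift idea can be sketched in a few lines of MATLAB. This is an illustrative version, not the actual Aptiv/MathWorks code; the lane-center polyline and the camera-measured lateral offset are assumed inputs.

        % Illustrative lateral localization: snap each fused ego position to the
        % ground-truth lane center, then re-apply the camera-measured lateral offset.
        % egoXY        : N-by-2 fused ego positions in world coordinates (assumed)
        % laneCenterXY : M-by-2 polyline of the ground-truth lane center (assumed)
        % camOffset    : N-by-1 signed lateral offset from the lane center (assumed)
        localizedXY = zeros(size(egoXY));
        for k = 1:size(egoXY, 1)
            % Closest lane-center point and the local lane heading
            [~, i]  = min(vecnorm(laneCenterXY - egoXY(k,:), 2, 2));
            iNext   = min(i+1, size(laneCenterXY, 1));
            tangent = laneCenterXY(iNext,:) - laneCenterXY(max(i-1,1),:);
            tangent = tangent / norm(tangent);
            normalDir = [-tangent(2), tangent(1)];   % left-pointing lane normal

            % Place the ego at the lane center plus the measured lateral offset
            localizedXY(k,:) = laneCenterXY(i,:) + camOffset(k) * normalDir;
        end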

    This video shows the ego trajectory before and after localization by lane detection and road map. After GPS/IMU fusion and localization, we write the ego trajectory to a CSV file. RoadRunner Scenario imports the CSV trajectory to create a scenario. We run a simulation with RoadRunner Scenario to compare the ego trajectory before and after the localization.
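
    Writing the trajectory out is a short step; the column names and units below are placeholders, so match them to whatever your RoadRunner Scenario CSV-trajectory import expects.

        % Write the localized ego trajectory to CSV for import into RoadRunner
        % Scenario. Column names and order are placeholders (assumed), not a
        % documented RoadRunner format. timeSec and egoSpeed come from the log.
        egoTable = table(timeSec, localizedXY(:,1), localizedXY(:,2), egoSpeed, ...
            'VariableNames', {'time', 'x', 'y', 'speed'});
        writetable(egoTable, "egoTrajectory.csv");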

    We see the improvement of the localized ego trajectory.

    Seo-Wook just showed you how to create an ego trajectory. The next step is to add non-ego vehicle trajectories. We tried several different sensors, such as camera, radar, and lidar, to generate non-ego vehicle trajectories. Let me explain how we used the lidar sensor to generate non-ego trajectories using a deep learning network.

    In the first step, we used PV-RCNN on the lidar point cloud data to generate 3D bounding boxes, and we used a JIPDA tracker to create a vehicle track list. This video shows the lidar tracks compared with the windshield camera. The red bounding boxes show the detections, and the green boxes represent vehicle tracks. The lidar detection range is limited to 60 meters since we used a pretrained detector.
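
    To show the detection-to-track-list flow, here is a minimal sketch that feeds 3D bounding boxes into the standard online trackerJPDA from Sensor Fusion and Tracking Toolbox as a stand-in for the JIPDA tracker used in the talk; the box format, thresholds, and noise values are assumptions.

        % Sketch: turn 3-D lidar bounding boxes into a track list. The talk uses a
        % JIPDA tracker; the standard online trackerJPDA is shown here only as a
        % stand-in for the detection -> track-list flow.
        % bboxes{k} is assumed to be M-by-9 [x y z L W H roll pitch yaw] at time t(k).
        tracker = trackerJPDA( ...
            "FilterInitializationFcn", @initcvekf, ...   % constant-velocity EKF
            "ConfirmationThreshold",   [4 5], ...
            "DeletionThreshold",       [5 5]);

        trackLog = cell(numel(t), 1);
        for k = 1:numel(t)
            dets = {};
            for m = 1:size(bboxes{k}, 1)
                % Use the box centroid (x, y, z) as the position measurement
                dets{end+1} = objectDetection(t(k), bboxes{k}(m,1:3)', ...
                    "MeasurementNoise", eye(3)); %#ok<AGROW>
            end
            trackLog{k} = tracker(dets, t(k));           % confirmed tracks at t(k)
        end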

    We also used onboard radar sensors to generate non-ego vehicle trajectories. We run a series of signal processing steps on the radar detections, such as classification of dynamic versus static detections, guardrail detection, ghost removal, and clustering.

    This plot shows how the signal processing identifies the static versus dynamic detections, the guardrail, and the clusters. The clustered detections feed into the tracker. We use a non-causal JIPDA tracker. I call it non-causal because this tracker uses future measurements to improve the data association. It is an offline, multi-object tracker. It resolves ambiguity more efficiently than an online tracker, reduces false tracks, and generates smoother tracks.

    Sometimes an online tracker gets confused by an ambiguous event, which causes track switching. The offline tracker can handle this ambiguity more efficiently. The smoother JIPDA tracker runs an iterative data association at each step until the final step. First, it runs forward tracking using past measurements, and then it performs backward tracking using future measurements. It fuses the forward and backward predictions, then associates the fused prediction with the new detections. It repeats these steps over and over again until the final step is reached.
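
    The forward/backward fusion at the heart of this step can be illustrated with standard information-weighted fusion of two state estimates. This is textbook two-filter smoothing math, not the actual internals of the offline tracker.

        % Information-weighted fusion of a forward (causal) and backward (anti-causal)
        % estimate at one time step, as used conceptually in two-filter smoothing.
        % xF, PF : forward  state estimate and covariance (assumed given)
        % xB, PB : backward state estimate and covariance (assumed given)
        Pfused = inv(inv(PF) + inv(PB));          % fused covariance
        xFused = Pfused * (PF \ xF + PB \ xB);    % fused (smoothed) state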

    Finally, it performs RTS smoothing. This video shows the offline lidar tracks compared with the windshield camera. But sometimes the lidar detection produces false tracks; with lidar-only tracking, this type of false track is unavoidable.

    So we used a fusion algorithm with onboard sensors, such as camera and radar, to generate non-ego vehicle trajectories. Here is the comparison of lidar, radar, and fusion tracks. The blue boxes represent the lidar tracks, the red boxes represent the radar tracks, and the black boxes represent the fusion tracks. The detected lane markings are shown in red. The video shows all three tracks compared with the windshield camera.

    You can see that the radar and the fusion tracks are well matched, except for one false track from the radar. Most of our vehicles log data using onboard sensors only, such as radars, cameras, and vehicle GPS. The lidar sensor is used only in special instrumented vehicles for benchmarking purposes. So we plan to use the offline radar tracker along with camera and map fusion for scenario harvesting.

    We just generated the non-ego trajectories. Now we want to combine the ego and non-ego vehicles to generate a complete scenario. Non-ego vehicle trajectories are defined in ego coordinates, so we need to convert the non-ego trajectories to world coordinates. We use a MATLAB function called actorprops for the coordinate transformation.

    Automated Driving Toolbox provides the Scenario Builder support package for these MATLAB functions. This MATLAB function generates a target trajectory in world coordinates from a track list and the ego trajectory. The inputs are the ego trajectory in world coordinates and the target track list in ego coordinates.

    The actorprops function generates the actor trajectory in world coordinates by coordinate transformation. We write the target trajectory to a CSV file, and RoadRunner Scenario imports the CSV trajectory. This video shows how RoadRunner Scenario imports a CSV trajectory. Now, we run a simulation with RoadRunner Scenario. The blue car is the ego car. In this example, we use the fusion tracks to create the non-ego vehicle trajectories.
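
    The actorprops support-package function performs this conversion for you; purely to illustrate the underlying math, here is a hand-rolled 2-D ego-to-world transform with assumed variable layouts.

        % Generic 2-D ego-to-world transform for one target track, shown only to
        % illustrate the coordinate conversion that actorprops performs.
        % egoPos(k,:) = [x y], egoYaw(k) = heading (rad) in world coordinates (assumed)
        % tgtEgo(k,:) = [x y] target position in ego coordinates at step k (assumed)
        tgtWorld = zeros(size(tgtEgo));
        for k = 1:size(tgtEgo, 1)
            R = [cos(egoYaw(k)) -sin(egoYaw(k));
                 sin(egoYaw(k))  cos(egoYaw(k))];        % ego-to-world rotation
            tgtWorld(k,:) = (R * tgtEgo(k,:)')' + egoPos(k,:);
        end
        writematrix([timeSec, tgtWorld], "targetTrajectory.csv");  % CSV for RoadRunner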

    Seo-Wook just explained how to create scenarios. Now we would like to validate the generated scenario to see if it correctly represents the real-world scenario. Using RoadRunner Scenario, we export the scenario to the ASAM OpenSCENARIO format. We used esmini to validate the OpenSCENARIO exported from RoadRunner Scenario; esmini is an open-source OpenSCENARIO player. As you can see, the two scenarios match well.
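
    For reference, the export and replay can be scripted roughly like this, assuming RoadRunner's MATLAB API (roadrunner, openScenario, exportScenario) and a local esmini installation; the project path and file names are placeholders.

        % Export the harvested scenario to ASAM OpenSCENARIO and replay it in esmini.
        % Paths and file names are placeholders; requires a RoadRunner installation.
        rrApp = roadrunner("C:\RR\ScenarioHarvestingProject");  % open RoadRunner project
        openScenario(rrApp, "harvestedScenario.rrscenario");
        exportScenario(rrApp, "harvestedScenario.xosc", "OpenSCENARIO");

        % Replay the exported file in esmini (open-source OpenSCENARIO player)
        system("esmini --window 60 60 800 400 --osc harvestedScenario.xosc");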

    We also validated the RoadRunner scenario by comparing it side by side with the windshield camera. As you can see, the ego lane change matches, and the cut-in vehicle also matches.

    Let me recap what we presented today. The recorded sensor data, such as GPS and IMU, is used to generate the ego trajectory, and sensors such as camera and lidar are used to generate the non-ego trajectories. Then we feed these two into RoadRunner Scenario to generate the ASAM OpenSCENARIO file.

    Here is a summary of the key takeaways. We could successfully generate digital twin data from vehicle logs utilizing Automated Driving Toolbox and RoadRunner. The non-causal tracker helps us generate highly accurate digital twin reference data utilizing MATLAB toolboxes. Any standards-based simulation tool could be used with the exported data. Digital twin data generated from real-world scenarios is very useful for regression testing of ADAS/AD algorithms.

    If you want to learn about new features for scenario generation, you can check these reference examples from several MathWorks toolboxes. There is another example here that you can check out as well. Thanks for your attention.

    Thank you very much.