Developing Multimodal SLAM Applications Using 3D Simulations
Ruibai Li, Toyota Motor Corporation
Ryoma Kakimi, Toyota Motor Corporation
Simultaneous localization and mapping (SLAM) uses sensors such as lidar and cameras to accurately track the position of a mobile system, a capability that is crucial for the development of autonomous driving. Because SLAM accuracy depends heavily on the surrounding environment, the system must be tested under a wide range of conditions, and a simulation environment makes such testing practical and helps ensure robustness.
In this session, we introduce how Toyota combines real-world measurements with simulation data in its study of SLAM and sensor fusion. To build this simulation environment, we focused on three elements: the surrounding environment, the sensor models, and the motion of the mobility platform. First, we created a virtual 3D environment and used Simulink® to generate synthetic data from lidar, camera, and IMU sensor models. We then compared real-world measurements against the synthetic data to tune the parameters of the sensor models.
Next, we deployed a small mobility model equipped with the necessary sensors and collected data in a virtual 3D environment rendered by Unreal Engine. By running SLAM on these sensor outputs, we verified how closely the synthetic data reproduces the effect that real-world data has on localization.
Finally, we discuss how MATLAB® and Simulink can be used for multimodal sensor fusion and SLAM, which requires integrating sensor outputs that differ in data type, noise variance, and sampling frequency. Combining real-world sensor data with simulation is effective not only for SLAM but for developing any advanced, complex algorithm that integrates multiple sensors. We believe this exploration will accelerate the development of robots and small mobility systems.
Published: 6 Nov 2024