Visual SLAM is the process of calculating the position and orientation of a camera with respect to its surroundings while simultaneously mapping the environment. Developing a visual SLAM algorithm and evaluating its performance in varying conditions is a challenging task. One of the biggest challenges is generating the ground truth of the camera sensor, especially in outdoor environments. The use of simulation enables testing under a variety of scenarios and camera configurations while providing precise ground truth.
This example demonstrates how to use Unreal Engine® simulation to develop a visual SLAM algorithm for a UAV equipped with a stereo camera in a city block scenario. For more information about the implementation of the visual SLAM pipeline for a stereo camera, see the Stereo Visual Simultaneous Localization and Mapping example.
First, set up a scenario in the simulation environment that can be used to test the visual SLAM algorithm. Use a scene depicting a typical city block with a UAV as the vehicle under test.
Next, select a trajectory for the UAV to follow in the scene. You can follow the Select Waypoints for Unreal Engine Simulation (Automated Driving Toolbox) example to interactively select a sequence of waypoints, and then use the helperSelectSceneWaypoints function to generate a reference trajectory for the UAV. This example uses a recorded reference trajectory as shown below:
% Load reference path
data = load('uavStereoSLAMData.mat');
pos = data.pos;                 % Position
orientEuler = data.orientEuler; % Orientation
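The recorded orientation can be converted to quaternions before feeding it to the model. The sketch below is illustrative, not part of the shipped example: the 'ZYX' rotation order and the 0.1 s sample time are assumptions that must match the convention used when the trajectory was recorded.

```matlab
% Convert the recorded Euler angles to quaternions.
% 'ZYX' rotation order is an assumption; match it to the recording.
orientQuat = quaternion(orientEuler, 'euler', 'ZYX', 'frame');

% Package position and orientation as timeseries, assuming a fixed
% 0.1 s sample time between waypoints (hypothetical value).
sampleTime = 0.1;
t = (0:size(pos, 1) - 1)' * sampleTime;
posTS    = timeseries(pos, t);
orientTS = timeseries(compact(orientQuat), t);
```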
The UAVVisualSLAMIn3DSimulation Simulink® model is configured with the US City Block scene using the Simulation 3D Scene Configuration (UAV Toolbox) block. The model places a UAV in the scene using the Simulation 3D UAV Vehicle (UAV Toolbox) block. A stereo camera, consisting of two Simulation 3D Camera (UAV Toolbox) blocks, is attached to the UAV. In the dialog box of the Simulation 3D Camera (UAV Toolbox) block, use the Mounting tab to adjust the placement of the camera, and use the Parameters tab to configure the camera properties to simulate different cameras. To estimate the intrinsics of the stereo camera that you want to simulate, use the Stereo Camera Calibrator app.
% Stereo camera parameters
focalLength    = [1109, 1109]; % In pixels
principalPoint = [640, 360];   % In pixels [x, y]
imageSize      = [720, 1280];  % In pixels [mrows, ncols]
baseline       = 0.5;          % In meters

% Open the model
modelName = 'UAVVisualSLAMIn3DSimulation';
open_system(modelName);
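As a sanity check on these parameters, you can build a cameraIntrinsics object and compute the horizontal field of view they imply. This snippet is a sketch that assumes the variables defined above; it is not part of the model itself.

```matlab
% Intrinsics object for one camera of the simulated stereo pair
intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);

% Horizontal field of view implied by the focal length, in degrees.
% imageSize(2) is the image width (ncols). For these values the
% result is approximately 60 degrees.
hfov = 2 * atand(imageSize(2) / (2 * focalLength(1)));
```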
The Helper Stereo Visual SLAM System block implements the stereo visual SLAM pipeline, consisting of the following steps:
Map Initialization: The pipeline starts by initializing a map of 3-D points from a stereo image pair using the disparity map. The left image is stored as the first key frame.
Tracking: Once a map is initialized, for each new stereo pair, the pose of the camera is estimated by matching features in the left image to features in the last key frame. The estimated camera pose is refined by tracking the local map.
Local Mapping: If the current left image is identified as a key frame, new 3-D map points are computed from the disparity of the stereo pair. At this stage, bundle adjustment is used to minimize reprojection errors by adjusting the camera pose and 3-D points.
Loop Closure: Loops are detected for each key frame by comparing it against all previous key frames using the bag-of-features approach. Once a loop closure is detected, the pose graph is optimized to refine the camera poses of all the key frames.
For the implementation details of the algorithm, see the Stereo Visual Simultaneous Localization and Mapping example.
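The map initialization step above can be sketched with Computer Vision Toolbox functions. This is a simplified illustration, not the implementation inside the Helper Stereo Visual SLAM System block; leftImage and rightImage are assumed rectified stereo inputs, and focalLength, principalPoint, and baseline are the parameters defined earlier.

```matlab
% Dense disparity between the rectified grayscale stereo images
I1 = rgb2gray(leftImage);
I2 = rgb2gray(rightImage);
disparityMap = disparitySGM(I1, I2);

% Depth from disparity: Z = f * b / d
Z = focalLength(1) * baseline ./ disparityMap;

% Back-project pixels into 3-D camera coordinates
[u, v] = meshgrid(1:size(I1, 2), 1:size(I1, 1));
X = (u - principalPoint(1)) .* Z / focalLength(1);
Y = (v - principalPoint(2)) .* Z / focalLength(2);
xyzPoints = cat(3, X, Y, Z);
```

In practice the pipeline triangulates only matched feature points rather than every pixel, and discards points with unreliable disparity.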
Simulate the model and visualize the results. The Video Viewer block displays the stereo image output. The Point Cloud Player displays the reconstructed 3-D map with the estimated camera trajectory.
if ~ispc
    error("Unreal Engine Simulation is supported only on Microsoft" + ...
        char(174) + " Windows" + char(174) + ".");
end

% Run simulation
sim(modelName);
Loop edge added between keyframe: 5 and 356
Loop edge added between keyframe: 3 and 356
Loop edge added between keyframe: 4 and 358
Loop edge added between keyframe: 5 and 358
Loop edge added between keyframe: 6 and 358
Close the model.
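A minimal snippet to close the model programmatically; the second argument of 0 discards unsaved changes:

```matlab
% Close the model without saving changes
close_system(modelName, 0);
```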
Mur-Artal, Raúl, and Juan D. Tardós. "ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras." IEEE Transactions on Robotics 33, no. 5 (2017): 1255-1262.