Community Profile


Qu Cao

MathWorks

Last seen: 2 days ago | Active since 2016

I'm an Automated Driving and Mapping Engineer at MathWorks and a Mechanical Engineer by education.

DISCLAIMER: Any advice or opinions posted here are my own, and in no way reflect those of MathWorks.

Statistics

  • 3 Month Streak
  • Revival Level 3
  • Knowledgeable Level 2
  • First Answer


Content Feed


Answered
About "slam" on my camera device
The example shows how to run stereo visual SLAM using recorded data. It doesn't support "online" visual SLAM yet, meaning that y...

3 days ago | 0

Answered
Is Unreal Engine of the Automated Driving Toolbox available on Ubuntu?
As of R2021a, only Windows is supported. See Unreal Engine Simulation Environment Requirements and Limitations.

2 months ago | 0

Answered
why we use Unreal engine when there is a 3D visualization available in Automated driving toolbox?
It's not just used for visualization. With Unreal, you can configure prebuilt scenes, place and move vehicles within the scene, ...

2 months ago | 0

| accepted

Answered
About running a stereo camera calibrator
In general, you can use any type of stereo camera and calibrate its intrinsic parameters using the Stereo Camera Calibrator. You...

3 months ago | 0

Answered
How to obtain optimal path between start and goal pose using pathPlannerRRT() and plan()?
Please set the random seed at the beginning to get consistent results across different runs: https://www.mathworks.com/help/mat...

4 months ago | 0

| accepted
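
A minimal sketch of that fix, assuming a vehicleCostmap and start/goal poses are already defined (variable names here are illustrative):

```matlab
% Fix the random seed so pathPlannerRRT produces the same path on every run.
rng(0);

planner = pathPlannerRRT(costmap);            % costmap: a vehicleCostmap object
refPath = plan(planner, startPose, goalPose); % poses as [x y theta] vectors
```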

Answered
Does vehicleCostmap this type of map only support pathPlannerRRT object to plan a path? Can I use another algorithm to plan a path?
You can create an occupancyMap object from a vehicleCostmap object using the following syntax: map = occupancyMap(p,resolution)...

5 months ago | 0
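
A sketch of that conversion, assuming getCosts is used to pull the cost matrix out of the vehicleCostmap (names are illustrative):

```matlab
% Extract the cost matrix from the vehicleCostmap, then build an
% occupancyMap with a matching resolution (cells per meter).
p = getCosts(costmap);
resolution = 1/costmap.CellSize;
map = occupancyMap(p, resolution);
```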

Answered
Defining a ROI for feature extraction rather than rectangle
Unfortunately, rectangle is the only type of ROI supported. As a workaround, you can define multiple ROIs in your image to cover...

5 months ago | 1

Answered
Monocular Visual Simultaneous Localization and Mapping Error: Dot indexing is not supported for variables of this type.
The example has been updated over the past few releases. For 20b version, please check the following documentation: https://www...

5 months ago | 0

| accepted

Answered
How can I store the feature descriptors for all 3D points found in Structure from Motion?
You can use imageviewset to store the feature points associated with each view and the connections between the views. You can al...

6 months ago | 0

Answered
How to use "triangulateMultiview" to reconstruct the same world coordinate point under multiple different views?
triangulateMultiview requires both camera poses and intrinsic parameters inputs to compute the 3-D world positions corresponding...

6 months ago | 0

| accepted

Answered
imageviewset() not returning an imageviewset object
imageviewset was introduced in R2020a. If you are not able to upgrade to 20a, you can use viewSet as a workaround.

11 months ago | 0

Answered
How to ensure that the number of matches between 2 images is equal to the number given?
You can set 'MatchThreshold' to 100 and 'MaxRatio' to 1.

11 months ago | 0

| accepted
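
A sketch of those settings with matchFeatures, assuming features1 and features2 already hold extracted descriptors:

```matlab
% Loosen both rejection tests so (nearly) every feature keeps its best match:
% MatchThreshold = 100 disables the distance cutoff, and MaxRatio = 1
% disables the ratio test.
indexPairs = matchFeatures(features1, features2, ...
    'MatchThreshold', 100, 'MaxRatio', 1);
```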

Answered
Undefined function 'estimateGeomerticTransform' for input arguments of type 'SURFPoints'.
There is a typo in your code, estimateGeomerticTransform should be estimateGeometricTransform.

11 months ago | 0
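
With the spelling corrected, a typical call looks like this (the matched point sets are assumed to exist already):

```matlab
% Note the correct spelling: estimateGeometricTransform,
% not estimateGeomerticTransform.
tform = estimateGeometricTransform(matchedPoints1, matchedPoints2, 'similarity');
```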

Answered
How do I find 3D coordinates from stereo picture pair?
You can use reconstructScene function to compute the 3-D world points from a disparity map. Then, you can query the 3-D coordina...

1 year ago | 0
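
A minimal sketch, assuming a calibrated stereoParams object and a rectified image pair (variable names illustrative):

```matlab
% Compute disparity on the rectified pair, then back-project to 3-D.
disparityMap = disparitySGM(rgb2gray(I1Rect), rgb2gray(I2Rect));
points3D = reconstructScene(disparityMap, stereoParams); % H-by-W-by-3, world units

% Query the 3-D coordinates of a pixel at row r, column c:
xyz = squeeze(points3D(r, c, :));
```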

Answered
RRT navigation toolbox and automated driving toolbox also costmap from driving scenario
1) plannerRRT in Navigation Toolbox is a generic motion planner where you can define the state space. pathPlannerRRT in Automate...

1 year ago | 0

Answered
helperTrackLocalMap error with Monocular SLAM
Answer pasted from comments: Please try tuning the parameters to see if it helps improve the robustness: In helperIsKeyFrame,...

1 year ago | 0

Answered
Why is the pointsToWorld back-projection inverted?
You may need to convert the camera world pose to extrinsics using cameraPoseToExtrinsics: [worldOri,worldLoc] = estimateWorld...

1 year ago | 2

| accepted
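
Spelled out, the conversion chain sketched in that answer looks like the following (inputs assumed to exist):

```matlab
% Estimate the camera pose in world coordinates, then convert the pose to
% extrinsics before calling pointsToWorld.
[worldOri, worldLoc] = estimateWorldCameraPose(imagePoints, worldPoints, intrinsics);
[rotMat, transVec] = cameraPoseToExtrinsics(worldOri, worldLoc);
newWorldPoints = pointsToWorld(intrinsics, rotMat, transVec, imagePoints);
```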

Answered
rectangle instead of square for camera calibration
Unfortunately, this function does not support rectangle patterns.

1 year ago | 0

Answered
How to calculate fisheye intrinsics?
You can use undistortFisheyeImage function to produce a "virtual perspective" camera intrinsics, which is the format you need. S...

1 year ago | 1

| accepted
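
A short sketch of that conversion, assuming fisheye calibration has already produced a fisheyeParameters object named fisheyeParams:

```matlab
% Undistort the fisheye image and, in the same call, obtain the intrinsics
% of the equivalent "virtual perspective" camera.
[J, perspectiveIntrinsics] = undistortFisheyeImage(I, fisheyeParams.Intrinsics);
```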

Answered
How can I find a diameter of cylindrical 3D point cloud in MATLAB?
https://www.mathworks.com/help/vision/ref/pcfitcylinder.html

1 year ago | 0
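
A minimal sketch of using pcfitcylinder for that, assuming ptCloud holds the cylindrical point cloud and maxDistance is a chosen inlier tolerance:

```matlab
% Fit a cylinder model with RANSAC; maxDistance is the inlier tolerance
% in the point cloud's units.
model = pcfitcylinder(ptCloud, maxDistance);
diameter = 2 * model.Radius;   % cylinderModel stores the fitted radius
```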

Answered
Function estimateWorldCameraPose() or extrinsics() for fisheyeParameters is missing. Is it possible to change these functions for fisheye?
fisheyeIntrinsics can't be used directly. Instead, you can use undistortFisheyeImage function to produce both undistorted image ...

1 year ago | 2

| accepted

Answered
Unit of worldpoints returned by triangulate function
The unit is determined by the input stereoParams. The default world unit is mm: https://www.mathworks.com/help/vision/ref/stere...

1 year ago | 1

| accepted

Answered
How to Transform of coordinate of actors from vehicle coordinate to world coordinate or refference coordinate system (ADAS toolbox) ...?
The following documentation page explains the vehicle coordinate system and the world coordinate system: https://www.mathworks.c...

1 year ago | 0

| accepted

Answered
3D Vision Toolbox - DisparityRange - what limits the range to 128 and why?
This is due to the implementation that utilizes parallel programming on multi-core processors. The following post might be hel...

2 years ago | 0

Answered
How to find union of two SURFPOINTS variables?
surf3 = union(surf1, surf2, 'rows');

2 years ago | 0

Answered
Create a cameraParameters object with more than one Intristic Matrix?
Since R2019b, you can use an array of cameraIntrinsics objects to represent the intrinsic parameters of a bunch of different cam...

2 years ago | 0

Answered
How can i compute depth maps from disparities maps of stereo pairs captured with a calibrated camera ?
You can use disparityBM or disparitySGM to compute a disparity map, then reconstructScene to convert it to a depth map.

2 years ago | 0

Answered
Camera looking down the negative Z-Axis
Hi Alex, In Computer Vision Toolbox we use the y-down/z-forward camera coordinate systems: https://www.mathworks.com/help/visi...

2 years ago | 0

Answered
Camera Pose Estimation from Non-calibrated camera images
Unfortunately, currently all the functions used in the Structure-from-Motion workflow require the camera to be calibrated. You c...

2 years ago | 0

Answered
How to limit ORB features?
Hi Helia, the selectStrongest function picks the points based on the Metric values: https://www.mathworks.com/help/vision/ref/o...

2 years ago | 1

| accepted
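
A sketch of limiting the feature count that way, assuming a grayscale image I:

```matlab
% Detect ORB features, then keep only the N points with the highest
% Metric (corner response) values.
points = detectORBFeatures(I);
strongest = selectStrongest(points, 500);
```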
