
Estimate camera projection matrix from world-to-image point correspondences

`camMatrix = estimateCameraMatrix(imagePoints,worldPoints)` returns the camera projection matrix determined from known world points and their corresponding image projections by using the direct linear transformation (DLT) approach.

`[camMatrix,reprojectionErrors] = estimateCameraMatrix(imagePoints,worldPoints)` also returns the reprojection errors that quantify the accuracy of the projected image coordinates.

You can use the `estimateCameraMatrix` function to estimate a camera projection matrix:

- If the world-to-image point correspondences are known, but the camera intrinsic and extrinsic parameters are not. (When those parameters are known, you can use the `cameraMatrix` function instead.)
- To compute 2-D image points from 3-D world points, refer to the equations in `camMatrix`.
- For use with the `findNearestNeighbors` object function of the `pointCloud` object. The use of a camera projection matrix speeds up the nearest neighbors search in a point cloud generated by an RGB-D sensor, such as Microsoft® Kinect®.
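The projection in the second use case can be sketched numerically. The following is a NumPy sketch (not MATLAB code), assuming the premultiply convention of a 4-by-3 camera projection matrix, where a homogeneous world point row vector `[X Y Z 1]` times the matrix gives a scaled homogeneous image point; the matrix values below are made up for illustration:

```python
import numpy as np

def project_points(world_points, cam_matrix):
    """Project N-by-3 world points to N-by-2 image points with a 4-by-3 matrix.

    Convention assumed here: lambda * [x y 1] = [X Y Z 1] @ cam_matrix.
    """
    n = world_points.shape[0]
    homog = np.hstack([world_points, np.ones((n, 1))])  # N x 4 homogeneous points
    proj = homog @ cam_matrix                           # N x 3, scaled by lambda
    return proj[:, :2] / proj[:, 2:3]                   # divide out the scale lambda

# Hypothetical projection matrix, chosen only to illustrate the convention
cam_matrix = np.array([[800.0,   0.0, 0.0],
                       [  0.0, 800.0, 0.0],
                       [320.0, 240.0, 1.0],
                       [  0.0,   0.0, 1.0]])
pts = project_points(np.array([[0.0, 0.0, 2.0]]), cam_matrix)
```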

Given the world points **X** and the corresponding image points **x**, both expressed as row vectors in homogeneous coordinates, the camera projection matrix *C* relates the two by

λ**x** = **X***C*,

where λ is a scale factor. The equation is solved using the direct linear transformation (DLT) approach [1]. This approach formulates a homogeneous linear system of equations, and the solution is obtained through generalized eigenvalue decomposition.
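A minimal NumPy sketch of this step follows (not the MATLAB implementation). Each correspondence contributes two linear equations in the twelve entries of *C*; for illustration the null-space direction is recovered with an SVD, an equivalent least-squares alternative to the eigenvalue formulation, and the normalization refinement is omitted here:

```python
import numpy as np

def dlt_camera_matrix(image_points, world_points):
    """Estimate a 4x3 matrix C solving lambda*x = X*C (row-vector convention).

    image_points: N x 2, world_points: N x 3, with N >= 6 non-coplanar points.
    Sketch only: uses an SVD on the stacked homogeneous system.
    """
    rows = []
    for (u, v), Xw in zip(image_points, world_points):
        X = np.append(Xw, 1.0)                       # homogeneous world point
        rows.append(np.hstack([-X, np.zeros(4), u * X]))
        rows.append(np.hstack([np.zeros(4), -X, v * X]))
    _, _, vt = np.linalg.svd(np.array(rows))         # 2N x 12 system
    p = vt[-1]                                       # null-space direction
    return p.reshape(3, 4).T                         # columns c1, c2, c3

# Check on synthetic data: points projected with a known (made-up) matrix
true_C = np.array([[700.0,   2.0, 0.0],
                   [  5.0, 700.0, 0.0],
                   [310.0, 250.0, 1.0],
                   [ 10.0, -20.0, 1.0]])
world = np.array([[0.1, 0.2, 1.5], [1.0, -0.5, 2.0], [-0.7, 0.3, 1.2],
                  [0.4, 0.9, 2.5], [-0.2, -0.8, 1.8], [0.6, 0.1, 3.0]])
proj = np.hstack([world, np.ones((6, 1))]) @ true_C
image = proj[:, :2] / proj[:, 2:3]
C_est = dlt_camera_matrix(image, world)
C_est *= true_C[3, 2] / C_est[3, 2]                  # fix the arbitrary scale
```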

Because the image point coordinates are given in pixel values, the approach for computing the camera projection matrix is sensitive to numerical errors. To avoid these errors, the input image point coordinates are normalized so that their centroid is at the origin and their root mean squared distance from the origin is $$\sqrt{2}$$. These steps summarize the process for estimating the camera projection matrix:

1. Normalize the input image point coordinates with the transform *T*.
2. Estimate the camera projection matrix *C*^{N} from the normalized input image points.
3. Compute the denormalized camera projection matrix *C* as *C*^{N}*T*^{-1}.
4. Compute the reprojected image point coordinates **x**^{E} as **X***C*.
5. Compute the reprojection errors as *reprojectionErrors* = |**x** − **x**^{E}|.
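The steps above can be sketched end to end. This is an illustrative NumPy version (not the MATLAB implementation), assuming the row-vector convention λ**x** = **X***C* and, for simplicity, an SVD-based solve of the homogeneous system in step 2; the synthetic matrix and points are made up for the check at the end:

```python
import numpy as np

def normalization_transform(image_points):
    """3x3 transform T (row-vector form): centroid to origin, RMS dist sqrt(2)."""
    centroid = image_points.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((image_points - centroid) ** 2, axis=1)))
    s = np.sqrt(2.0) / rms
    return np.array([[s, 0.0, 0.0],
                     [0.0, s, 0.0],
                     [-s * centroid[0], -s * centroid[1], 1.0]])

def estimate_camera_matrix(image_points, world_points):
    """Normalized DLT sketch following steps 1-5; returns (C, reprojection errors)."""
    n = image_points.shape[0]
    T = normalization_transform(image_points)              # step 1
    x_norm = np.hstack([image_points, np.ones((n, 1))]) @ T
    X_h = np.hstack([world_points, np.ones((n, 1))])
    rows = []
    for (u, v, _), X in zip(x_norm, X_h):                  # step 2: build system
        rows.append(np.hstack([-X, np.zeros(4), u * X]))
        rows.append(np.hstack([np.zeros(4), -X, v * X]))
    _, _, vt = np.linalg.svd(np.array(rows))
    C_norm = vt[-1].reshape(3, 4).T                        # C^N from null space
    C = C_norm @ np.linalg.inv(T)                          # step 3: C = C^N T^-1
    reproj = X_h @ C                                       # step 4: x^E = X C
    x_e = reproj[:, :2] / reproj[:, 2:3]
    errors = np.linalg.norm(image_points - x_e, axis=1)    # step 5: |x - x^E|
    return C, errors

# Exact synthetic correspondences should reproject with near-zero error
true_C = np.array([[700.0,   2.0, 0.0],
                   [  5.0, 700.0, 0.0],
                   [310.0, 250.0, 1.0],
                   [ 10.0, -20.0, 1.0]])
world = np.array([[0.1, 0.2, 1.5], [1.0, -0.5, 2.0], [-0.7, 0.3, 1.2],
                  [0.4, 0.9, 2.5], [-0.2, -0.8, 1.8], [0.6, 0.1, 3.0]])
proj = np.hstack([world, np.ones((6, 1))]) @ true_C
image = proj[:, :2] / proj[:, 2:3]
C, errors = estimate_camera_matrix(image, world)
```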

[1] Hartley, R., and A. Zisserman. *Multiple View Geometry in Computer Vision*. Cambridge: Cambridge University Press, 2000.

`stereoParameters` | `cameraCalibrationErrors` | `intrinsicsEstimationErrors` | `extrinsicsEstimationErrors` | `cameraIntrinsics`

`estimateCameraParameters` | `showReprojectionErrors` | `showExtrinsics` | `undistortImage` | `detectCheckerboardPoints` | `generateCheckerboardPoints` | `cameraMatrix` | `estimateWorldCameraPose` | `estimateEssentialMatrix` | `estimateFundamentalMatrix` | `findNearestNeighbors`

- Evaluating the Accuracy of Single Camera Calibration
- Structure From Motion From Two Views
- Structure From Motion From Multiple Views
- Depth Estimation From Stereo Video
- Code Generation for Depth Estimation From Stereo Video
- What Is Camera Calibration?
- Using the Single Camera Calibrator App
- Using the Stereo Camera Calibrator App