
What Is Robot Hand-Eye Calibration?

Robot hand-eye calibration is the process of determining the relative position and orientation between a robot arm (the "hand") and a camera (the "eye"). This spatial transformation is crucial for visual servoing, in which real-time visual feedback guides the motion of the robot, enabling it to interact accurately with objects in its environment. The robot continuously adjusts its movements based on the visual data it receives, ensuring precise execution of tasks such as picking and placing, sorting, and stacking.

Once the hand-eye calibration is complete, the robot can accurately translate the pixel location of an object into its own coordinate system. In the scenario portrayed in the image, the robot uses the transformation obtained from hand-eye calibration to estimate the position of a cube and move a gripper to pick up the cube accurately.

Figure: Side-by-side photos of a robot arm and gripper picking a cube from the top of a stack of books (the topmost book is Robotics, Vision and Control) and placing the cube on the table.
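To make the pixel-to-robot mapping concrete, the following MATLAB sketch back-projects a detected pixel into 3-D using the camera intrinsics and a known depth (for example, from a depth camera), and then chains the hand-eye and gripper transformations to obtain the point in the robot base frame. All numeric values, and names such as cameraToGripper, are illustrative placeholders rather than outputs of a specific toolbox function.

    % Camera intrinsics (placeholder values): focal lengths and principal point.
    fx = 800; fy = 800; cx = 320; cy = 240;

    % Pixel location of the detected object and its depth in the camera frame.
    u = 350; v = 260; Z = 0.5;   % depth in meters

    % Back-project the pixel to a 3-D point in the camera coordinate system.
    Pcam = [(u - cx)*Z/fx; (v - cy)*Z/fy; Z; 1];

    % Rigid transformations as 4-by-4 homogeneous matrices:
    % cameraToGripper comes from hand-eye calibration, and gripperToBase
    % comes from the robot controller at the moment of image capture.
    cameraToGripper = [eye(3), [0; 0.05; 0.10]; 0 0 0 1];   % placeholder
    gripperToBase   = [eye(3), [0.40; 0; 0.30]; 0 0 0 1];   % placeholder

    % Chain the transformations to express the object in the base frame.
    Pbase = gripperToBase * cameraToGripper * Pcam;
    disp(Pbase(1:3))   % 3-D pick point in robot base coordinates

In the stationary-camera configuration, the chain is shorter: Pbase = cameraToBase * Pcam, where cameraToBase is the fixed camera pose estimated by the eye-to-hand calibration.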

Robot Camera Configurations

To effectively use a camera with a robot arm in a vision-guided robotic system, you must know how the camera is configured within the scene. You can configure the camera in two distinct ways. In the moving-camera configuration, the camera moves with the robot, providing a dynamic view of the environment. Conversely, in the stationary-camera configuration, the camera remains fixed in the environment, with the robot arm operating within the field of view of the camera.

  • Moving-camera configuration — The camera is mounted directly on the robot and moves with it. Also referred to as eye-in-hand configuration. This configuration is most common in tasks like object manipulation, assembly, and pick-and-place operations.

  • Stationary-camera configuration — The camera is positioned at a fixed location in the environment of the robot, separate from the arm. Also referred to as eye-to-hand configuration. This configuration is most common in tasks like robotic surgery, autonomous driving, and surveillance.

Figure: Moving-camera configuration (left) and stationary-camera configuration (right).

Calibration Procedure

To ensure precise robotic operations involving vision systems, you must perform hand-eye calibration. This process involves two key steps: first, calibrating the intrinsic parameters of the camera to correct distortions and accurately measure objects, and second, establishing the spatial relationship between the camera and the robot through hand-eye calibration. These steps are critical for integrating the camera into the robotic system, whether the camera is stationary or moving with the gripper.

Perform Intrinsic Camera Calibration

Before considering how a moving or stationary camera interacts with the robotic system, you must calibrate the camera itself. Specifically, this means you must estimate the intrinsic parameters of the camera. You can use these parameters to correct for lens distortion, measure the size of an object in world units, or determine the location of the camera in a scene. Once you calibrate the camera, you can place it in the scene, and then determine its spatial relationship to the robot gripper.

Intrinsic camera calibration consists of these steps:

  1. Prepare the camera, the calibration pattern, and the images.

  2. Add the images to the calibrator.

  3. Calibrate.

  4. Evaluate the calibration accuracy.

  5. Improve the calibration, if necessary.

  6. Export the camera parameters.

Figure: A camera capturing checkerboard calibration patterns posed at various angles, all within the field of view of the camera.

The Camera Calibrator app incorporates a suite of functions to implement this procedure. For more information, see Using the Single Camera Calibrator App.
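If you prefer a programmatic workflow, you can perform the same steps with Computer Vision Toolbox functions, as in this sketch. It assumes a folder of checkerboard images and a known square size; the folder name and square size are placeholders.

    % Detect checkerboard corners in a folder of calibration images.
    imds = imageDatastore("calibrationImages");   % placeholder folder
    [imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imds.Files);

    % Generate the corresponding world coordinates of the corners.
    squareSize = 25;   % square side length in millimeters (placeholder)
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);

    % Estimate the camera parameters, including the intrinsics.
    cameraParams = estimateCameraParameters(imagePoints, worldPoints);

    % Evaluate the calibration by inspecting the reprojection errors, and
    % use the estimated parameters, for example, to correct lens distortion.
    showReprojectionErrors(cameraParams)
    I = imread(imds.Files{find(imagesUsed, 1)});
    J = undistortImage(I, cameraParams);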

Perform Hand-Eye Calibration

To perform robot hand-eye calibration, place a calibration pattern in the robot workspace. These illustrations demonstrate the placement of the calibration pattern in each of the robot-camera configurations.

Figure: Placement of the calibration pattern in the moving-camera configuration (left) and the stationary-camera configuration (right).

In the moving-camera (eye-in-hand) configuration, the camera moves with the gripper that is connected to the robot arm. The calibration pattern does not move and lies on a surface within the field of view of the camera.

For information on how to collect data to calibrate a moving-camera configuration, see the Estimate Pose of Moving Camera Mounted on a Robot example.

In the stationary-camera (eye-to-hand) configuration, the calibration pattern is mounted to and moves with the gripper connected to the robot arm. The camera is fixed to a surface and does not move. All poses of the calibration pattern should be in the field of view of the camera.

For information on how to collect data to calibrate a stationary-camera configuration, see the Estimate Pose of Fixed Camera Relative to Robot Base (Robotics System Toolbox) example.

Hand-eye calibration includes these steps:

  1. Setup and data collection — Place a calibration pattern with known geometry, such as a checkerboard pattern, in different positions and orientations within the robot workspace. The robot moves its gripper to different poses, enabling the camera to capture images of the calibration pattern from various angles. Simultaneously, the robot records the pose of its gripper in its own coordinate system.

  2. Camera extrinsic parameters estimation — From each captured image, estimate the camera extrinsic parameters, that is, the pose (position and orientation) of the calibration object with respect to the camera coordinate system.

  3. Robot gripper to base transformation — From each recorded gripper pose, compute the rigid transformation matrix that maps points from the robot gripper to the robot base coordinate system.

  4. Optimization — Use an optimization algorithm to solve for the transformation matrix that best fits the collected calibration data. This step yields the pose of the camera relative to the robot. The method originates from a seminal paper by Tsai and Lenz [1], whose approach is based on solving the equation AX = XB, where:

    • A is the transformation of the robot gripper from one position to the next.

    • B is the transformation of the camera or the calibration target (whichever is moving with the robot gripper) from one position to the next.

    • X is the unknown transformation.

    The approach uses the least-squares method to solve the equation AX = XB. To obtain the intermediate transformations A and B, you must know the robot gripper pose and the camera pose at two different locations. In general, you can use the hand-eye calibration method to calibrate any pair of sensors that can establish the relationship AX = XB. The sketch after this list outlines the computation.
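The following MATLAB sketch outlines the Tsai and Lenz linear solution using only basic matrix operations. The function name tsaiLenzHandEye and the input layout are hypothetical; the sketch assumes each relative rotation has an angle strictly between 0 and pi, and that you supply at least two motion pairs with nonparallel rotation axes so that the least-squares problems are well posed.

    function X = tsaiLenzHandEye(A, B)
    % Solve AX = XB for the hand-eye transformation X (Tsai-Lenz method).
    % A, B - 4-by-4-by-N relative motions of the gripper and of the camera.
    % X    - estimated 4-by-4 homogeneous transformation.
    N = size(A, 3);
    M = zeros(3*N, 3);  d = zeros(3*N, 1);
    for i = 1:N
        Pg = rotToModRodrigues(A(1:3, 1:3, i));   % gripper rotation
        Pc = rotToModRodrigues(B(1:3, 1:3, i));   % camera rotation
        M(3*i-2:3*i, :) = skewMat(Pg + Pc);
        d(3*i-2:3*i)    = Pc - Pg;
    end
    Pp  = M \ d;                          % least-squares estimate
    Pcg = 2*Pp / sqrt(1 + norm(Pp)^2);    % modified Rodrigues vector of X
    n2  = norm(Pcg)^2;
    R   = (1 - n2/2)*eye(3) + 0.5*(Pcg*Pcg' + sqrt(4 - n2)*skewMat(Pcg));

    % With the rotation known, (Ra_i - I)*t = R*tb_i - ta_i gives the translation.
    C = zeros(3*N, 3);  e = zeros(3*N, 1);
    for i = 1:N
        C(3*i-2:3*i, :) = A(1:3, 1:3, i) - eye(3);
        e(3*i-2:3*i)    = R*B(1:3, 4, i) - A(1:3, 4, i);
    end
    t = C \ e;
    X = [R, t; 0 0 0 1];
    end

    function P = rotToModRodrigues(R)
    % Modified Rodrigues vector P = 2*sin(theta/2)*axis, for theta in (0, pi).
    theta = acos((trace(R) - 1)/2);
    ax = [R(3,2)-R(2,3); R(1,3)-R(3,1); R(2,1)-R(1,2)] / (2*sin(theta));
    P = 2*sin(theta/2)*ax;
    end

    function S = skewMat(v)
    % Skew-symmetric matrix such that skewMat(v)*w equals cross(v, w).
    S = [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];
    end

At least two relative motions with nonparallel rotation axes are required for a unique solution; additional, well-distributed motions improve robustness to noise in the measured poses.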

Best Practices and Troubleshooting

To achieve precise and reliable camera calibration for robotic applications, follow these best practices.

  • Maintain constant camera focus — Ensure that the focus of the camera remains unchanged throughout the calibration process to achieve consistent results.

  • Accurate intrinsic camera calibration — Ensure that the images you use were captured specifically for single-camera calibration, following the guidelines outlined in Prepare Camera and Capture Images for Camera Calibration. Using images captured for multi-sensor calibration can lead to inaccurate camera intrinsic parameters.

  • Ensure diverse calibration data — Gather comprehensive calibration data that includes a diverse range of robot poses. This data should cover various camera angles and joint positions to ensure thorough calibration.

  • Evaluate calibration accuracy — To evaluate the calibration accuracy, visualize the locations of the robot base, gripper, camera, and calibration board within the same 3-D space, as shown in the sketch after this list.
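A quick way to run this visual check is with the plotTransforms function from Robotics System Toolbox, sketched below. The poses and labels are placeholders standing in for your actual calibration results, which you would convert to translations and quaternions beforehand.

    % Poses to inspect, as [x y z] translations and [w x y z] quaternions.
    % Placeholder values; substitute the results of your calibration.
    trans = [0    0    0;      % robot base
             0.4  0    0.3;    % gripper, reported by the robot controller
             0.4  0.05 0.4;    % camera, from the hand-eye transformation
             0.6  0.1  0];     % calibration board, from camera extrinsics
    quats = repmat([1 0 0 0], 4, 1);   % identity orientations, for brevity

    plotTransforms(trans, quats, "FrameSize", 0.1)
    hold on
    labels = ["base" "gripper" "camera" "board"];
    text(trans(:,1), trans(:,2), trans(:,3) + 0.05, labels)
    hold off
    axis equal

If the rendered frames match the physical setup, for example, the camera frame sits on the gripper in an eye-in-hand system, the calibration is plausible; a frame that appears far from its physical location points to bad input poses or a failed optimization.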

References

[1] Tsai, R. Y., and R. K. Lenz. "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration." IEEE Transactions on Robotics and Automation 5, no. 3 (June 1989): 345–358. https://doi.org/10.1109/70.34770
