Feature Matching and Tracking | Making Vehicles and Robots See: Getting Started with Perception for Students - MATLAB


      From the series: Making Vehicles and Robots See: Getting Started with Perception for Students

Learn how to use feature-based tracking to identify predefined objects in a video. Start by exploring what image features are, then learn how to locate an object of interest in a video frame using feature detection, description, and matching. See how the Registration Estimator app helps you select detectors and descriptors depending on the data. Extend the feature matching technique to the remaining frames of the input video, or track the object using point tracking, which is often a more reliable technique for object tracking. Follow the exercise at the end of the video on recovering a rotated video with the feature matching framework in Simulink®.

      Published: 4 Dec 2021

When you design an autonomous system, making the system understand what it has seen and how objects are moving is crucial so that it can take reasonable actions. For instance, a simple task could be designing a robot with a camera and making the robot track an object using the video captured by the camera. For identifying the object by color, you can use color-based image segmentation, covered in the last video. However, if the object has a visual texture, a technique based on image features can help.

Today, you will learn feature-based tracking of a predefined object in a video. First, you will explore image features and then learn how to locate an object with a region of interest. We will do this using feature detection, description, and matching. Finally, you will see how to track the object represented by feature points in the ROI.

Before addressing feature matching, let's understand what image features are. Image features are distinct patterns found in an image, such as blobs, corners, and uniform-intensity regions. They must differ significantly from their immediate surroundings in texture, color, or intensity to help uniquely identify parts of the image.

      To begin with, you will first read a video in MATLAB. A video is a set of image frames captured one after the other. You will work initially with just one frame from the video.

Let's view frame number 250 using a VideoReader object. To track the object, you first need to locate it by, for instance, manually marking it with a region of interest, also called an ROI. Sometimes, you may be lucky enough to have a pre-captured image of the object. Here, you can use that image as a reference, find its features, and then identify the object with feature matching.
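The steps above can be sketched in MATLAB as follows. The filenames are assumptions for illustration only; the video does not name them:

```matlab
% Read a video and grab one frame (filename is hypothetical)
vidReader = VideoReader('logoVideo.mp4');
frame = read(vidReader, 250);   % frame number 250
imshow(frame)

% Optional pre-captured reference image of the object (hypothetical filename)
refImg = imread('logoReference.png');
```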

Let's start by detecting blob-based features. When you plot the output, you see the locations of the detected feature points, each with an associated circle indicating the scale of the feature.
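A minimal sketch of blob detection and plotting, using SURF as one common blob detector in MATLAB (the video does not name the specific detector, so SURF is an assumption):

```matlab
% Detect blob-like features; SURF is one common blob detector
grayFrame = im2gray(frame);              % assumes 'frame' read from the video
points = detectSURFFeatures(grayFrame);

% Plot the strongest points; each circle indicates the feature's scale
imshow(frame); hold on;
plot(points.selectStrongest(50));
hold off;
```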

The next task is to distinguish each feature from the rest of the image. You can use the extractFeatures function to get a representation of each point and its immediately surrounding pixels, known as a descriptor. Once you have the descriptors, the next task is to find the ROI that contains the MathWorks logo. To do this, you need a reference image to match features against and locate the logo. So you use the same detector and descriptor on the reference image, match the two sets of features with the matchFeatures function, and then display them.
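Detection, description, and matching against the reference image can be sketched as below. SURF is again an assumption; the variable names are hypothetical:

```matlab
% Detect and describe features in both the reference image and the frame
grayRef   = im2gray(refImg);
grayFrame = im2gray(frame);
refPoints   = detectSURFFeatures(grayRef);
framePoints = detectSURFFeatures(grayFrame);
[refFeatures,   refValidPts]   = extractFeatures(grayRef,   refPoints);
[frameFeatures, frameValidPts] = extractFeatures(grayFrame, framePoints);

% Match descriptors and visualize the matched pairs side by side
indexPairs   = matchFeatures(refFeatures, frameFeatures);
matchedRef   = refValidPts(indexPairs(:,1));
matchedFrame = frameValidPts(indexPairs(:,2));
showMatchedFeatures(refImg, frame, matchedRef, matchedFrame, 'montage');
```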

You see here that there are some wrongly matched features, called outliers. To eliminate them, you use the estimateGeometricTransform2D function, which specifically provides the indices of the correctly matched feature points, called inliers. To get the ROI, the returned transformation gives you the necessary information. You then loop over the video using readFrame to get the next frame; detect, extract, and match the features; and run the code.
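A sketch of the outlier-rejection step and the per-frame loop, assuming the matched points from the previous step; the corner-mapping code is an assumption about how the ROI is drawn, since the video does not show it:

```matlab
% Reject outliers: estimate a transform from the reference to the frame
[tform, inlierIdx] = estimateGeometricTransform2D( ...
    matchedRef, matchedFrame, 'similarity');
inlierFramePts = matchedFrame(inlierIdx);

% Map the reference image's corners into the frame to draw the ROI
refCorners = [1 1; size(refImg,2) 1; ...
              size(refImg,2) size(refImg,1); 1 size(refImg,1)];
frameCorners = transformPointsForward(tform, refCorners);

% Repeat the detect/extract/match steps for each remaining frame
while hasFrame(vidReader)
    frame = readFrame(vidReader);
    % ... detect, extract, match, and estimate the transform again ...
end
```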

You see the location where the logo is detected. There are various types of detectors and descriptors; some are scale independent, while others are not. Selecting a scale-independent method is necessary to detect features at various zoom levels. The Registration Estimator app in MATLAB provides an interactive way to test different methods, tune the parameters of a selected method, and eventually generate MATLAB code.

After successfully finding the inliers, the next step is to track the object through the rest of the video. You can extend the same tactic of matching features on every frame. However, you can see that the inliers change across frames, and the ROI is sometimes inaccurate.

A more stable way is to use a point tracker to track the object represented by the feature points. You initialize the point tracker with the inliers and then run it over the rest of the frames, where the point tracker automatically finds the new locations of the inliers. The point tracker also works when the logo appears in zoomed-in or zoomed-out frames.
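A minimal point-tracking sketch using vision.PointTracker, assuming the inlier points from the feature-matching step; the MaxBidirectionalError value is an illustrative choice, not from the video:

```matlab
% Initialize a point tracker with the inlier locations from the first frame
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, inlierFramePts.Location, frame);

% Track the points through the remaining frames
while hasFrame(vidReader)
    frame = readFrame(vidReader);
    [trackedPts, validity] = tracker(frame);
    visiblePts = trackedPts(validity, :);   % points still tracked reliably
end
```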

In case you do not have a pre-captured image of the object of interest, or the reference image is of poor quality, you can mark the region of interest manually in a video frame. You then initialize the point tracker with the features detected in the ROI and run the tracker over the rest of the frames. A few points to note here: feature-based tracking is sensitive to lighting variation, out-of-plane rotation, and articulated motion. Also, to track the object over a long period of time, you need to periodically reacquire points to reset the tracker's initial points.
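The manual-ROI variant can be sketched as below. The choice of drawrectangle for ROI selection and detectMinEigenFeatures as the detector are assumptions, since the video does not show this code:

```matlab
% Mark the ROI manually on one frame, then detect features inside it
figure; imshow(frame);
roi = drawrectangle;                 % interactive ROI selection, [x y w h]
gray = im2gray(frame);
points = detectMinEigenFeatures(gray, 'ROI', round(roi.Position));

% Initialize the tracker from the ROI features; run it on later frames
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, points.Location, frame);
```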

Moreover, the tracker may lose the object if the object of interest is obstructed from the camera view. In such situations, you can consider using Kalman filters to help predict object locations. You can learn more about Kalman filters in the description. Before we close, we have a small exercise. The exercise is to recover a rotated video using the feature detection, description, and matching framework in Simulink; you need to add the code missing in the blocks.

A short summary: you learned how to perform feature-based object tracking for a video. We discussed that the choice of detector and descriptor depends on the type of data, and that the Registration Estimator app can help test different methods. If you have any questions related to the video or the exercise, please reach out to us at roboticsarena@mathworks.com. Thank you again for watching this video, and see you in the next one.
