How to use vision.PointTracker with ImageLabeler?

craq
craq on 3 Jul 2018
Commented: craq on 9 Jul 2018
There is an excellent tutorial on how to use the point tracker with the Ground Truth Labeler app. https://au.mathworks.com/videos/ground-truth-labeler-app-1529300803691.html
Unfortunately, I don't have access to the Automated Driving Toolbox, but I do have access to the image processing toolbox. The image processing toolbox includes both the ImageLabeler app and a point tracker algorithm. So I think it should be possible to implement the same functionality. I have tried comparing the vision.PointTracker class to the template that comes up when I click "create new algorithm" in the ImageLabeler app, but I am having trouble understanding how to make them work together. If there is a tutorial I have overlooked, please point me in the right direction. If not, a brief explanation would be much appreciated.
  2 Comments
craq
craq on 4 Jul 2018
Yes, I want to automate that labelling and use those labels as ground truth to train a machine learning algorithm.
I have several series of images which are effectively frames from a movie. It would help me a lot if I could use a point tracker to localise an ROI in one image based on its position in the previous image. Is that what the vision.PointTracker class does?


Answers (1)

Florian Morsch
Florian Morsch on 5 Jul 2018
Edited: Florian Morsch on 5 Jul 2018
The vision.PointTracker does, as the name implies, track points (using the KLT algorithm).
But to track those points you first have to find them, which is typically done with an object detector. What you are aiming for is to find a specific object in each frame and label it. If it's something simple like people or faces, you could try an already trained cascade object detector (MATLAB has some pre-trained variants of it).
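For example, a minimal sketch using the default pre-trained frontal-face model (the file name is just a placeholder for one of your frames):

% Detect faces with the pre-trained cascade detector and draw the boxes.
detector = vision.CascadeObjectDetector();   % default model detects frontal faces
I = imread('frame1.png');                    % placeholder file name
bboxes = detector(I);                        % each row is [x y width height]
if ~isempty(bboxes)
    imshow(insertShape(I, 'Rectangle', bboxes, 'LineWidth', 3))
end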
If you want to detect a more specific object, you are better off if you either:
a.) write a detection algorithm of your own (for example, if you only want to detect white objects you can search for white pixels, or if you want to detect a cube you can look for its edges with edge detection; see the sketch after this list), or
b.) label the images yourself. Depending on how many pictures you have for training, it might be faster to label them yourself instead of writing the algorithm and then checking each picture to see whether it is labeled correctly.
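A rough sketch of option a.) for the white-object case, assuming near-white objects on a darker background (the threshold of 200 and the file name are arbitrary placeholders you would tune for your data):

% Segment bright pixels and return one bounding box per blob.
I = imread('frame1.png');                    % placeholder file name
mask = rgb2gray(I) > 200;                    % keep only near-white pixels
mask = bwareaopen(mask, 50);                 % drop tiny specks of noise
stats = regionprops(mask, 'BoundingBox');
bboxes = vertcat(stats.BoundingBox);         % [x y width height] per row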
The vision.PointTracker itself can't detect anything. It needs points you give it, which it can then track.
Now if you are able to find your first object and get enough points, the point tracker can follow those points over multiple images. So basically yes, you can use a point tracker to follow points over multiple images. But you have to make sure that you give it enough points to follow (I'd recommend 10 or more), and after you have processed all images you should still check whether the labeling is done correctly.
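To make that concrete, here is a hedged end-to-end sketch: corner points are detected inside a known bounding box in the first frame and then propagated through the rest of the sequence with the KLT tracker. The file names, the initial bounding box and the threshold of 10 valid points are placeholder assumptions for your own data.

% Initialise a KLT tracker from the first frame and follow the region.
files = {'frame1.png', 'frame2.png', 'frame3.png'};   % placeholder image sequence
I = imread(files{1});
bbox = [100 100 80 60];                               % [x y w h] of the object in frame 1

% Find trackable corner points inside the initial region (aim for well over 10).
points = detectMinEigenFeatures(rgb2gray(I), 'ROI', bbox);

tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, points.Location, I);

for k = 2:numel(files)
    I = imread(files{k});
    [trackedPoints, validity] = tracker(I);           % KLT step for this frame
    visiblePoints = trackedPoints(validity, :);

    if size(visiblePoints, 1) >= 10
        % Update the region from the tracked points, e.g. their bounding box.
        minXY = min(visiblePoints, [], 1);
        maxXY = max(visiblePoints, [], 1);
        bbox = [minXY, maxXY - minXY];
    else
        warning('Lost track in frame %d - re-detect and re-initialize here.', k);
        break
    end
end

The per-frame bbox values are what you would write out as labels, and as noted above you should still review them manually afterwards.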
  3 Comments
craq
craq on 9 Jul 2018
Thanks for those ideas. I think I will be able to adapt the point tracker or the feature matching to help label my images. It's a shame that it doesn't work with the ImageLabeler app, but at least you've given me an idea of how to get the result I'm looking for.
By the way, conventional algorithms to detect these objects have a low success rate under certain conditions (lighting, background, etc.). I am aware that deep learning networks generalise quite well, but my understanding is that they don't extrapolate well outside their training data. I assume that if I don't feed the network enough examples of the difficult conditions, it will only know about the "easy" examples.

