HDL black-and-white image blob analysis

Hi, I want to perform blob analysis on this image using HDL Coder or Vision HDL Toolbox. How can I do this?

Accepted Answer

Brian Ogilvie
Brian Ogilvie on 15 Jan 2019
Hello Ismail,
Traditional blob analysis is not efficient on FPGAs due to the multipass nature of the algorithm. For real-time video, I would instead recommend an approach similar to the one in the example "Pothole Detection" in Vision HDL Toolbox.
If you look at that model, you will see it uses several preprocessing steps: bilateral filtering, Sobel edge detection, masking, and morphological closing before trying to find the objects. Then, in the block called Centroid31, a 31x31 region of the binary image is used and the centroid and total active area are calculated. The following block, DetectAndHold, uses the area metric to decide whether the 31x31 region is both above the user's threshold and above the previously found maximum area. In this way, the example finds the single largest area above the user's threshold. The X,Y location from the centroid is then sent to another block that draws a marker and text on the output video.
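To make the detect-and-hold idea concrete, here is a minimal software sketch in Python. It is not the actual Simulink implementation; the window size, threshold value, and function names are assumptions for illustration, and the nested scan stands in for the streaming pixel pipeline:

```python
import numpy as np

def centroid_and_area(window):
    """Centroid (relative to the window center) and active area of a binary window."""
    ys, xs = np.nonzero(window)
    area = len(xs)
    if area == 0:
        return 0.0, 0.0, 0
    half = window.shape[0] // 2
    # Offsets from the window center, analogous to the -15:15 weights for a 31x31 region
    return xs.mean() - half, ys.mean() - half, area

def detect_and_hold(frame, win=31, threshold=10):
    """Scan a binary frame with a win-by-win window, holding only the detection
    whose area exceeds both the threshold and the previous maximum
    (the detect-and-hold behavior described above)."""
    half = win // 2
    best = None  # (area, x, y) of the largest detection so far
    h, w = frame.shape
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = frame[y - half:y + half + 1, x - half:x + half + 1]
            cx, cy, area = centroid_and_area(window)
            if area >= threshold and (best is None or area > best[0]):
                best = (area, x + cx, y + cy)
    return best
```

On an FPGA the window would be built from line buffers and the comparison done per pixel clock, but the held maximum and the final X,Y output correspond to what this loop computes.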
For your application, you will want to reset the maximum area value when the detected area is low, or after some period of time or number of video lines between blobs. Another approach would be to use non-maximal suppression on the calculated area, as shown in the example "FAST Corner Detection." This will remove most, but not all, of the over-detected areas.
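The non-maximal suppression idea can be sketched as: keep an area value only where it is the unique maximum of its local neighborhood, and zero it elsewhere. This 1-D Python sketch is only an illustration of the principle (the FAST Corner Detection example applies it to a streamed metric; the neighborhood radius and tie handling here are assumptions):

```python
def non_max_suppress(areas, radius=1):
    """Keep an area value only where it is the unique maximum within
    +/- radius samples; zero out everything else."""
    kept = [0] * len(areas)
    for i, a in enumerate(areas):
        lo = max(0, i - radius)
        hi = min(len(areas), i + radius + 1)
        neighborhood = areas[lo:hi]
        # Suppress non-peaks; ties are suppressed too, which is one reason
        # this removes most, but not all, duplicate detections.
        if a > 0 and a == max(neighborhood) and neighborhood.count(a) == 1:
            kept[i] = a
    return kept
```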
  2 Comments
ismail çelik
ismail çelik on 24 Jan 2019
Edited: ismail çelik on 24 Jan 2019
Hello Mr. Brian.
Firstly, the example you suggested was very useful for me.
I adapted this example to my own practice.
But I have a few questions:
1. How is the 31x31 pixel region size determined?
2. I changed my application to use a 17x17 region. There is no problem when detecting only one white area, but when I change it to identify all the white areas, it does not draw the 17x17-pixel squares properly. I have added pictures.
In the meantime, I also need to report the number of white areas in each frame in my application.
Brian Ogilvie
Brian Ogilvie on 24 Jan 2019
Edited: Brian Ogilvie on 25 Jan 2019
Hi Ismail,
I am not exactly sure what is wrong in your first picture with multiple white areas. It could be that the centroid calculation is not correct, or it could be that the drawing of the marker is going wrong.
I think the thing that must be wrong here is the timing of the centroid values relative to the video stream for overlay. You can see that the bottom line of the marker is missing or torn, indicating that something in the timing for that marker was not correct.
There is a Pixel Stream Aligner block in the original demo that delays the incoming video stream to allow for all the processing time needed for preprocessing and finding the areas and centroids, but it assumes only one detected area is present. Instead of a single X,Y pair being sent to the marker overlay, I would imagine you need registers or a RAM to hold all the X,Y values and compare the current H and V position against all of the possible white areas.
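The per-pixel comparison against a set of held centroids can be sketched in software like this. This is only an illustration of the idea, not FPGA code; the marker size, pixel value, and function name are assumptions, and the list of centroids stands in for the registers or RAM:

```python
def marker_overlay(frame, centroids, win=17):
    """Draw a hollow square marker around each stored (x, y) centroid by
    comparing every pixel's H and V position against all held centroids,
    as an FPGA overlay block would do once per pixel clock."""
    half = win // 2
    out = [row[:] for row in frame]  # copy the frame
    h, w = len(frame), len(frame[0])
    for v in range(h):            # V (line) position
        for hpos in range(w):     # H (pixel) position
            for (cx, cy) in centroids:
                # Pixel lies on the square's border around (cx, cy)?
                on_edge = (abs(hpos - cx) == half and abs(v - cy) <= half) or \
                          (abs(v - cy) == half and abs(hpos - cx) <= half)
                if on_edge:
                    out[v][hpos] = 255
    return out
```

In hardware the inner loop over centroids would be a bank of parallel comparators, one per held X,Y register.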
Additionally, if you changed the detection area to 17x17, you should also change the centroid calculation to use -8:8 for the relative position computation of both the X and Y axes.
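The offset change can be illustrated with a small Python sketch: the centroid is an area-weighted average of relative positions, and those positions run from -half to +half of the window. The function names here are assumptions, not the names in the model:

```python
import numpy as np

def centroid_offsets(win):
    """Relative-position weights for a win-by-win centroid:
    -15:15 for a 31x31 region, -8:8 for 17x17 (win must be odd)."""
    half = win // 2
    return np.arange(-half, half + 1)

def window_centroid(window):
    """Centroid of a binary window relative to its center,
    computed with the relative-position weights for each axis."""
    offs = centroid_offsets(window.shape[0])
    area = window.sum()
    if area == 0:
        return 0.0, 0.0
    x_off = (window.sum(axis=0) * offs).sum() / area  # weight column sums
    y_off = (window.sum(axis=1) * offs).sum() / area  # weight row sums
    return x_off, y_off
```

If the window size changes but the weights are left at -15:15, the computed centroid is biased, which would misplace every marker.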
Can you log the centroid values to the MATLAB workspace and see if they are correct?
If you contact MathWorks support and send them your model, I can have a look at it as well.


