How to do feature extraction from an image?

Hi, I want to do feature extraction from an image. I read a paper and followed these steps: first I did image segmentation; now I want to do feature extraction. The paper says:
Segmented lungs were divided into 3×3 windows in which all nine pixels were located in the lung mask. Window size selection is a compromise between higher resolution (in the classification process) and a faster algorithm. Smaller windows (i.e. 1×1 or 2×2) have the problem of greater time complexity for training and increase the number of FPs. Larger windows (i.e. 5×5 or larger) cause lower resolution of the reconstructed image after classification and miss some tiny nodules. Thus, for better resolution and a faster algorithm simultaneously, we used a 3×3 window. In the training process, these windows were labeled as nodule (+1) and non-nodule (−1).
My question is this: is there any standard criterion for labeling a 3×3 window as a nodule? (I mean: how many of these pixels must be 1 before we should label the window as a nodule?)
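The paper quoted above doesn't state the labeling criterion, so any concrete rule here is an assumption. Two heuristics that come up in practice are "majority vote" (label the window +1 if more than half its pixels are in the nodule mask) and "center pixel" (label by the window's center pixel alone). A minimal NumPy sketch of both, purely for illustration:

```python
import numpy as np

def label_windows(mask, win=3, rule="majority"):
    """Label each non-overlapping win x win window of a binary mask as +1/-1.

    rule="majority": window is +1 if more than half its pixels are 1.
    rule="center":   window is +1 if its center pixel is 1.
    Both rules are illustrative assumptions, not the paper's stated criterion.
    """
    h, w = mask.shape
    h -= h % win  # drop trailing rows/cols that don't fill a full window
    w -= w % win
    blocks = mask[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    if rule == "majority":
        pos = blocks.sum(axis=(2, 3)) > (win * win) // 2
    else:  # center pixel
        pos = blocks[:, :, win // 2, win // 2] == 1
    return np.where(pos, 1, -1)

m = np.zeros((6, 6), dtype=int)
m[0:3, 0:3] = 1           # top-left window lies fully inside the "nodule"
labels = label_windows(m)
print(labels)             # only the top-left window is labeled +1
```

Which rule (if either) matches the paper would need to come from the authors or from experimenting, as the comments below suggest.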

12 Comments

Questions like this are very specific to a field. To my knowledge (and judging from your description) there isn't a standard. Often people experiment to find what works best. I suspect the authors of the paper you're citing (but not naming, bad habit to get into) simply tried a few window sizes and found that 3x3 worked well enough without requiring too much computational power.
Have a read here and here. It will help you judge if a question is relevant on this forum, and if so, how to best write it.
Thanks Rik. So I should make multiple attempts to find the best method for this problem :)
I mean a process like the top image.
If each block is independent of the others, you could use blockproc to do the processing. Then you only need to decide how to reduce each block to a single value.
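`blockproc` is MATLAB's function for applying a user-supplied function to each distinct block of an image. For readers outside MATLAB, a rough NumPy equivalent can be sketched as follows (an assumption for illustration; unlike `blockproc`, this version simply drops partial edge blocks rather than padding them):

```python
import numpy as np

def blockproc(img, block, fun):
    """Apply fun to each non-overlapping block x block tile of img.

    A simplified stand-in for MATLAB's blockproc: trailing rows/cols
    that don't fill a complete block are dropped in this sketch.
    """
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return np.array([[fun(t) for t in row] for row in tiles])

img = np.arange(36).reshape(6, 6)
print(blockproc(img, 3, np.mean))  # 2x2 grid of per-block means
```

The `fun` argument is where the "reduce each block to a single value" decision lives, exactly as in the comment above.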
Yeah. My problem is exactly that: how to reduce each block to a single value.
That is a domain-specific question. I don't know what makes sense in your situation. Some usual methods are max, min, and mean (with or without a threshold).
Thanks, dear Rik. I tried them.
What did they do to segment the image before the filtering?
And after they filtered the segmented image with a 3x3 window, or any size with any values, what did they do with the filtered, segmented image? What values were in the filter kernel window?
Hi, they segmented the image from the background and then labeled the 3×3 windows with 0 and 1. I tried to do this with the picture I attached. Thanks
I don't think they segmented the image. I think they did that on the original gray scale image. I don't think it would make any sense to do a covariance of 9 pixels if the 9 pixels were segmented, which means they are already binary/logical.
Thanks, dear Image Analyst. I think I made a mistake. I read the paper again: they segment the region of interest from the background and then do this operation on the grayscale image.


Answers (0)

Asked: 2 Aug 2020

Last commented: 31 Aug 2020
