How to remove unwanted portions of the background?

Otsu's thresholding method is good and easy. For some images it clearly identifies the object of interest, but for other images it leaves some unwanted portions behind. I am running batches of more than 1000 images, and applying Otsu's method does not give good output on all of them. How can I improve this algorithm? Or is there any other idea where, after applying Otsu's method, we can remove the unwanted portions in a way that works for all images?

Answers (5)

Otsu's method often does not work well for images that do not have a nice, well-separated bimodal histogram. The triangle method works well for skewed histograms, like those with a log-normal shape. I'm attaching my implementation.
imbinarize() also has an 'adaptive' option that might work well for you.
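As a hedged illustration of the difference between the two approaches (using MATLAB's bundled cameraman.tif demo image rather than the asker's data, and a sensitivity value chosen only for demonstration):

```matlab
% Compare a single global Otsu threshold with a locally adaptive threshold.
im = imread('cameraman.tif');                  % demo image that ships with MATLAB
bwOtsu = imbinarize(im);                       % global threshold chosen by Otsu's method
bwAdapt = imbinarize(im, 'adaptive', ...       % threshold varies across the image
    'ForegroundPolarity', 'dark', 'Sensitivity', 0.5);
figure;
subplot(1,3,1); imshow(im);      title('original');
subplot(1,3,2); imshow(bwOtsu);  title('global Otsu');
subplot(1,3,3); imshow(bwAdapt); title('adaptive');
```

The adaptive option helps when illumination varies across the frame, because the threshold is computed from a local neighborhood instead of the whole histogram.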

7 Comments

I tried the 'adaptive' option in imbinarize(), but it removes some of the region of interest.
You may have to try some more sophisticated methods, which I can't speculate on since you haven't shared your image.
I attached a few images. For some images I get only the hand sign clearly by applying Otsu's method and imbinarize(), but for a few images it is difficult to clear the background completely. Applying a different threshold per image is not easy when I have more than 1000 images.
There is not good contrast between the hand and the person. What can you do to improve that? Like shine more light on it or use a contrasting background color or shirt color? If you can improve your image to start with, it will make the image processing SO much easier. Otherwise you might have to use deep learning (SegNet) to find the hand.
Thank you, sir, for the links. I still need help: how can we use a deep learning method to segment the hand gestures attached above?
@Zara Khan I don't know. I haven't done it. If none of the paper links I gave you uses that particular method (deep learning), and you don't want to use any of the successful methods that those groups developed, used, and published, then you're on your own. I'm no further along than you are, and I don't plan on going into gesture research, so don't wait on me.


Sir, maybe upload some image samples so we can develop a solution.

1 Comment

I uploaded a few images. Please see them in Image Analyst's reply section. Thanks.


Sir, please check the following code to get some information:
clc; clear; close all
% URLs of the sample images uploaded in the thread.
urls = {'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779463/img16.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779458/img6.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779453/img4.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779448/img2.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779443/img1.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779438/img1%20(2).png'};
for k = 1 : length(urls)
    im = imread(urls{k});
    if ndims(im) == 3
        im = rgb2gray(im);                       % convert RGB to grayscale
    end
    im2 = imadjust(im, stretchlim(im), []);      % stretch contrast to the full range
    % Locally adaptive threshold; the hand is darker than the background.
    bw = imbinarize(im2, 'adaptive', 'ForegroundPolarity', 'dark', 'Sensitivity', 0.85);
    bw = bwareaopen(bw, 100);                    % drop specks smaller than 100 pixels
    bw = imopen(bw, strel('line', 9, 90));       % remove thin vertical tendrils
    bw = imclose(bw, strel('line', 15, 0));      % bridge small horizontal gaps
    bw = imfill(bw, 'holes');                    % fill interior holes
    bw = bwareaopen(bw, 500);                    % drop remaining small blobs
    % Keep only the brightest blob (highest mean gray level), assumed to be the hand.
    [L, num] = bwlabel(bw);
    vs = zeros(1, num);
    for i = 1 : num
        bwi = bw;
        bwi(L ~= i) = 0;
        vs(i) = mean(double(im2(logical(bwi)))); % mean intensity of blob i
    end
    [~, ind] = max(vs);
    bw(L ~= ind) = 0;
    % Bounding box of the remaining blob: [xLeft, yTop, width, height].
    [r, c] = find(logical(bw));
    rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
    figure; imshow(im, []);
    hold on; rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
end

6 Comments

This does not fully separate the hand object from the background. I want to completely remove the background.
@yanqi liu, can you please explain how you are calculating the rectangle?
It's from these lines of code:
[r,c] = find(logical(bw));
rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
OK, well, that's what happens when people don't comment their code. What find() does is find the non-zero (white) pixel locations and put their row and column coordinates into the variables r and c.
Then by using min and max she assumes that there is just one blob so the min and max will give you the left column, right column, top row, and bottom row. Now to use the rectangle function you need to pass it a position array in the form [xLeft, yTop, width, height] so those are the 4 expressions she put into this line of code:
rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
Now you can pass that in for the 'Position' argument to the rectangle() function like this:
rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
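As a side note, the same bounding box can also be obtained with regionprops(); this is just an equivalent sketch, assuming bw already holds the single-blob mask from the code above:

```matlab
% regionprops returns the bounding box in the same [xLeft, yTop, width, height]
% form that rectangle('Position', ...) expects (with half-pixel edge offsets).
props = regionprops(bw, 'BoundingBox');
rect = props(1).BoundingBox;     % assumes one blob remains in bw
rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2);
```

This avoids the hand-rolled min/max arithmetic, and generalizes if you later need per-blob boxes.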
@Image Analyst Thank you sir. But this method is not adaptive. It somehow works well for these attached images, but when running the batch of 1320 images, for some images the rectangle I get is outside the object, in the background. How can I overcome this? Also, I do not understand why so many filters are being used before drawing the rectangles:
[L,num] = bwlabel(bw);
vs = [];
for i = 1 : num
    bwi = bw;
    bwi(L~=i) = 0;
    vi = mean(double(im2(logical(bwi))));
    vs(i) = vi;
end
[~,ind] = max(vs);
bw(L~=ind) = 0;


Sir, using a basic method may not be the best choice, so maybe consider a deep learning method, such as U-Net.
clc; clear; close all
urls = {'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779463/img16.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779458/img6.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779453/img4.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779448/img2.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779443/img1.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779438/img1%20(2).png'};
for k = 1 : length(urls)
    im = imread(urls{k});
    if ndims(im) == 3
        im = rgb2gray(im);
    end
    im2 = imadjust(im, stretchlim(im), []);      % contrast stretch
    bw = imbinarize(im2, 'adaptive', 'ForegroundPolarity', 'dark', 'Sensitivity', 0.85);
    bw = bwareaopen(bw, 100);                    % remove small specks
    bw = imopen(bw, strel('line', 9, 90));       % remove thin vertical tendrils
    bw = imclose(bw, strel('line', 12, 0));      % bridge small horizontal gaps
    bw = imfill(bw, 'holes');
    bw = bwareaopen(bw, 500);
    % Keep only the brightest blob (highest mean gray level).
    [L, num] = bwlabel(bw);
    vs = zeros(1, num);
    for i = 1 : num
        bwi = bw;
        bwi(L ~= i) = 0;
        vs(i) = mean(double(im2(logical(bwi))));
    end
    [~, ind] = max(vs);
    bw(L ~= ind) = 0;
    bw = logical(bw);
    bw = imfill(bw, 'holes');
    bw = imclose(bw, strel('disk', 15));         % smooth the blob outline
    [r, c] = find(bw);
    rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
    im2 = im;
    im2(~bw) = 0;                                % black out everything outside the mask
    figure; imshow(im2, []);
    %hold on; rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
end
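For the U-Net suggestion, a very rough sketch of what that could look like in MATLAB (this assumes the Computer Vision and Deep Learning Toolboxes, plus folders of images and hand/background pixel labels that you would have to create yourself, e.g. with the Image Labeler app; the folder names, label IDs, image size, and training options here are all placeholders, not from the thread):

```matlab
% Hedged sketch only: requires hand-labeled training masks.
imds = imageDatastore('handImages');                          % placeholder folder
classes = ["hand", "background"];
labelIDs = [255 0];                                           % pixel values in the label masks
pxds = pixelLabelDatastore('handLabels', classes, labelIDs);  % placeholder folder
lgraph = unetLayers([256 256 1], numel(classes));             % U-Net for 256x256 grayscale
opts = trainingOptions('adam', 'MaxEpochs', 20, 'MiniBatchSize', 8);
net = trainNetwork(combine(imds, pxds), lgraph, opts);
% Segment a new image; semanticseg returns a categorical label image.
seg = semanticseg(imread('newHand.png'), net);
bw = seg == "hand";                                           % binary hand mask
```

Unlike the thresholding pipeline above, a trained network can learn the hand's appearance directly, at the cost of having to label training data first.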

16 Comments

Sir, is there any demo using U-Net? I don't know how to implement it. I don't want the elbow section; the final output should be a binary image.
clc; clear; close all
urls = {'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779463/img16.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779458/img6.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779453/img4.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779448/img2.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779443/img1.png',...
    'https://ww2.mathworks.cn/matlabcentral/answers/uploaded_files/779438/img1%20(2).png'};
for k = 1 : length(urls)
    im = imread(urls{k});
    if ndims(im) == 3
        im = rgb2gray(im);
    end
    im2 = imadjust(im, stretchlim(im), []);
    bw = imbinarize(im2, 'adaptive', 'ForegroundPolarity', 'dark', 'Sensitivity', 0.85);
    bw = bwareaopen(bw, 100);
    bw = imopen(bw, strel('line', 9, 90));
    bw = imclose(bw, strel('line', 12, 0));
    bw = imfill(bw, 'holes');
    bw = bwareaopen(bw, 500);
    [L, num] = bwlabel(bw);
    vs = zeros(1, num);
    for i = 1 : num
        bwi = bw;
        bwi(L ~= i) = 0;
        vs(i) = mean(double(im2(logical(bwi))));
    end
    [~, ind] = max(vs);
    bw(L ~= ind) = 0;
    bw = logical(bw);
    bw = imfill(bw, 'holes');
    bw = imclose(bw, strel('disk', 15));
    [r, c] = find(bw);
    rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
    im2 = im;
    im2(~bw) = 0;
    figure; imshow(bw, []);   % display the binary mask itself
    %hold on; rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
end
Can you please help me segment the hands using U-Net?
@yanqi liu, why are these cropped images like this? Why are they not in perfect gesture form? Why is it necessary to perform the strel operation again to get the cropped images? Why is the output not in the attached form?
@Zara Khan the image you have is not cropped. Neither were @yanqi liu's images.
What is "perfect gesture form"?
Using strel() to create a kernel that is then used in a morphological operation is not cropping or necessarily needed for cropping. Sometimes additional morphological operations are needed to "clean up" or smooth the segmented image that you got on the first pass.
Since @yanqi liu's segmented images are not cropped, and neither is your attached image, I don't know what your question "why output is not in the attached form?" means. Explain the difference.
Image analyst sir see the attachment S1_G1_(1) in the above comment. I want to see my output like the attached image in my previous comment.
I'm still not seeing the difference that you're seeing. They're both segmented images with a hand in them. Neither is cropped to the bounding box, and both have a black surround. The actual hand is of course different because it depends on the configuration of your hand when you snapped the photo. And the dimensions (rows, columns) of the images seem to be different. Is it that you want the image resized to the image size you posted?
@Image Analyst sir, according to yanqi liu's code, the output results come from some morphological operations, where the fingers and palm are not prominent. In my attached image, after segmentation, the fingers and palm are clearly visible, which will help with further feature extraction tasks. Sir, please see the attached image.
The images she worked on only had one finger extended. Anyway, why are you relying so much on her algorithm rather than the successful, published ones I pointed you to? I kind of doubt that she is someone who specializes in gesture recognition, unlike those groups who have published algorithms that work.
@Image Analyst sir, I was trying to do something new.
What if she or I gave you something "new" (just some simplistic and not very robust algorithm), and it was not as good as one of the published algorithms. Would you want to use it? Or would you rather use the better, more robust algorithm even though it was "old"?
@Image Analyst yes sir. I was trying to do that as part of a task but failed. This algorithm by @yanqi liu is not robust; it works well for only a few images.
@Image Analyst sir, one more question: why is 'line' used in the strel function and not 'disk'? And what is the kernel size in the above algorithm?
bw = imopen(bw, strel('line', 9, 90));
bw = imclose(bw, strel('line', 12, 0));
It would have been good if she had commented all the lines of code, especially since the algorithm is long and fairly complicated. She's using 'line' because she wanted to filter preferentially along a certain direction rather than isotropically in all directions. The imopen will get rid of tendrils that are vertical and less than 9 pixels wide. The imclose will fill in gaps along horizontal edges of the blob.
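To see the directional-versus-isotropic difference concretely, here is a small illustrative sketch (using MATLAB's bundled circbw.tif binary demo image, not the thread's hand images; the kernel sizes are arbitrary):

```matlab
% The same binary mask opened with a directional line kernel vs. an
% isotropic disk kernel.
bw = imread('circbw.tif');                   % binary demo image shipped with MATLAB
bwLine = imopen(bw, strel('line', 9, 90));   % removes only thin *vertical* structures
bwDisk = imopen(bw, strel('disk', 4));       % removes thin structures in *all* directions
figure;
subplot(1,3,1); imshow(bw);      title('original');
subplot(1,3,2); imshow(bwLine);  title('imopen, line 9 at 90 deg');
subplot(1,3,3); imshow(bwDisk);  title('imopen, disk radius 4');
```

A line kernel preserves structures running along its own orientation, which is why it can clean up noise without eating the fingers the way a disk of similar size might.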


clc; clear; close all
im = imread('https://in.mathworks.com/matlabcentral/answers/uploaded_files/1109360/img_4.png');
if ndims(im) == 3
    im = rgb2gray(im);   % imbinarize needs a grayscale image
end
im2 = imadjust(im, stretchlim(im), []);
bw = imbinarize(im2, 'adaptive', 'ForegroundPolarity', 'dark', 'Sensitivity', 0.85);
bw = bwareaopen(bw, 100);
bw = imopen(bw, strel('line', 9, 90));
bw = imclose(bw, strel('line', 15, 0));
bw = imfill(bw, 'holes');
bw = bwareaopen(bw, 500);
% Keep only the brightest blob (highest mean gray level).
[L, num] = bwlabel(bw);
vs = zeros(1, num);
for i = 1 : num
    bwi = bw;
    bwi(L ~= i) = 0;
    vs(i) = mean(double(im2(logical(bwi))));
end
[~, ind] = max(vs);
bw(L ~= ind) = 0;
[r, c] = find(logical(bw));
rect = [min(c) min(r) max(c)-min(c) max(r)-min(r)];
figure; imshow(im, []);
hold on; rectangle('Position', rect, 'EdgeColor', 'r', 'LineWidth', 2, 'LineStyle', '-');
@Image Analyst @yanqi liu sirs, this algorithm is not able to segment this gesture.

1 Comment

I am not a gesture recognition researcher. But I know that any successful algorithm will not just be one page of code. You need to use a robust algorithm already developed by specialists in the area who have published their algorithms. Go here to see a list of them:


Asked on 16 Oct 2021. Last commented on 27 Aug 2022.
