# How do I compare two images?

Stelios Fanourakis on 18 Mar 2019
Hi
I have two images.
One was segmented manually and the other with an automated method. Both images segment the same interface, but the automated one is cropped for faster computation.
I am looking for a method to compare the two images so that I can estimate the accuracy of the automated segmentation. How can I compare them? For example, how can I find whether the red pixel locations in both images are the same, and if not, how do I quantify their differences?
Any ideas?
Thank you

Walter Roberson on 18 Mar 2019
If the two images are at the same scale, use xcorr2 to find the place where the second image best fits into the first; after that you can do whatever comparisons you need using indexing.
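As a concrete sketch of that idea, the Image Processing Toolbox function normxcorr2 (a normalized cousin of xcorr2) can locate the cropped image inside the full one. The file names and the variable names `full`, `cropped`, and `match` below are placeholders, and both images are assumed to be grayscale and at the same scale:

```matlab
% Sketch: locate the cropped (automated) image inside the full (manual) one
% using normalized cross-correlation. Assumes both are grayscale arrays of
% the same scale; file names are placeholders.
full    = imread('full.png');      % manually segmented image (grayscale)
cropped = imread('cropped.png');   % automated, cropped image (grayscale)

c = normxcorr2(cropped, full);     % correlation surface
[~, idx] = max(abs(c(:)));
[peakRow, peakCol] = ind2sub(size(c), idx);

% Top-left offset of the best-fit position of "cropped" inside "full"
rowOffset = peakRow - size(cropped, 1);
colOffset = peakCol - size(cropped, 2);

% Extract the matching region for pixel-by-pixel comparison
match = full(rowOffset+1 : rowOffset+size(cropped,1), ...
             colOffset+1 : colOffset+size(cropped,2));
```

Once `match` and `cropped` cover the same region, you can compare them element-wise with indexing, as suggested above.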
If the images are not at the same scale (e.g., the second one looks like it might be higher resolution), then you would need to do image registration in order to find the best match.
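A minimal registration sketch along those lines, using imregister from the Image Processing Toolbox. The file names are placeholders, both images are assumed to be single-channel (grayscale), and the 'similarity' transform type (translation, rotation, and scale) is one reasonable choice when the images differ in resolution:

```matlab
% Sketch: intensity-based registration when the images differ in scale.
% "fixed" and "moving" are placeholder grayscale images.
fixed  = imread('manual.png');      % reference (manual) segmentation image
moving = imread('automatic.png');   % automated, differently scaled image

[optimizer, metric] = imregconfig('multimodal');
movingReg = imregister(moving, fixed, 'similarity', optimizer, metric);

imshowpair(fixed, movingReg);       % visual check of the alignment
```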
Stelios Fanourakis on 19 Mar 2019
@Walter. Since the two images I need to compare are not of the same dimensions, I used imregister:

```matlab
imshowpair(a, e)
[optimizer, metric] = imregconfig('multimodal');
movingRegisteredDefault = imregister(a, e, 'affine', optimizer, metric);
```

imshowpair(a,e) works fine, but imregister does not. I get the error:
```
Error using imregtform>parseInputs (line 268)
The value of 'MovingImage' is invalid. All dimensions of the moving image should be greater than 4.

Error in imregtform (line 124)
parsedInputs = parseInputs(varargin{:});

Error in imregister (line 119)
tform = imregtform(varargin{:});
```
What does that mean?

Image Analyst on 18 Mar 2019
You have two ways of computing the segmentation. Which do YOU consider to be more accurate? If you want to compute accuracy, you must have some ground truth - some segmentation that YOU DEFINE to be the absolutely 100% correct answer. I'm assuming you consider the manually traced one to be the ground truth and want to see how well the automatic algorithm matches it.

To do that, you first need to crop the regions so that both images have the same field of view (all corners point to the same physical points in the subject/sample in both images). Then you can crop the segmented (binary) images the same way and compare them with a similarity index, for example the Sørensen–Dice coefficient or friends. See this link.
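A short sketch of the comparison step, once both binary masks cover the same field of view. `BW1` and `BW2` are placeholder masks built here only for illustration; the Image Processing Toolbox also provides dice and jaccard directly (R2017b and later):

```matlab
% Placeholder binary masks for illustration
BW1 = false(100); BW1(30:70, 30:70) = true;   % "manual" (ground-truth) mask
BW2 = false(100); BW2(35:75, 32:72) = true;   % "automatic" mask

% Sørensen–Dice coefficient computed by hand:
% twice the overlap divided by the total foreground area
diceManual = 2 * nnz(BW1 & BW2) / (nnz(BW1) + nnz(BW2));

% Toolbox equivalents
d = dice(BW1, BW2);      % Sørensen–Dice similarity
j = jaccard(BW1, BW2);   % Jaccard index (intersection over union)
```

A value of 1 means the two segmentations agree perfectly; values near 0 mean almost no overlap.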
Stelios Fanourakis on 19 Mar 2019
@Image Analyst.
As I came to realize, imregister works only on grayscale images, so it won't let me use the red channel for the contour.
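One possible workaround, sketched below: extract the red channel of each RGB image as a 2-D grayscale array and register those. The variable names `a` and `e` follow the earlier snippet; the assumption that both are MxNx3 RGB arrays is mine:

```matlab
% Sketch: register only the red channels, which are valid 2-D grayscale
% inputs for imregister. Assumes a and e are MxNx3 RGB images.
aRed = a(:,:,1);   % red channel of the moving image
eRed = e(:,:,1);   % red channel of the fixed image

[optimizer, metric] = imregconfig('multimodal');
aRegistered = imregister(aRed, eRed, 'affine', optimizer, metric);
```

The "All dimensions of the moving image should be greater than 4" error in the snippet above is consistent with passing an RGB array, whose third dimension is 3; a 2-D channel avoids it.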