indexImages

Create image search index

Description

imageIndex = indexImages(imds) creates an invertedImageIndex object, imageIndex, that contains a search index for imds. Use imageIndex with the retrieveImages function to search for images.

imageIndex = indexImages(imds,bag) returns a search index that uses a custom bagOfFeatures object, bag. Use this syntax with the bag you created when you want to modify the number of visual words or the feature type used to create the image search index for imds.

imageIndex = indexImages(___,Name,Value) uses additional options specified by one or more Name,Value pair arguments, using any of the preceding syntaxes.

This function supports parallel computing using multiple MATLAB® workers. Enable parallel computing from the Computer Vision Toolbox™ Preferences dialog box. To open Computer Vision Toolbox preferences, on the Home tab, in the Environment section, click Preferences, and then select Computer Vision Toolbox.

Examples

Create Search Index

Create an image set.

setDir  = fullfile(toolboxdir('vision'),'visiondata','imageSets','cups');
imds = imageDatastore(setDir);

Index the image set.

imageIndex = indexImages(imds)
Creating an inverted image index using Bag-Of-Features.
-------------------------------------------------------

Creating Bag-Of-Features.
-------------------------

* Selecting feature point locations using the Detector method.
* Extracting SURF features from the selected feature point locations.
** detectSURFFeatures is used to detect key points for feature extraction.

* Extracting features from 6 images...done. Extracted 1708 features.

* Keeping 80 percent of the strongest features from each category.

* Balancing the number of features across all image categories to improve clustering.
** Image category 1 has the least number of strongest features: 1366.
** Using the strongest 1366 features from each of the other image categories.

* Creating a 1366 word visual vocabulary.
* Number of levels: 1
* Branching factor: 1366
* Number of clustering steps: 1

* [Step 1/1] Clustering vocabulary level 1.
* Number of features          : 1366
* Number of clusters          : 1366
* Initializing cluster centers...100.00%.
* Clustering...completed 1/100 iterations (~0.01 seconds/iteration)...converged in 1 iterations.

* Finished creating Bag-Of-Features


Encoding images using Bag-Of-Features.
--------------------------------------

* Encoding 6 images...done.
Finished creating the image index.
imageIndex = 
  invertedImageIndex with properties:

         ImageLocation: {6x1 cell}
            ImageWords: [6x1 vision.internal.visualWords]
         WordFrequency: [0.1667 0.1667 0.1667 0.3333 0.1667 0.1667 0.1667 0.5000 0.3333 0.1667 0.3333 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 0.3333 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 0.1667 ... ] (1x1366 double)
         BagOfFeatures: [1x1 bagOfFeatures]
               ImageID: [1 2 3 4 5 6]
        MatchThreshold: 0.0100
    WordFrequencyRange: [0.0100 0.9000]

Display the image set using the montage function.

thumbnailGallery = [];
for i = 1:length(imds.Files)
    I = readimage(imds,i);
    thumbnail = imresize(I,[300 300]);
    thumbnailGallery = cat(4,thumbnailGallery,thumbnail);
end

figure
montage(thumbnailGallery);

Select a query image.

queryImage = readimage(imds,2);
figure
imshow(queryImage)

Search the image set for images similar to the query image. The best result is listed first.

indices = retrieveImages(queryImage,imageIndex)
indices = 5x1 uint32 column vector

   2
   1
   5
   4
   3

bestMatchIdx = indices(1);

Display the best match from the image set.

bestMatch = imageIndex.ImageLocation{bestMatchIdx}
bestMatch = 
'/mathworks/devel/bat/filer/batfs1904-0/Bdoc24a.2528353/build/matlab/toolbox/vision/visiondata/imageSets/cups/blueCup.jpg'
figure
imshow(bestMatch)

Create Search Index Using Custom Bag of Features

Create an image set.

setDir  = fullfile(toolboxdir('vision'),'visiondata','imageSets','cups');
imgSets = imageSet(setDir, 'recursive');

Display the image set.

thumbnailGallery = [];
for i = 1:imgSets.Count
    I = read(imgSets, i);
    thumbnail = imresize(I, [300 300]);
    thumbnailGallery = cat(4, thumbnailGallery, thumbnail);
end

figure
montage(thumbnailGallery);

Train a bag of features using a custom feature extractor.

extractor = @exampleBagOfFeaturesExtractor;
bag = bagOfFeatures(imgSets,'CustomExtractor',extractor);
Creating Bag-Of-Features.
-------------------------
* Image category 1: cups
* Extracting features using a custom feature extraction function: exampleBagOfFeaturesExtractor.

* Extracting features from 6 images in image set 1...done. Extracted 115200 features.

* Keeping 80 percent of the strongest features from each category.

* Creating a 500 word visual vocabulary.
* Number of levels: 1
* Branching factor: 500
* Number of clustering steps: 1

* [Step 1/1] Clustering vocabulary level 1.
* Number of features          : 92160
* Number of clusters          : 500
* Initializing cluster centers...100.00%.
* Clustering...completed 46/100 iterations (~0.65 seconds/iteration)...converged in 46 iterations.

* Finished creating Bag-Of-Features

Use the trained bag of features to index the image set.

imageIndex = indexImages(imgSets,bag,'Verbose',false) 
imageIndex = 
  invertedImageIndex with properties:

         ImageLocation: {6x1 cell}
            ImageWords: [6x1 vision.internal.visualWords]
         WordFrequency: [1 1 1 1 1 1 1 0.8333 1 1 1 1 1 1 1 1 1 1 1 1 1 0.8333 1 1 1 1 1 1 0.8333 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0.8333 1 1 1 0.8333 1 1 1 1 1 1 0.1667 1 1 1 0.8333 1 0.8333 1 1 1 1 1 1 1 1 1 1 0.8333 1 1 1 0.8333 1 ... ] (1x500 double)
         BagOfFeatures: [1x1 bagOfFeatures]
               ImageID: [1 2 3 4 5 6]
        MatchThreshold: 0.0100
    WordFrequencyRange: [0.0100 0.9000]

queryImage = read(imgSets,4);

figure
imshow(queryImage)

Search the image index for the best match to the query image.

indices = retrieveImages(queryImage,imageIndex);
bestMatch = imageIndex.ImageLocation{indices(1)};
figure
imshow(bestMatch)

Input Arguments


imds — Images, specified as an imageDatastore object. The object stores a collection of images.

bag — Bag of visual words, specified as a bagOfFeatures object.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'Verbose',true sets the 'Verbose' argument to true.

Save feature locations, specified as the comma-separated pair consisting of 'SaveFeatureLocations' and a logical scalar. When set to true, the image feature locations are saved in the imageIndex output object. Use location data to verify spatial or geometric image search results. If you do not require feature locations, set this property to false to reduce memory consumption.
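If you do not need spatial verification of search results, you can skip saving feature locations. A minimal sketch, reusing the "cups" image set from the examples above:

```matlab
% Build a search index without storing feature locations to reduce
% memory consumption. Location data is only needed for spatial or
% geometric verification of search results.
setDir = fullfile(toolboxdir('vision'),'visiondata','imageSets','cups');
imds = imageDatastore(setDir);
imageIndex = indexImages(imds,'SaveFeatureLocations',false,'Verbose',false);
```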

Display progress information, specified as the comma-separated pair consisting of 'Verbose' and a logical scalar. Set to true to display progress information in the command window.

Output Arguments


Image search index, returned as an invertedImageIndex object.

Algorithms

indexImages uses the bag-of-features framework with the speeded-up robust features (SURF) detector and extractor to learn a vocabulary of 20,000 visual words. The visual words are then used to create an index that maps visual words to the images in imds. You can use the index to search for images within imds that are similar to a given query image.
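To use a different vocabulary size or feature type than the default, supply a custom bag, as in the second example above. A brief sketch (the vocabulary size of 500 is an arbitrary illustrative choice):

```matlab
% Train a custom bag of features with a smaller vocabulary, then use it
% to build the search index. VocabularySize is a bagOfFeatures
% name-value argument.
setDir = fullfile(toolboxdir('vision'),'visiondata','imageSets','cups');
imds = imageDatastore(setDir);
bag = bagOfFeatures(imds,'VocabularySize',500);
imageIndex = indexImages(imds,bag);
```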

Extended Capabilities

Version History

Introduced in R2015a