
evaluateDetectionPrecision

Evaluate precision metric for object detection

Syntax

averagePrecision = evaluateDetectionPrecision(detectionResults,groundTruthTable)
[averagePrecision,recall,precision] = evaluateDetectionPrecision(___)
[___] = evaluateDetectionPrecision(___,threshold)

Description

averagePrecision = evaluateDetectionPrecision(detectionResults,groundTruthTable) returns the average precision of detectionResults compared to groundTruthTable. You can use the average precision to measure the performance of the object detector. For a multiclass detector, averagePrecision is a vector of average precision scores, one for each object class, in the order specified by groundTruthTable.


[averagePrecision,recall,precision] = evaluateDetectionPrecision(___) returns data points for plotting the precision-recall curve, using input arguments from the previous syntax.

[___] = evaluateDetectionPrecision(___,threshold) specifies the overlap threshold for assigning a detection to a ground truth box.

Examples


Train an ACF-based detector using pre-loaded ground truth information. Run the detector on the training images. Evaluate the detector and display the precision-recall curve.

Load the ground truth table.

load('stopSignsAndCars.mat')
stopSigns = stopSignsAndCars(:,1:2);
stopSigns.imageFilename = fullfile(toolboxdir('vision'),'visiondata', ...
    stopSigns.imageFilename);

Train an ACF-based detector.

detector = trainACFObjectDetector(stopSigns,'NumStages',3);
ACF Object Detector Training
The training will take 3 stages. The model size is 34x31.
Sample positive examples(~100% Completed)
Compute approximation coefficients...Completed.
Compute aggregated channel features...Completed.
--------------------------------------------
Stage 1:
Sample negative examples(~100% Completed)
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 210 negative examples...Completed.
The trained classifier has 19 weak learners.
--------------------------------------------
Stage 2:
Sample negative examples(~100% Completed)
Found 210 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 210 negative examples...Completed.
The trained classifier has 51 weak learners.
--------------------------------------------
Stage 3:
Sample negative examples(~100% Completed)
Found 210 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 210 negative examples...Completed.
The trained classifier has 87 weak learners.
--------------------------------------------
ACF object detector training is completed. Elapsed time is 20.0043 seconds.

Create a table to store the results.

numImages = height(stopSigns);
results(numImages) = struct('Boxes',[],'Scores',[]);

Run the detector on the training images. Store the results as a table.

for i = 1:numImages
    I = imread(stopSigns.imageFilename{i});
    [bboxes,scores] = detect(detector,I);
    results(i).Boxes = bboxes;
    results(i).Scores = scores;
end

results = struct2table(results);

Evaluate the results against the ground truth data. Get the precision statistics.

[ap,recall,precision] = evaluateDetectionPrecision(results,stopSigns(:,2));

Plot the precision-recall curve.

figure
plot(recall,precision)
grid on
title(sprintf('Average Precision = %.1f',ap))
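
You can also tighten the overlap criterion. The following sketch continues the example above; the 0.75 value is only an illustrative choice, not a recommended setting.

% Re-evaluate the same detection results with a stricter overlap threshold.
[apStrict,recallStrict,precisionStrict] = evaluateDetectionPrecision( ...
    results,stopSigns(:,2),0.75);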

Input Arguments


detectionResults

Object locations and scores, specified as a two-column table containing the bounding boxes and scores for each detected object. For multiclass detection, a third column contains the predicted label for each detection.

When detecting objects, you can create the detection results table by storing the bboxes and scores outputs of the detect function in a structure array and then converting it with struct2table:

for i = 1:numImages
    I = imread(imageFilename{i});
    [bboxes,scores] = detect(detector,I);
    results(i).Boxes = bboxes;
    results(i).Scores = scores;
end
results = struct2table(results);

Data Types: table
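
For a multiclass detector, the results table also needs the third column of predicted labels. A minimal sketch, assuming a detector (for example, an R-CNN detector) whose detect method also returns class labels:

for i = 1:numImages
    I = imread(imageFilename{i});
    [bboxes,scores,labels] = detect(detector,I); % labels is a categorical vector of class names
    results(i).Boxes = bboxes;
    results(i).Scores = scores;
    results(i).Labels = labels;
end
results = struct2table(results);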

groundTruthTable

Labeled ground truth images, specified as a table with two or more columns. The first column must contain paths and file names to grayscale or truecolor (RGB) images. The remaining columns must contain bounding boxes related to the corresponding image. Each column represents a single object class, such as a car, dog, flower, or stop sign.

Each bounding box must be in the format [x,y,width,height]. The format specifies the upper-left corner location and the size of the object in the corresponding image. The table variable name defines the object class name. To create the ground truth table, use the Training Image Labeler app.
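 
For illustration, a small hand-built ground truth table with a single object class could look like the following sketch. The file names and boxes are hypothetical, not taken from the example data set.

% Two images with a single class, stopSign. Each cell holds the
% [x,y,width,height] boxes for the corresponding image.
imageFilename = {'stopSignImage1.jpg'; 'stopSignImage2.jpg'};
stopSign = {[100 100 50 50]; [120 95 45 48; 30 40 42 42]};
groundTruthTable = table(imageFilename,stopSign);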

threshold

Overlap threshold for assigning a detection to a ground truth box, specified as a numeric scalar. The overlap ratio is computed as the intersection over union.
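 
For example, the overlap between one detection and one ground truth box can be checked with bboxOverlapRatio, which computes the intersection-over-union ratio by default. The two boxes below are hypothetical.

% Intersection over union of a hypothetical detection and ground truth box.
detectedBox = [100 100 50 50];
groundTruthBox = [110 110 50 50];
overlapRatio = bboxOverlapRatio(detectedBox,groundTruthBox);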

Output Arguments


averagePrecision

Average precision over all the detection results, returned as a numeric scalar or vector. Precision is a ratio of true positive instances to all positive instances of objects in the detector, based on the ground truth. For a multiclass detector, the average precision is a vector of average precision scores for each object class.
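 
As a rough illustration, for a single-class detector the average precision corresponds to the area under the precision-recall curve, so it can be approximated from the recall and precision outputs. This is an approximation for illustration, not necessarily the exact computation the function uses.

% Approximate the average precision as the area under the precision-recall curve.
apEstimate = trapz(recall,precision);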

recall

Recall values from each detection, returned as a vector of numeric scalars or as a cell array. Recall is a ratio of true positive instances to the sum of true positives and false negatives in the detector, based on the ground truth. For a multiclass detector, recall and precision are cell arrays, where each cell contains the data points for each object class.

precision

Precision values from each detection, returned as a vector of numeric scalars or as a cell array. Precision is a ratio of true positive instances to all positive instances of objects in the detector, based on the ground truth. For a multiclass detector, recall and precision are cell arrays, where each cell contains the data points for each object class.
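 
For reference, a toy sketch of how the two ratios follow from true positive, false positive, and false negative counts. The counts below are hypothetical.

tp = 8;  % hypothetical number of correct detections
fp = 2;  % detections that match no ground truth box
fn = 4;  % ground truth boxes that match no detection
precisionValue = tp/(tp + fp);  % fraction of detections that are correct
recallValue = tp/(tp + fn);     % fraction of ground truth objects that are detected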

Introduced in R2017a
