trainFastRCNNObjectDetector

Train a Fast R-CNN deep learning object detector

Syntax

trainedDetector = trainFastRCNNObjectDetector(trainingData,network,options)
trainedDetector = trainFastRCNNObjectDetector(trainingData,checkpoint,options)
trainedDetector = trainFastRCNNObjectDetector(trainingData,detector,options)
trainedDetector = trainFastRCNNObjectDetector(___,'RegionProposalFcn',proposalFcn)
trainedDetector = trainFastRCNNObjectDetector(___,Name,Value)
[trainedDetector,info] = trainFastRCNNObjectDetector(___)

Description


trainedDetector = trainFastRCNNObjectDetector(trainingData,network,options) trains a Fast R-CNN (regions with convolutional neural networks) object detector using deep learning. You can train a Fast R-CNN detector to detect multiple object classes.

This function requires that you have Deep Learning Toolbox™. It is recommended that you also have Parallel Computing Toolbox™ to use with a CUDA®-enabled NVIDIA® GPU with compute capability 3.0 or higher.

trainedDetector = trainFastRCNNObjectDetector(trainingData,checkpoint,options) resumes training from a detector checkpoint.

trainedDetector = trainFastRCNNObjectDetector(trainingData,detector,options) continues training a detector with additional training data or performs more training iterations to improve detector accuracy.

trainedDetector = trainFastRCNNObjectDetector(___,'RegionProposalFcn',proposalFcn) uses a custom region proposal function, proposalFcn, during training, with any of the previous inputs. If you do not specify a proposal function, then the function uses a variation of the Edge Boxes [2] algorithm.

trainedDetector = trainFastRCNNObjectDetector(___,Name,Value) uses additional options specified by one or more Name,Value pair arguments.

[trainedDetector,info] = trainFastRCNNObjectDetector(___) also returns information on the training progress, such as training loss and accuracy, for each iteration.

Examples

Train Fast R-CNN Stop Sign Detector

Load training data.

data = load('rcnnStopSigns.mat', 'stopSigns', 'fastRCNNLayers');
stopSigns = data.stopSigns;
fastRCNNLayers = data.fastRCNNLayers;

Add the full path to the image files.

stopSigns.imageFilename = fullfile(toolboxdir('vision'),'visiondata', ...
    stopSigns.imageFilename);

Set network training options:

  • Set the CheckpointPath to save detector checkpoints to a temporary directory. Change this to another location if required.

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 1, ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 10, ...
    'CheckpointPath', tempdir);

Train the Fast R-CNN detector. Training can take a few minutes to complete.

frcnn = trainFastRCNNObjectDetector(stopSigns, fastRCNNLayers, options, ...
    'NegativeOverlapRange', [0 0.1], ...
    'PositiveOverlapRange', [0.7 1], ...
    'SmallestImageDimension', 600);
*******************************************************************
Training a Fast R-CNN Object Detector for the following object classes:

* stopSign

--> Extracting region proposals from 27 training images...done.

Training on single GPU.
|=======================================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Mini-batch  |  Mini-batch  |  Base Learning  |
|         |             |   (hh:mm:ss)   |     Loss     |   Accuracy   |     RMSE     |      Rate       |
|=======================================================================================================|
|       1 |           1 |       00:00:00 |       0.0366 |       99.22% |         1.14 |          0.0010 |
|       3 |          50 |       00:00:10 |       0.0171 |      100.00% |         1.09 |          0.0010 |
|       5 |         100 |       00:00:21 |       0.0020 |      100.00% |         0.28 |          0.0010 |
|       8 |         150 |       00:00:32 |       0.0205 |      100.00% |         0.78 |          0.0010 |
|      10 |         200 |       00:00:42 |       0.0098 |      100.00% |         0.36 |          0.0010 |
|      10 |         210 |       00:00:44 |       0.0216 |      100.00% |         0.89 |          0.0010 |
|=======================================================================================================|

Detector training complete.
*******************************************************************

Test the Fast R-CNN detector on a test image.

img = imread('stopSignTest.jpg');

Run the detector.

[bbox, score, label] = detect(frcnn, img);

Display detection results.

detectedImg = insertShape(img, 'Rectangle', bbox);
figure
imshow(detectedImg)
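Optionally, annotate each detection with its class label and score. This sketch uses insertObjectAnnotation with the variables from this example; the label formatting shown is one possible choice.

labelStr = cellstr(string(label) + ": " + string(score));
annotatedImg = insertObjectAnnotation(img, 'rectangle', bbox, labelStr);
figure
imshow(annotatedImg)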

Input Arguments


trainingData — Labeled ground truth images, specified as a table with two or more columns. The first column must contain paths and file names to grayscale or truecolor (RGB) images. The remaining columns must contain bounding boxes related to the corresponding image. Each column represents a single object class, such as a car, dog, flower, or stop sign.

Each bounding box must be in the format [x y width height]. The format specifies the upper-left corner location and size of the object in the corresponding image. The table variable name defines the object class name. To create the ground truth table, use the Image Labeler or Video Labeler app. Boxes smaller than 32-by-32 are not used for training.
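For illustration, here is a minimal sketch of such a table with two object classes. The file names and box coordinates are hypothetical placeholders.

imageFilename = {'image1.jpg'; 'image2.jpg'};
stopSign = {[100 50 40 40]; [75 60 35 35]};   % one [x y width height] box per image
car = {[20 30 100 60]; zeros(0,4)};           % an empty entry means no instances
trainingData = table(imageFilename, stopSign, car);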

network — Network, specified as a SeriesNetwork object, an array of Layer objects, a layerGraph object, or a network name. The network is trained to classify the object classes defined in the trainingData table. The SeriesNetwork, Layer, and layerGraph objects are available in the Deep Learning Toolbox.

  • When you specify the network as a SeriesNetwork, an array of Layer objects, or by the network name, the network is automatically transformed into a Fast R-CNN network by adding an ROI max pooling layer and new classification and regression layers to support object detection. Additionally, the GridSize property of the ROI max pooling layer is set to the output size of the last max pooling layer in the network.

  • The array of Layer objects must contain a classification layer that supports the number of object classes, plus a background class. Use this input type to customize the learning rates of each layer. An example of an array of Layer objects:

    layers = [imageInputLayer([28 28 3])
            convolution2dLayer([5 5],10)
            reluLayer()
            fullyConnectedLayer(10)
            softmaxLayer()
            classificationLayer()];
    

  • When you specify the network as a SeriesNetwork object, a Layer array, or by the network name, the weights for the additional convolution and fully connected layers that the function adds to create the network are initialized to 'narrow-normal'.

  • The network name must be one of the following valid network names. You must also install the corresponding Add-on. A usage sketch follows this list.

    Network Name         Feature Extraction Layer Name    ROI Pooling Layer OutputSize
    alexnet              'relu5'                          [6 6]
    vgg16                'relu5_3'                        [7 7]
    vgg19                'relu5_4'                        [7 7]
    squeezenet           'fire5-concat'                   [14 14]
    resnet18             'res4b_relu'                     [14 14]
    resnet50             'activation_40_relu'             [14 14]
    resnet101            'res4b22_relu'                   [14 14]
    googlenet            'inception_4d-output'            [14 14]
    mobilenetv2          'block_13_expand_relu'           [14 14]
    inceptionv3          'mixed7'                         [17 17]
    inceptionresnetv2    'block17_20_ac'                  [17 17]

    For alexnet, vgg16, vgg19, and squeezenet, the last max pooling layer is replaced by the ROI max pooling layer. For the remaining networks, the ROI pooling layer is inserted after the feature extraction layer.

  • The LayerGraph object must be a valid Fast R-CNN object detection network. You can also use a LayerGraph object to train a custom Fast R-CNN network.

    Tip

    If your network is a DAGNetwork, use the layerGraph function to convert the network to a LayerGraph object. Then, create a custom Fast R-CNN network as described by the Create Fast R-CNN Object Detection Network example.
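For instance, here are minimal sketches of two of these input styles, assuming the ResNet-50 support package is installed and trainingData and options are defined as described on this page.

% Specify a pretrained network by name. The function transforms it into
% a Fast R-CNN network automatically.
detector = trainFastRCNNObjectDetector(trainingData, 'resnet50', options);

% Or extract a LayerGraph from the pretrained DAG network, then customize
% it into a valid Fast R-CNN network before training.
lgraph = layerGraph(resnet50);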

See R-CNN, Fast R-CNN, and Faster R-CNN Basics to learn more about how to create a Fast R-CNN network.

options — Training options, returned by the trainingOptions function from the Deep Learning Toolbox. To specify the solver and other options for network training, use trainingOptions.

Note

trainFastRCNNObjectDetector does not support these training options:

  • The 'training-progress' value for the Plots option

  • The ValidationData, ValidationFrequency, and ValidationPatience options

  • The OutputFcn option

checkpoint — Saved detector checkpoint, specified as a fastRCNNObjectDetector object. To save the detector after every epoch, set the 'CheckpointPath' property when using the trainingOptions function. Saving a checkpoint after every epoch is recommended because network training can take a few hours.

To load a checkpoint for a previously trained detector, load the MAT-file from the checkpoint path. For example, if the 'CheckpointPath' property of options is '/tmp', load a checkpoint MAT-file using:

data = load('/tmp/faster_rcnn_checkpoint__105__2016_11_18__14_25_08.mat');

The name of the MAT-file includes the iteration number and timestamp of when the detector checkpoint was saved. The detector is saved in the detector variable of the file. Pass this file back into the trainFastRCNNObjectDetector function:

frcnn = trainFastRCNNObjectDetector(stopSigns,...
                           data.detector,options);

detector — Previously trained Fast R-CNN object detector, specified as a fastRCNNObjectDetector object.

proposalFcn — Region proposal method, specified as a function handle. If you do not specify a region proposal function, the function implements a variant of the Edge Boxes [2] algorithm. The function must have the form:

[bboxes,scores] = proposalFcn(I)

The input, I, is an image defined in the trainingData table. The function must return rectangular bounding boxes, bboxes, in an m-by-4 array. Each row of bboxes contains a four-element vector, [x y width height]. This vector specifies the upper-left corner and size of a bounding box in pixels. The function must also return a score for each bounding box in an m-by-1 vector. Higher score values indicate that the bounding box is more likely to contain an object. The scores are used to select the strongest n regions, where n is defined by the value of 'NumStrongestRegions'.

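For illustration, here is a minimal sketch of a custom proposal function. The fixed grid and uniform scores are placeholder assumptions, not a practical proposal method.

function [bboxes, scores] = myProposalFcn(I)
% Propose fixed-size boxes on a coarse grid (illustration only).
[h, w, ~] = size(I);
boxSize = 64;
step = 32;
[x, y] = meshgrid(1:step:max(w-boxSize,1), 1:step:max(h-boxSize,1));
bboxes = [x(:), y(:), repmat(boxSize, numel(x), 2)];  % m-by-4, [x y width height]
scores = ones(size(bboxes, 1), 1);                    % m-by-1 confidence values
end

Pass the handle when training:

detector = trainFastRCNNObjectDetector(trainingData, network, options, ...
    'RegionProposalFcn', @myProposalFcn);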

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'PositiveOverlapRange',[0.75 1]

Bounding box overlap ratios for positive training samples, specified as the comma-separated pair consisting of 'PositiveOverlapRange' and a two-element vector. The vector contains values in the range [0,1]. Region proposals that overlap with ground truth bounding boxes within the specified range are used as positive training samples.

The overlap ratio used for both the PositiveOverlapRange and NegativeOverlapRange is defined as

area(A ∩ B) / area(A ∪ B)

where A and B are bounding boxes.
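As a quick check, this ratio matches what the bboxOverlapRatio function computes with its default 'Union' ratio type. The boxes here are arbitrary examples.

A = [10 10 50 50];
B = [30 30 50 50];
iou = bboxOverlapRatio(A, B)   % intersection area divided by union area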

Bounding box overlap ratios for negative training samples, specified as the comma-separated pair consisting of 'NegativeOverlapRange' and a two-element vector. The vector contains values in the range [0,1]. Region proposals that overlap with the ground truth bounding boxes within the specified range are used as negative training samples.

The overlap ratio used for both the PositiveOverlapRange and NegativeOverlapRange is defined as

area(A ∩ B) / area(A ∪ B)

where A and B are bounding boxes.

Maximum number of strongest region proposals to use for generating training samples, specified as the comma-separated pair consisting of 'NumStrongestRegions' and a positive integer. Reduce this value to speed up processing time at the cost of training accuracy. To use all region proposals, set this value to Inf.

Number of region proposals to randomly sample from each training image, specified as the comma-separated pair consisting of 'NumRegionsToSample' and a positive integer. Reduce the number of regions to sample to reduce memory usage and speed up training. Reducing the value can also decrease training accuracy.

Length of the smallest image dimension, either width or height, specified as the comma-separated pair consisting of 'SmallestImageDimension' and a positive integer. Training images are resized such that the length of the shortest dimension is equal to the specified integer. By default, training images are not resized. Resizing training images helps reduce computational costs and memory use when training images are large. Typical values range from 400 to 600 pixels.

Freeze batch normalization during training, specified as the comma-separated pair consisting of 'FreezeBatchNormalization' and true or false. The value indicates whether the batch normalization layers of the network are frozen during training. Set this value to true if you are training with a small mini-batch size. Small batch sizes result in poor estimates of the batch mean and variance that are required for effective batch normalization.

If you do not specify a value for 'FreezeBatchNormalization', the function sets the property to:

  • true if the 'MiniBatchSize' name-value argument for the trainingOptions function is less than 8.

  • false if the 'MiniBatchSize' name-value argument for the trainingOptions function is greater than or equal to 8.

You must specify a value for 'FreezeBatchNormalization' to override this default behavior.
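For example, this sketch explicitly overrides the default, with trainingData, network, and options defined as described on this page.

detector = trainFastRCNNObjectDetector(trainingData, network, options, ...
    'FreezeBatchNormalization', false);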

Output Arguments


trainedDetector — Trained Fast R-CNN object detector, returned as a fastRCNNObjectDetector object.

info — Training information, returned as a structure with the following fields. Each field is a numeric vector with one element per training iteration. Values that have not been calculated at a specific iteration are represented by NaN.

  • TrainingLoss — Training loss at each iteration. This is the combination of the classification and regression loss used to train the Fast R-CNN network.

  • TrainingAccuracy — Training set accuracy at each iteration

  • TrainingRMSE — Training root mean square error (RMSE) for the box regression layer

  • BaseLearnRate — Learning rate at each iteration
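For example, this sketch plots the training loss after training completes, using inputs defined as described on this page.

[detector, info] = trainFastRCNNObjectDetector(trainingData, network, options);
figure
plot(info.TrainingLoss)
xlabel('Iteration')
ylabel('Training loss')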

Tips

  • To accelerate data preprocessing for training, trainFastRCNNObjectDetector automatically creates and uses a parallel pool based on your parallel preference settings. For more details about setting these preferences, see parallel preference settings. Using parallel computing preferences requires Parallel Computing Toolbox.

  • VGG-16, VGG-19, ResNet-101, and Inception-ResNet-v2 are large models. Training with large images can produce "Out of Memory" errors. To mitigate these errors, try one or more of these options:

      • Resize the training images by using the 'SmallestImageDimension' argument.

      • Reduce the number of region proposals sampled from each image by using the 'NumRegionsToSample' argument.

  • This function supports transfer learning. When you input a network by name, such as 'resnet50', then the function automatically transforms the network into a valid Fast R-CNN network model based on the pretrained resnet50 model. Alternatively, manually specify a custom Fast R-CNN network by using the LayerGraph extracted from a pretrained DAG network. For more details, see Create Fast R-CNN Object Detection Network.

  • This table describes how to transform each named network into a Fast R-CNN network. The feature extraction layer name specifies which layer is processed by the ROI pooling layer. The ROI output size specifies the size of the feature maps output by the ROI pooling layer.

    Network Name         Feature Extraction Layer Name    ROI Pooling Layer OutputSize
    alexnet              'relu5'                          [6 6]
    vgg16                'relu5_3'                        [7 7]
    vgg19                'relu5_4'                        [7 7]
    squeezenet           'fire5-concat'                   [14 14]
    resnet18             'res4b_relu'                     [14 14]
    resnet50             'activation_40_relu'             [14 14]
    resnet101            'res4b22_relu'                   [14 14]
    googlenet            'inception_4d-output'            [14 14]
    mobilenetv2          'block_13_expand_relu'           [14 14]
    inceptionv3          'mixed7'                         [17 17]
    inceptionresnetv2    'block17_20_ac'                  [17 17]

    For alexnet, vgg16, vgg19, and squeezenet, the last max pooling layer is replaced by the ROI max pooling layer. For the remaining networks, the ROI pooling layer is inserted after the feature extraction layer.

    To modify and transform a network into a Fast R-CNN network, see Design an R-CNN, Fast R-CNN, and a Faster R-CNN Model.

  • Use the trainingOptions function to enable or disable verbose printing.
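For example, this sketch disables verbose printing by setting the Verbose option of trainingOptions.

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 1, ...
    'InitialLearnRate', 1e-3, ...
    'Verbose', false);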

References

[1] Girshick, Ross. "Fast R-CNN." Proceedings of the IEEE International Conference on Computer Vision. 2015.

[2] Zitnick, C. Lawrence, and Piotr Dollár. "Edge Boxes: Locating Object Proposals from Edges." Computer Vision – ECCV 2014. Springer International Publishing, 2014, pp. 391–405.


Introduced in R2017a