
batchNormalizationLayer

Batch normalization layer

Description

A batch normalization layer normalizes a mini-batch of data across all observations for each channel independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.
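
For example, this sketch reproduces the operation for a single channel of a mini-batch (an illustration of the computation only, not the layer implementation; the values of gamma, beta, and epsilon are placeholders):

x = randn(100,1);                            % mini-batch of activations for one channel
epsilon = 1e-5;                              % stability constant (the Epsilon property)
gamma = 1.5;                                 % learnable scale factor
beta = 0.1;                                  % learnable offset

mu = mean(x);                                % mini-batch mean
sigma2 = var(x,1);                           % mini-batch variance (normalized by N)
xhat = (x - mu) ./ sqrt(sigma2 + epsilon);   % normalize
y = gamma .* xhat + beta;                    % scale and shift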

Creation

Description

layer = batchNormalizationLayer creates a batch normalization layer.


layer = batchNormalizationLayer(Name,Value) creates a batch normalization layer and sets the optional TrainedMean, TrainedVariance, Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pairs. For example, batchNormalizationLayer('Name','batchnorm') creates a batch normalization layer with the name 'batchnorm'.

Properties


Batch Normalization

TrainedMean

Mean statistic used for prediction, specified as a numeric vector of per-channel mean values.

Depending on the type of layer input, the trainnet, trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically reshape this property to have one of the following sizes:

Layer Input                 Property Size
feature input               NumChannels-by-1
vector sequence input       NumChannels-by-1
1-D image input             1-by-NumChannels
1-D image sequence input    1-by-NumChannels
2-D image input             1-by-1-by-NumChannels
2-D image sequence input    1-by-1-by-NumChannels
3-D image input             1-by-1-by-1-by-NumChannels
3-D image sequence input    1-by-1-by-1-by-NumChannels

If the BatchNormalizationStatistics training option is 'moving', then the software approximates the batch normalization statistics during training using a running estimate and, after training, sets the TrainedMean and TrainedVariance properties to the latest values of the moving estimates of the mean and variance, respectively.

If the BatchNormalizationStatistics training option is 'population', then after network training finishes, the software passes through the data once more and sets the TrainedMean and TrainedVariance properties to the mean and variance computed from the entire training data set, respectively.

The layer uses TrainedMean and TrainedVariance to normalize the input during prediction.

Data Types: single | double
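
You can select which statistics the software computes by setting the BatchNormalizationStatistics training option. For example (a minimal sketch; the solver and all other training options are placeholders):

options = trainingOptions('adam', ...
    'BatchNormalizationStatistics','moving');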

TrainedVariance

Variance statistic used for prediction, specified as a numeric vector of per-channel variance values.

Depending on the type of layer input, the trainnet, trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically reshape this property to have one of the following sizes:

Layer Input                 Property Size
feature input               NumChannels-by-1
vector sequence input       NumChannels-by-1
1-D image input             1-by-NumChannels
1-D image sequence input    1-by-NumChannels
2-D image input             1-by-1-by-NumChannels
2-D image sequence input    1-by-1-by-NumChannels
3-D image input             1-by-1-by-1-by-NumChannels
3-D image sequence input    1-by-1-by-1-by-NumChannels

If the BatchNormalizationStatistics training option is 'moving', then the software approximates the batch normalization statistics during training using a running estimate and, after training, sets the TrainedMean and TrainedVariance properties to the latest values of the moving estimates of the mean and variance, respectively.

If the BatchNormalizationStatistics training option is 'population', then after network training finishes, the software passes through the data once more and sets the TrainedMean and TrainedVariance properties to the mean and variance computed from the entire training data set, respectively.

The layer uses TrainedMean and TrainedVariance to normalize the input during prediction.

Data Types: single | double

Epsilon

Constant to add to the mini-batch variances, specified as a positive scalar.

The software adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.

Before R2023a: Epsilon must be greater than or equal to 1e-5.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
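
For example, this sketch creates a layer with a larger stability constant (the value is arbitrary):

layer = batchNormalizationLayer('Epsilon',1e-4);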

NumChannels

This property is read-only.

Number of input channels, specified as one of the following:

  • 'auto' — Automatically determine the number of input channels at training time.

  • Positive integer — Configure the layer for the specified number of input channels. NumChannels and the number of channels in the layer input data must match. For example, if the input is an RGB image, then NumChannels must be 3. If the input is the output of a convolutional layer with 16 filters, then NumChannels must be 16.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
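
For example, in this sketch the number of input channels is resolved automatically at training time because the preceding convolutional layer has 16 filters:

layers = [
    convolution2dLayer(3,16)    % 16 filters, so the next layer sees 16 channels
    batchNormalizationLayer     % NumChannels is 'auto' and resolves to 16
    ];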

Parameters and Initialization

ScaleInitializer

Function to initialize the channel scale factors, specified as one of the following:

  • 'ones' – Initialize the channel scale factors with ones.

  • 'zeros' – Initialize the channel scale factors with zeros.

  • 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

  • Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel scale factors when the Scale property is empty.

Data Types: char | string | function_handle

OffsetInitializer

Function to initialize the channel offsets, specified as one of the following:

  • 'zeros' – Initialize the channel offsets with zeros.

  • 'ones' – Initialize the channel offsets with ones.

  • 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and standard deviation of 0.01.

  • Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel offsets when the Offset property is empty.

Data Types: char | string | function_handle
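
For example, this sketch supplies custom initializers as function handles (initScale and initOffset are hypothetical names; the distributions are arbitrary):

initScale  = @(sz) 1 + 0.01*randn(sz);   % scale factors near 1
initOffset = @(sz) zeros(sz);            % offsets at 0

layer = batchNormalizationLayer( ...
    'ScaleInitializer',initScale, ...
    'OffsetInitializer',initOffset);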

Scale

Channel scale factors γ, specified as a numeric array.

The channel scale factors are learnable parameters. When you train a network using the trainnet or trainNetwork function, or initialize a dlnetwork object, if Scale is nonempty, then the software uses the Scale property as the initial value. If Scale is empty, then the software uses the initializer specified by ScaleInitializer.

Depending on the type of layer input, the trainnet, trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically reshape this property to have one of the following sizes:

Layer Input                 Property Size
feature input               NumChannels-by-1
vector sequence input       NumChannels-by-1
1-D image input             1-by-NumChannels
1-D image sequence input    1-by-NumChannels
2-D image input             1-by-1-by-NumChannels
2-D image sequence input    1-by-1-by-NumChannels
3-D image input             1-by-1-by-1-by-NumChannels
3-D image sequence input    1-by-1-by-1-by-NumChannels

Data Types: single | double

Offset

Channel offsets β, specified as a numeric vector.

The channel offsets are learnable parameters. When you train a network using the trainnet or trainNetwork function, or initialize a dlnetwork object, if Offset is nonempty, then the software uses the Offset property as the initial value. If Offset is empty, then the software uses the initializer specified by OffsetInitializer.

Depending on the type of layer input, the trainnet, trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically reshape this property to have one of the following sizes:

Layer Input                 Property Size
feature input               NumChannels-by-1
vector sequence input       NumChannels-by-1
1-D image input             1-by-NumChannels
1-D image sequence input    1-by-NumChannels
2-D image input             1-by-1-by-NumChannels
2-D image sequence input    1-by-1-by-NumChannels
3-D image input             1-by-1-by-1-by-NumChannels
3-D image sequence input    1-by-1-by-1-by-NumChannels

Data Types: single | double
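
For example, this sketch seeds the learnable parameters with explicit initial values, such as values taken from a pretrained network (shown for 2-D image input with 16 channels; the values are placeholders):

numChannels = 16;
layer = batchNormalizationLayer( ...
    'Scale',ones(1,1,numChannels), ...     % 1-by-1-by-NumChannels for 2-D image input
    'Offset',zeros(1,1,numChannels));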

MeanDecay

Decay value for the moving mean computation, specified as a numeric scalar between 0 and 1.

When you use the trainNetwork or trainnet function and the BatchNormalizationStatistics training option is 'moving', at each iteration, the layer updates the moving mean value using

$$\mu^{*} = \lambda_{\mu}\,\hat{\mu} + \left(1 - \lambda_{\mu}\right)\mu,$$

where μ* denotes the updated mean, λ_μ denotes the mean decay value, μ̂ denotes the mean of the layer input, and μ denotes the latest value of the moving mean.

When you use the trainNetwork or trainnet function and the BatchNormalizationStatistics training option is 'population', this option has no effect.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

VarianceDecay

Decay value for the moving variance computation, specified as a numeric scalar between 0 and 1.

When you use the trainNetwork or trainnet function and the BatchNormalizationStatistics training option is 'moving', at each iteration, the layer updates the moving variance value using

$$\sigma^{2*} = \lambda_{\sigma^{2}}\,\widehat{\sigma^{2}} + \left(1 - \lambda_{\sigma^{2}}\right)\sigma^{2},$$

where σ²* denotes the updated variance, λ_σ² denotes the variance decay value, σ̂² denotes the variance of the layer input, and σ² denotes the latest value of the moving variance.

When you use the trainNetwork or trainnet function and the BatchNormalizationStatistics training option is 'population', this option has no effect.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
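
Both decay formulas can be reproduced directly. This sketch mirrors one update step of the running statistics (an illustration of the formulas above, not the layer implementation; all values are placeholders):

lambdaMu = 0.1;                % mean decay value (MeanDecay)
lambdaS2 = 0.1;                % variance decay value (VarianceDecay)
muMoving = 0;                  % latest moving mean estimate
s2Moving = 1;                  % latest moving variance estimate

x = randn(128,1);              % mini-batch of activations for one channel
muBatch = mean(x);             % mean of the layer input
s2Batch = var(x,1);            % variance of the layer input

muMoving = lambdaMu*muBatch + (1 - lambdaMu)*muMoving;   % updated moving mean
s2Moving = lambdaS2*s2Batch + (1 - lambdaS2)*s2Moving;   % updated moving variance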

Learning Rate and Regularization

ScaleLearnRateFactor

Learning rate factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OffsetLearnRateFactor

Learning rate factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

ScaleL2Factor

L2 regularization factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OffsetL2Factor

L2 regularization factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
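
For example, this sketch doubles the learning rate for the scale factors and turns off L2 regularization for the offsets (the factor values are arbitrary):

layer = batchNormalizationLayer( ...
    'ScaleLearnRateFactor',2, ...
    'OffsetL2Factor',0);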

Layer

Name

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet, trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with the name "".

The BatchNormalizationLayer object stores this property as a character vector.

Data Types: char | string

NumInputs

This property is read-only.

Number of inputs to the layer, returned as 1. This layer accepts a single input only.

Data Types: double

InputNames

This property is read-only.

Input names, returned as {'in'}. This layer accepts a single input only.

Data Types: cell

NumOutputs

This property is read-only.

Number of outputs from the layer, returned as 1. This layer has a single output only.

Data Types: double

OutputNames

This property is read-only.

Output names, returned as {'out'}. This layer has a single output only.

Data Types: cell

Examples


Create a batch normalization layer with the name 'BN1'.

layer = batchNormalizationLayer('Name','BN1')
layer = 
  BatchNormalizationLayer with properties:

               Name: 'BN1'
        NumChannels: 'auto'

   Hyperparameters
          MeanDecay: 0.1000
      VarianceDecay: 0.1000
            Epsilon: 1.0000e-05

   Learnable Parameters
             Offset: []
              Scale: []

   State Parameters
        TrainedMean: []
    TrainedVariance: []

Use the properties method to see a list of all properties.

Include batch normalization layers in a Layer array.

layers = [
    imageInputLayer([32 32 3]) 
  
    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer   
    
    maxPooling2dLayer(2,'Stride',2)
    
    convolution2dLayer(3,32,'Padding',1)
    batchNormalizationLayer
    reluLayer
          
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer
    ]
layers = 
  11x1 Layer array with layers:

     1   ''   Image Input             32x32x3 images with 'zerocenter' normalization
     2   ''   2-D Convolution         16 3x3 convolutions with stride [1  1] and padding [1  1  1  1]
     3   ''   Batch Normalization     Batch normalization
     4   ''   ReLU                    ReLU
     5   ''   2-D Max Pooling         2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     6   ''   2-D Convolution         32 3x3 convolutions with stride [1  1] and padding [1  1  1  1]
     7   ''   Batch Normalization     Batch normalization
     8   ''   ReLU                    ReLU
     9   ''   Fully Connected         10 fully connected layer
    10   ''   Softmax                 softmax
    11   ''   Classification Output   crossentropyex

Algorithms

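As a sketch of the operation described in [1], for activations xi in a mini-batch of size m, the layer computes the following for each channel:

$$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_B^{2} = \frac{1}{m}\sum_{i=1}^{m} \left(x_i - \mu_B\right)^{2},$$

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^{2} + \epsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta.$$

During training, μ_B and σ_B² are the statistics of the current mini-batch; during prediction, the layer uses TrainedMean and TrainedVariance in their place.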

References

[1] Ioffe, Sergey, and Christian Szegedy. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” Preprint, submitted March 2, 2015. https://arxiv.org/abs/1502.03167.

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Version History

Introduced in R2017b
