
globalMaxPooling2dLayer

Global max pooling layer

Since R2020a

Description

A 2-D global max pooling layer performs downsampling by computing the maximum over the height and width dimensions of the input.
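
A minimal sketch of the operation on a numeric array, assuming an H-by-W-by-C input (this illustrates the computation only, not the layer implementation itself):

X = rand(24,24,20);     % example height-by-width-by-channel activation
Y = max(X,[],[1 2]);    % maximum over the spatial (height and width) dimensions
size(Y)                 % 1-by-1-by-20, one maximum per channel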

Creation

Description

layer = globalMaxPooling2dLayer creates a global max pooling layer.


layer = globalMaxPooling2dLayer('Name',name) sets the optional Name property.

Properties


Name

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

The GlobalMaxPooling2DLayer object stores this property as a character vector.

Data Types: char | string

NumInputs

This property is read-only.

Number of inputs to the layer, returned as 1. This layer accepts a single input only.

Data Types: double

InputNames

This property is read-only.

Input names, returned as {'in'}. This layer accepts a single input only.

Data Types: cell

NumOutputs

This property is read-only.

Number of outputs from the layer, returned as 1. This layer has a single output only.

Data Types: double

OutputNames

This property is read-only.

Output names, returned as {'out'}. This layer has a single output only.

Data Types: cell

Object Functions

Examples


Create a global max pooling layer with the name 'gmp1'.

layer = globalMaxPooling2dLayer('Name','gmp1')
layer = 
  GlobalMaxPooling2DLayer with properties:

    Name: 'gmp1'

Include a global max pooling layer in a Layer array.

layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    reluLayer
    globalMaxPooling2dLayer
    fullyConnectedLayer(10)
    softmaxLayer]
layers = 
  6x1 Layer array with layers:

     1   ''   Image Input              28x28x1 images with 'zerocenter' normalization
     2   ''   2-D Convolution          20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   ReLU                     ReLU
     4   ''   2-D Global Max Pooling   2-D global max pooling
     5   ''   Fully Connected          10 fully connected layer
     6   ''   Softmax                  softmax

Tips

  • In an image classification network, you can use a globalMaxPooling2dLayer before the final fully connected layer to reduce the size of the activations without sacrificing performance. The reduced size of the activations means that the downstream fully connected layers will have fewer weights, reducing the size of your network.

  • You can use a globalMaxPooling2dLayer towards the end of a classification network instead of a fullyConnectedLayer. Since global pooling layers have no learnable parameters, they can be less prone to overfitting and can reduce the size of the network. These networks can also be more robust to spatial translations of input data. You can also replace a fully connected layer with a globalAveragePooling2dLayer instead. Whether a globalAveragePooling2dLayer or a globalMaxPooling2dLayer is more appropriate depends on your data set.

    To use a global max pooling layer instead of a fully connected layer, the number of channels in the input to globalMaxPooling2dLayer must match the number of classes in the classification problem, as in the sketch after these tips.
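
A hedged sketch of this pattern, assuming a 10-class problem (the variable numClasses and the layer sizes are illustrative, not prescribed by the layer):

numClasses = 10;
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,numClasses)    % output has numClasses channels
    reluLayer
    globalMaxPooling2dLayer             % 1-by-1-by-numClasses activations
    softmaxLayer];

Here the convolution layer produces numClasses channels, so the pooled output provides one value per class to the softmax layer without a fully connected layer and without adding learnable parameters.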

Algorithms


Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Version History

Introduced in R2020a