
estimateNetworkMetrics

Estimate metrics for the layers of a neural network

    Description

    These are the estimated metrics:

    • Number of learnables — Total number of weights and biases

    • Number of operations — Total number of multiplications and additions

    • Parameter memory — Memory required to store all of the learnables

    • Number of MACs — Number of multiply-accumulate operations

    • Arithmetic intensity — Amount of reuse of the data fetched from memory. A high arithmetic intensity indicates more data reuse.
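For a 2-D convolution layer, the first four metrics follow directly from the layer shape. The sketch below illustrates the standard formulas (it is not the function's actual implementation); the kernel size, channel counts, and output size used here are the known dimensions of SqueezeNet's conv1 layer (3×3 kernel, stride 2, 227×227 RGB input):

```python
# Sketch of the metric formulas for a 2-D convolution layer.
# Dimensions are SqueezeNet's conv1: 3x3 kernel, 3 input channels,
# 64 filters, stride 2 on a 227x227 input -> 113x113 output.
kh, kw, c_in, c_out = 3, 3, 3, 64
out_h, out_w = 113, 113

learnables = kh * kw * c_in * c_out + c_out      # weights + biases
param_memory_mb = learnables * 4 / 2**20         # single precision, 4 bytes each
macs = out_h * out_w * kh * kw * c_in * c_out    # one MAC per weight per output pixel
operations = 2 * macs                            # each MAC = 1 multiply + 1 add

print(learnables)        # 1792
print(param_memory_mb)   # 0.0068359375
print(macs)              # 22064832  (~2.2065e+07)
print(operations)        # 44129664  (~4.413e+07)
```

These values match the conv1 row of the table in the first example below.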


    estNet = estimateNetworkMetrics(net) returns estimated metrics for each layer of the neural network net. The data is returned as a table.


    [estNet1,estNet2] = estimateNetworkMetrics(net1,net2) returns estimated metrics for the layers of each input neural network. The metrics for each network are returned as a separate table.

    Examples


    This example shows how to estimate the metrics for a neural network.

    Load the Pretrained Network

    Load the pretrained SqueezeNet network. If you do not have the required support packages installed, the software provides a download link.

    load squeezenetmerch
    net
    net = 
      DAGNetwork with properties:
    
             Layers: [68×1 nnet.cnn.layer.Layer]
        Connections: [75×2 table]
         InputNames: {'data'}
        OutputNames: {'new_classoutput'}
    
    

    Estimate Network Layer Metrics

    Use the estimateNetworkMetrics function to show the metrics for each layer in your network.

    estNet = estimateNetworkMetrics(net)
    estNet=26×8 table
            LayerName           LayerType      NumberOfLearnables    NumberOfOperations    ParameterMemory (MB)    RuntimeMemory (MB)    NumberOfMACs    ArithmeticIntensity
        __________________    _____________    __________________    __________________    ____________________    __________________    ____________    ___________________
    
        "conv1"               "Convolution"           1792                4.413e+07             0.0068359                505.02           2.2065e+07           25.739       
        "fire2-squeeze1x1"    "Convolution"           1040               6.4225e+06             0.0039673                  73.5           3.2113e+06           12.748       
        "fire2-expand1x1"     "Convolution"           1088               6.4225e+06             0.0041504                  73.5           3.2113e+06           12.748       
        "fire2-expand3x3"     "Convolution"           9280               5.7803e+07                0.0354                 661.5           2.8901e+07           111.12       
        "fire3-squeeze1x1"    "Convolution"           2064               1.2845e+07             0.0078735                   147           6.4225e+06           14.158       
        "fire3-expand1x1"     "Convolution"           1088               6.4225e+06             0.0041504                  73.5           3.2113e+06           12.748       
        "fire3-expand3x3"     "Convolution"           9280               5.7803e+07                0.0354                 661.5           2.8901e+07           111.12       
        "fire4-squeeze1x1"    "Convolution"           4128               6.4225e+06              0.015747                  73.5           3.2113e+06           24.791       
        "fire4-expand1x1"     "Convolution"           4224               6.4225e+06              0.016113                  73.5           3.2113e+06           24.791       
        "fire4-expand3x3"     "Convolution"          36992               5.7803e+07               0.14111                 661.5           2.8901e+07           178.07       
        "fire5-squeeze1x1"    "Convolution"           8224               1.2845e+07              0.031372                   147           6.4225e+06           27.449       
        "fire5-expand1x1"     "Convolution"           4224               6.4225e+06              0.016113                  73.5           3.2113e+06           24.791       
        "fire5-expand3x3"     "Convolution"          36992               5.7803e+07               0.14111                 661.5           2.8901e+07           178.07       
        "fire6-squeeze1x1"    "Convolution"          12336               4.8169e+06              0.047058                55.125           2.4084e+06            33.51       
        "fire6-expand1x1"     "Convolution"           9408               3.6127e+06              0.035889                41.344           1.8063e+06           32.109       
        "fire6-expand3x3"     "Convolution"          83136               3.2514e+07               0.31714                372.09           1.6257e+07           125.07       
          ⋮
    
    
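The NumberOfLearnables column can be checked against the layer shapes. As a hedged illustration, the channel counts below are the standard SqueezeNet fire2 configuration (64 input channels, 16 squeeze filters, 64 + 64 expand filters) rather than values read from the network object:

```python
# Reproduce the NumberOfLearnables column for SqueezeNet's fire2 module.
def conv_learnables(kh, kw, c_in, c_out):
    """Weights plus biases of a 2-D convolution layer."""
    return kh * kw * c_in * c_out + c_out

print(conv_learnables(1, 1, 64, 16))   # fire2-squeeze1x1 -> 1040
print(conv_learnables(1, 1, 16, 64))   # fire2-expand1x1  -> 1088
print(conv_learnables(3, 3, 16, 64))   # fire2-expand3x3  -> 9280
```

All three values match the corresponding rows of the table above.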

    This example shows how to estimate the metrics for multiple neural networks.

    Load the Pretrained Networks

    Load the pretrained AlexNet and GoogLeNet networks. If you do not have the required support packages installed, the software provides a download link.

    net1 = alexnet
    net1 = 
      SeriesNetwork with properties:
    
             Layers: [25x1 nnet.cnn.layer.Layer]
         InputNames: {'data'}
        OutputNames: {'output'}
    
    
    net2 = googlenet
    net2 = 
      DAGNetwork with properties:
    
             Layers: [144x1 nnet.cnn.layer.Layer]
        Connections: [170x2 table]
         InputNames: {'data'}
        OutputNames: {'output'}
    
    

    Estimate Multiple Network Layer Metrics

    Use the estimateNetworkMetrics function to show the metrics for each layer in your networks.

    [estNet1,estNet2] = estimateNetworkMetrics(net1,net2)
    estNet1=8×7 table
        LayerName          LayerType          NumberOfLearnables    NumberOfOperations    ParameterMemory (MB)    NumberOfMACs    ArithmeticIntensity
        _________    _____________________    __________________    __________________    ____________________    ____________    ___________________
    
         "conv1"     "Convolution"                     34944            2.1083e+08               0.1333            1.0542e+08            315.31      
         "conv2"     "Grouped Convolution"        3.0746e+05             4.479e+08               1.1729            2.2395e+08            397.21      
         "conv3"     "Convolution"                8.8512e+05            2.9904e+08               3.3765            1.4952e+08            150.59      
         "conv4"     "Grouped Convolution"        6.6394e+05            2.2428e+08               2.5327            1.1214e+08            141.35      
         "conv5"     "Grouped Convolution"        4.4262e+05            1.4952e+08               1.6885             7.476e+07             135.8      
         "fc6"       "Fully Connected"            3.7753e+07            7.5497e+07               144.02            3.7749e+07           0.99965      
         "fc7"       "Fully Connected"            1.6781e+07            3.3554e+07               64.016            1.6777e+07           0.99951      
         "fc8"       "Fully Connected"             4.097e+06             8.192e+06               15.629             4.096e+06           0.99876      
    
    
    estNet2=58×7 table
                LayerName              LayerType      NumberOfLearnables    NumberOfOperations    ParameterMemory (MB)    NumberOfMACs    ArithmeticIntensity
        _________________________    _____________    __________________    __________________    ____________________    ____________    ___________________
    
        "conv1-7x7_s2"               "Convolution"              9472            2.3603e+08              0.036133           1.1801e+08           138.86       
        "conv2-3x3_reduce"           "Convolution"              4160             2.569e+07              0.015869           1.2845e+07           31.677       
        "conv2-3x3"                  "Convolution"        1.1078e+05            6.9363e+08               0.42261           3.4682e+08            379.7       
        "inception_3a-1x1"           "Convolution"             12352            1.9268e+07              0.047119           9.6338e+06           45.231       
        "inception_3a-3x3_reduce"    "Convolution"             18528            2.8901e+07              0.070679           1.4451e+07            59.17       
        "inception_3a-3x3"           "Convolution"        1.1072e+05            1.7341e+08               0.42236           8.6704e+07           302.94       
        "inception_3a-5x5_reduce"    "Convolution"              3088            4.8169e+06               0.01178           2.4084e+06           14.496       
        "inception_3a-5x5"           "Convolution"             12832             2.007e+07               0.04895           1.0035e+07           198.98       
        "inception_3a-pool_proj"     "Convolution"              6176            9.6338e+06               0.02356           4.8169e+06           26.501       
        "inception_3b-1x1"           "Convolution"             32896             5.138e+07               0.12549            2.569e+07           76.957       
        "inception_3b-3x3_reduce"    "Convolution"             32896             5.138e+07               0.12549            2.569e+07           76.957       
        "inception_3b-3x3"           "Convolution"        2.2138e+05            3.4682e+08               0.84448           1.7341e+08           367.34       
        "inception_3b-5x5_reduce"    "Convolution"              8224            1.2845e+07              0.031372           6.4225e+06           27.449       
        "inception_3b-5x5"           "Convolution"             76896            1.2042e+08               0.29333           6.0211e+07           339.88       
        "inception_3b-pool_proj"     "Convolution"             16448             2.569e+07              0.062744           1.2845e+07           48.061       
        "inception_4a-1x1"           "Convolution"             92352            3.6127e+07               0.35229           1.8063e+07           80.686       
          ⋮
    
    
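The fully connected rows of the AlexNet table above can be reproduced the same way. The layer sizes used here are AlexNet's published dimensions (9216 → 4096 → 4096 → 1000), stated as an assumption rather than read from the network object. Note that the table shows arithmetic intensity near 1 for these layers: each weight participates in a single MAC, so there is almost no data reuse.

```python
# Reproduce the fully connected rows (fc6, fc7, fc8) of the AlexNet table.
def fc_metrics(n_in, n_out):
    learnables = n_in * n_out + n_out    # weight matrix + bias vector
    macs = n_in * n_out                  # one MAC per weight
    memory_mb = learnables * 4 / 2**20   # single precision, 4 bytes each
    return learnables, macs, memory_mb

print(fc_metrics(9216, 4096))   # fc6: 37752832 learnables, 37748736 MACs
print(fc_metrics(4096, 4096))   # fc7: 16781312 learnables
print(fc_metrics(4096, 1000))   # fc8: 4097000 learnables, ~15.629 MB
```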

    Input Arguments


    Pretrained neural network, specified as a DAGNetwork, SeriesNetwork, yolov2ObjectDetector (Computer Vision Toolbox), or ssdObjectDetector (Computer Vision Toolbox) object. You can supply one or more pretrained networks as input arguments.

    Quantization of yolov2ObjectDetector (Computer Vision Toolbox) and ssdObjectDetector (Computer Vision Toolbox) networks requires a GPU Coder™ license.

    Version History

    Introduced in R2022a