CompactRegressionNeuralNetwork
Description
CompactRegressionNeuralNetwork is a compact version of a RegressionNeuralNetwork model object. The compact model does not include the data used for training the regression model. Therefore, you cannot perform some tasks, such as cross-validation, using the compact model. Use a compact model for tasks such as predicting the response values of new data.
Creation
Create a CompactRegressionNeuralNetwork object from a full RegressionNeuralNetwork model object by using compact.
Properties
Neural Network Properties
LayerSizes
— Sizes of fully connected layers
positive integer vector
This property is read-only.
Sizes of the fully connected layers in the neural network model, returned as a positive integer vector. The ith element of LayerSizes is the number of outputs in the ith fully connected layer of the neural network model.
LayerSizes does not include the size of the final fully connected layer. This layer always has one output for each response variable.
Data Types: single | double
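As a quick illustration (a sketch using randomly generated data, not taken from the documentation example), you can confirm that LayerSizes lists only the hidden fully connected layers:
rng(0)
X = rand(100,4);
y = X*[1;-2;3;0.5] + 0.1*randn(100,1);
Mdl  = fitrnet(X,y,"LayerSizes",[10 5]);   % two fully connected hidden layers
CMdl = compact(Mdl);
CMdl.LayerSizes               % [10 5]; the final one-output layer is not listed
numel(CMdl.LayerWeights)      % 3, because LayerWeights does include the final layer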
LayerWeights
— Learned layer weights
cell array
This property is read-only.
Learned layer weights for the fully connected layers, returned as a cell array. Entry i in the cell array corresponds to the layer weights for the fully connected layer i. For example, Mdl.LayerWeights{1} returns the weights for the first fully connected layer of the model Mdl.
LayerWeights includes the weights for the final fully connected layer.
Data Types: cell
LayerBiases
— Learned layer biases
cell array
This property is read-only.
Learned layer biases for the fully connected layers, returned as a cell array. Entry i in the cell array corresponds to the layer biases for the fully connected layer i. For example, Mdl.LayerBiases{1} returns the biases for the first fully connected layer of the model Mdl.
LayerBiases includes the biases for the final fully connected layer.
Data Types: cell
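The weights and biases, together with the activation functions, are enough to reproduce a prediction by hand. The following is a minimal sketch using randomly generated data, assuming default 'relu' activations, no standardization, and that each weight matrix is stored with one row per layer output (check size(CMdl.LayerWeights{1}) to confirm the orientation):
rng(0)
X = rand(100,4);
y = X*[1;-2;3;0.5] + 0.1*randn(100,1);
CMdl = compact(fitrnet(X,y));    % default network: one hidden layer, 'relu' activations

x = X(1,:);                      % one observation
a = x';
for k = 1:numel(CMdl.LayerWeights)-1
    a = max(CMdl.LayerWeights{k}*a + CMdl.LayerBiases{k},0);    % ReLU hidden layers
end
yManual  = CMdl.LayerWeights{end}*a + CMdl.LayerBiases{end};    % final layer, no activation
yPredict = predict(CMdl,x);      % should match yManual up to floating-point error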
Activations
— Activation functions for fully connected layers
'relu' | 'tanh' | 'sigmoid' | 'none' | cell array of character vectors
This property is read-only.
Activation functions for the fully connected layers of the neural network model, returned as a character vector or cell array of character vectors with values from this table.
Value | Description |
---|---|
"relu" | Rectified linear unit (ReLU) function — Performs a threshold operation on each element of the input, where any value less than zero is set to zero, that is, |
"tanh" | Hyperbolic tangent (tanh) function — Applies the |
"sigmoid" | Sigmoid function — Performs the following operation on each input element: |
"none" | Identity function — Returns each input element without performing any transformation, that is, f(x) = x |
If Activations contains only one activation function, then it is the activation function for every fully connected layer of the neural network model, excluding the final fully connected layer, which does not have an activation function (OutputLayerActivation).
If Activations is an array of activation functions, then the ith element is the activation function for the ith layer of the neural network model.
Data Types: char | cell
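For example, a sketch (using random data, and assuming fitrnet accepts one activation value per fully connected layer) of how per-layer activations appear in this property:
rng(0)
X = rand(100,4);
y = X*[1;-2;3;0.5] + 0.1*randn(100,1);
Mdl  = fitrnet(X,y,"LayerSizes",[16 8],"Activations",["tanh" "relu"]);
CMdl = compact(Mdl);
CMdl.Activations              % one activation function per fully connected layer
CMdl.OutputLayerActivation    % 'none'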
OutputLayerActivation
— Activation function for final fully connected layer
'none'
This property is read-only.
Activation function for the final fully connected layer, returned as 'none'.
Data Properties
PredictorNames
— Predictor variable names
cell array of character vectors
This property is read-only.
Predictor variable names, returned as a cell array of character vectors. The order of the elements of PredictorNames
corresponds to the order in which the predictor names appear in the training data.
Data Types: cell
CategoricalPredictors
— Categorical predictor indices
vector of positive integers | []
This property is read-only.
Categorical predictor indices, returned as a vector of positive integers. Assuming that the predictor data contains observations in rows, CategoricalPredictors contains index values corresponding to the columns of the predictor data that contain categorical predictors. If none of the predictors are categorical, then this property is empty ([]).
Data Types: double
ExpandedPredictorNames
— Expanded predictor names
cell array of character vectors
This property is read-only.
Expanded predictor names, returned as a cell array of character vectors. If the model uses encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.
Data Types: cell
Mu
— Predictor means
numeric vector | []
Since R2023b
This property is read-only.
Predictor means, returned as a numeric vector. If you set Standardize to 1 or true when you train the neural network model, then the length of the Mu vector is equal to the number of expanded predictors (see ExpandedPredictorNames). The vector contains 0 values for dummy variables corresponding to expanded categorical predictors.
If you set Standardize to 0 or false when you train the neural network model, then the Mu value is an empty vector ([]).
Data Types: double
ResponseName
— Names of response variables
character vector | cell array of character vectors
This property is read-only.
Names of the response variables, returned as a character vector or cell array of character vectors.
Data Types: char | cell
ResponseTransform
— Response transformation function
'none' | function handle
Response transformation function, specified as 'none' or a function handle. ResponseTransform describes how the software transforms raw response values.
For a MATLAB® function or a function that you define, enter its function handle. For example, you can enter Mdl.ResponseTransform = @function, where function accepts the original response values and returns an output of the same size containing the transformed responses.
Data Types: char | function_handle
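For instance, a hedged sketch (the data here is random and for illustration only): if you train on log-transformed responses, you can set ResponseTransform so that predict returns values on the original scale.
rng(0)
X = rand(100,3);
y = exp(X*[1;2;0.5] + 0.05*randn(100,1));   % strictly positive responses

Mdl  = fitrnet(X,log(y),"Standardize",true);
CMdl = compact(Mdl);
CMdl.ResponseTransform = @exp;      % applied to the raw network output
yhat = predict(CMdl,X(1:5,:));      % predictions on the scale of y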
Sigma
— Predictor standard deviations
numeric vector | []
Since R2023b
This property is read-only.
Predictor standard deviations, returned as a numeric vector. If you set Standardize to 1 or true when you train the neural network model, then the length of the Sigma vector is equal to the number of expanded predictors (see ExpandedPredictorNames). The vector contains 1 values for dummy variables corresponding to expanded categorical predictors.
If you set Standardize to 0 or false when you train the neural network model, then the Sigma value is an empty vector ([]).
Data Types: double
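A short sketch (random data, for illustration; assumes Mu and Sigma are returned as row vectors with one element per expanded predictor) of how the stored statistics relate to the standardization of new data:
rng(0)
X = rand(100,3);
y = X*[2;-1;0.5] + 0.1*randn(100,1);

Mdl  = fitrnet(X,y,"Standardize",true);
CMdl = compact(Mdl);

CMdl.Mu      % per-predictor means used for standardization
CMdl.Sigma   % per-predictor standard deviations

xnew = rand(1,3);
z = (xnew - CMdl.Mu)./CMdl.Sigma;   % standardized values the network sees internally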
Object Functions
Create dlnetwork
dlnetwork (Deep Learning Toolbox) | Deep learning neural network |
Interpret Prediction
lime | Local interpretable model-agnostic explanations (LIME) |
partialDependence | Compute partial dependence |
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
shapley | Shapley values |
Assess Predictive Performance on New Observations
Gather Properties of Compact Regression Neural Network Model
gather | Gather properties of Statistics and Machine Learning Toolbox object from GPU |
Examples
Reduce Size of Regression Neural Network Model
Reduce the size of a full regression neural network model by removing the training data from the model. You can use a compact model to improve memory efficiency.
Load the patients data set. Create a table from the data set. Each row corresponds to one patient, and each column corresponds to a diagnostic variable. Use the Systolic variable as the response variable, and the rest of the variables as predictors.
load patients
tbl = table(Age,Diastolic,Gender,Height,Smoker,Weight,Systolic);
Train a regression neural network model using the data. Specify the Systolic column of tbl as the response variable. Specify to standardize the numeric predictors.
Mdl = fitrnet(tbl,"Systolic","Standardize",true)
Mdl = 
  RegressionNeuralNetwork
             PredictorNames: {'Age'  'Diastolic'  'Gender'  'Height'  'Smoker'  'Weight'}
               ResponseName: 'Systolic'
      CategoricalPredictors: [3 5]
          ResponseTransform: 'none'
            NumObservations: 100
                 LayerSizes: 10
                Activations: 'relu'
      OutputLayerActivation: 'none'
                     Solver: 'LBFGS'
            ConvergenceInfo: [1x1 struct]
            TrainingHistory: [619x7 table]
Mdl is a full RegressionNeuralNetwork model object.
Reduce the size of the model by using compact.
compactMdl = compact(Mdl)
compactMdl = 
  CompactRegressionNeuralNetwork
               LayerSizes: 10
              Activations: 'relu'
    OutputLayerActivation: 'none'
compactMdl is a CompactRegressionNeuralNetwork model object. compactMdl contains fewer properties than the full model Mdl.
Display the amount of memory used by each neural network model.
whos("Mdl","compactMdl")
  Name            Size            Bytes  Class                                                   Attributes
  Mdl             1x1             52253  RegressionNeuralNetwork
  compactMdl      1x1              6768  classreg.learning.regr.CompactRegressionNeuralNetwork
The full model is larger than the compact model.
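The compact model still supports prediction on new data. For example, the following sketch passes the training table back to predict and compares the predictions with the observed responses (the plot labels are illustrative):
predictedY = predict(compactMdl,tbl);
plot(tbl.Systolic,predictedY,".")
xlabel("Observed systolic pressure")
ylabel("Predicted systolic pressure")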
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
The predict object function supports code generation.
For more information, see Introduction to Code Generation.
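A sketch of the typical workflow, using the saveLearnerForCoder and loadLearnerForCoder functions with illustrative data, file, and function names (see Introduction to Code Generation for the authoritative steps):
rng(0)
X = rand(100,4);
y = X*[1;-2;3;0.5] + 0.1*randn(100,1);
CMdl = compact(fitrnet(X,y,"Standardize",true));
saveLearnerForCoder(CMdl,"nnetModel");   % saves the model to nnetModel.mat

% In a separate file, predictEntry.m (illustrative entry-point function):
%   function yfit = predictEntry(X) %#codegen
%   Mdl = loadLearnerForCoder("nnetModel");
%   yfit = predict(Mdl,X);
%   end

% Generate a MEX function to verify the generated code (requires MATLAB Coder):
% codegen predictEntry -args {X}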
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. (since R2024b)
Usage notes and limitations:
All object functions except lime and shapley fully support GPU arrays.
The object functions execute on a GPU if at least one of the following applies:
The model was fitted with GPU arrays.
The predictor data that you pass to the object function is a GPU array.
The response data that you pass to the object function is a GPU array.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
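A sketch of the GPU workflow with random illustrative data (requires Parallel Computing Toolbox, a supported GPU, and R2024b or later):
rng(0)
X = gpuArray(rand(1000,4));
y = gpuArray(rand(1000,1));

Mdl  = fitrnet(X,y,"Standardize",true);
CMdl = compact(Mdl);
yhat = predict(CMdl,X);   % executes on the GPU because X is a gpuArray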
Version History
Introduced in R2021a
R2024b: Specify GPU arrays (requires Parallel Computing Toolbox)
You can fit a CompactRegressionNeuralNetwork object with GPU arrays by using fitrnet to fit a RegressionNeuralNetwork object to gpuArray data, and then passing the object to compact. Most CompactRegressionNeuralNetwork object functions now support GPU array input arguments so that the functions can execute on a GPU. The object functions that do not support GPU array inputs are lime and shapley.
R2024b: Convert to dlnetwork
Convert a CompactRegressionNeuralNetwork object to a dlnetwork (Deep Learning Toolbox) object using the dlnetwork function. Use dlnetwork objects to make further edits and customize the underlying neural network of a CompactRegressionNeuralNetwork object, and retrain it using the trainnet (Deep Learning Toolbox) function or a custom training loop.
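A minimal sketch of the conversion, using random illustrative data (requires Deep Learning Toolbox, R2024b or later):
rng(0)
X = rand(100,4);
y = X*[1;-2;3;0.5] + 0.1*randn(100,1);
compactMdl = compact(fitrnet(X,y,"Standardize",true));
net = dlnetwork(compactMdl);   % convert to a dlnetwork object
summary(net)                   % inspect the converted network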
R2023b: Neural network models include standardization properties
Neural network models include Mu and Sigma properties that contain the means and standard deviations, respectively, used to standardize the predictors before training. The properties are empty when the fitting function does not perform any standardization.