
fitrnet

Train neural network regression model

    Description

    Use fitrnet to train a feedforward, fully connected neural network for regression. The first fully connected layer of the neural network has a connection from the network input (predictor data), and each subsequent layer has a connection from the previous layer. Each fully connected layer multiplies the input by a weight matrix and then adds a bias vector. An activation function follows each fully connected layer, excluding the last. The final fully connected layer produces the network's output, namely predicted response values. For more information, see Neural Network Structure.
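This structure can be sketched as follows. The snippet uses made-up weights and biases purely for illustration; a trained model stores the actual values in its LayerWeights and LayerBiases properties.

```matlab
% Sketch of the forward pass for a network with one hidden fully connected
% layer. The weights and biases here are placeholders, not trained values.
x = [1; 2; 3];                       % One observation with three predictors
W1 = ones(10,3);  b1 = zeros(10,1);  % First fully connected layer (10 outputs)
W2 = ones(1,10);  b2 = 0;            % Final fully connected layer (1 output)
h = max(0, W1*x + b1);               % Multiply by weights, add bias, apply ReLU
yhat = W2*h + b2;                    % Final layer is linear: the predicted response
```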


    Mdl = fitrnet(Tbl,ResponseVarName) returns a neural network regression model Mdl trained using the predictors in the table Tbl and the response values in the ResponseVarName table variable.

    Mdl = fitrnet(Tbl,formula) returns a neural network regression model trained using the sample data in the table Tbl. The input argument formula is an explanatory model of the response and a subset of the predictor variables in Tbl used to fit Mdl.

    Mdl = fitrnet(Tbl,Y) returns a neural network regression model using the predictor variables in the table Tbl and the response values in vector Y.


    Mdl = fitrnet(X,Y) returns a neural network regression model trained using the predictors in the matrix X and the response values in vector Y.


    Mdl = fitrnet(___,Name,Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can adjust the number of outputs and the activation functions for the fully connected layers by specifying the LayerSizes and Activations name-value arguments.

    Examples


    Train a neural network regression model, and assess the performance of the model on a test set.

    Load the carbig data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables Acceleration, Displacement, and so on, as well as the response variable MPG.

    load carbig
    cars = table(Acceleration,Displacement,Horsepower, ...
        Model_Year,Origin,Weight,MPG);

    Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use cvpartition to partition the data.

    rng("default") % For reproducibility of the data partition
    c = cvpartition(length(MPG),"Holdout",0.20);
    trainingIdx = training(c); % Training set indices
    carsTrain = cars(trainingIdx,:);
    testIdx = test(c); % Test set indices
    carsTest = cars(testIdx,:);

    Train a neural network regression model by passing the carsTrain training data to the fitrnet function. For better results, specify to standardize the predictor data.

    Mdl = fitrnet(carsTrain,"MPG","Standardize",true)
    Mdl = 
      RegressionNeuralNetwork
               PredictorNames: {'Acceleration'  'Displacement'  'Horsepower'  'Model_Year'  'Origin'  'Weight'}
                 ResponseName: 'MPG'
        CategoricalPredictors: 5
            ResponseTransform: 'none'
              NumObservations: 314
                   LayerSizes: 10
                  Activations: 'relu'
        OutputLayerActivation: 'linear'
                       Solver: 'LBFGS'
              ConvergenceInfo: [1×1 struct]
              TrainingHistory: [1000×7 table]
    
    
    
    

    Mdl is a trained RegressionNeuralNetwork model. You can use dot notation to access the properties of Mdl. For example, you can specify Mdl.TrainingHistory to get more information about the training history of the neural network model.

    Evaluate the performance of the regression model on the test set by computing the test mean squared error (MSE). Smaller MSE values indicate better performance.

    testMSE = loss(Mdl,carsTest,"MPG")
    testMSE = 16.6154
    

    Specify the structure of the neural network regression model, including the size of the fully connected layers.

    Load the carbig data set, which contains measurements of cars made in the 1970s and early 1980s. Create a matrix X containing the predictor variables Acceleration, Cylinders, and so on. Store the response variable MPG in the variable Y.

    load carbig
    X = [Acceleration Cylinders Displacement Weight];
    Y = MPG;

    Partition the data into training data (XTrain and YTrain) and test data (XTest and YTest). Reserve approximately 20% of the observations for testing, and use the rest of the observations for training.

    rng("default") % For reproducibility of the partition
    c = cvpartition(length(Y),"Holdout",0.20);
    trainingIdx = training(c); % Indices for the training set
    XTrain = X(trainingIdx,:);
    YTrain = Y(trainingIdx);
    testIdx = test(c); % Indices for the test set
    XTest = X(testIdx,:);
    YTest = Y(testIdx);

    Train a neural network regression model. Specify to standardize the predictor data, and to have 30 outputs in the first fully connected layer and 10 outputs in the second fully connected layer. By default, both layers use a rectified linear unit (ReLU) activation function. You can change the activation functions for the fully connected layers by using the Activations name-value argument.

    Mdl = fitrnet(XTrain,YTrain,"Standardize",true, ...
        "LayerSizes",[30 10])
    Mdl = 
      RegressionNeuralNetwork
                 ResponseName: 'Y'
        CategoricalPredictors: []
            ResponseTransform: 'none'
              NumObservations: 318
                   LayerSizes: [30 10]
                  Activations: 'relu'
        OutputLayerActivation: 'linear'
                       Solver: 'LBFGS'
              ConvergenceInfo: [1×1 struct]
              TrainingHistory: [1000×7 table]
    
    
    
    

    Access the weights and biases for the fully connected layers of the trained model by using the LayerWeights and LayerBiases properties of Mdl. The first two elements of each property correspond to the values for the first two fully connected layers, and the third element corresponds to the values for the final fully connected layer for regression. For example, display the weights and biases for the first fully connected layer.

    Mdl.LayerWeights{1}
    ans = 30×4
    
       -1.0617    0.1287    0.0797    0.4648
       -0.6497   -1.4565   -2.6026    2.6962
       -0.6420    0.2744   -0.0234   -0.0252
       -1.9727   -0.4665   -0.5833    0.9371
       -0.4373    0.1607    0.3930    0.7859
        0.5091   -0.0032   -0.6503   -1.6694
        0.0123   -0.2624   -2.2928   -1.0965
       -0.1386    1.2747    0.4085    0.5395
       -0.1755    1.5641   -3.1896   -1.1336
        0.4401    0.4942    1.8957   -1.1617
          ⋮
    
    
    Mdl.LayerBiases{1}
    ans = 30×1
    
       -1.3086
       -1.6205
       -0.7815
        1.5382
       -0.5256
        1.2394
       -2.3078
       -1.0709
       -1.8898
        1.9443
          ⋮
    
    

    The final fully connected layer has one output. The number of layer outputs corresponds to the first dimension of the layer weights and layer biases.

    size(Mdl.LayerWeights{end})
    ans = 1×2
    
         1    10
    
    
    size(Mdl.LayerBiases{end})
    ans = 1×2
    
         1     1
    
    

    To estimate the performance of the trained model, compute the test set mean squared error (MSE) for Mdl. Smaller MSE values indicate better performance.

    testMSE = loss(Mdl,XTest,YTest)
    testMSE = 17.2022
    

    Compare the predicted test set response values to the true response values. Plot the predicted miles per gallon (MPG) along the vertical axis and the true MPG along the horizontal axis. Points on the reference line indicate correct predictions. A good model produces predictions that are scattered near the line.

    testPredictions = predict(Mdl,XTest);
    plot(YTest,testPredictions,".")
    hold on
    plot(YTest,YTest)
    hold off
    xlabel("True MPG")
    ylabel("Predicted MPG")

    At each iteration of the training process, compute the validation loss of the neural network. Stop the training process early if the validation loss reaches a reasonable minimum.

    Load the patients data set. Create a table from the data set. Each row corresponds to one patient, and each column corresponds to a diagnostic variable. Use the Systolic variable as the response variable, and the rest of the variables as predictors.

    load patients
    tbl = table(Age,Diastolic,Gender,Height,Smoker,Weight,Systolic);

    Separate the data into a training set tblTrain and a validation set tblValidation. The software reserves approximately 30% of the observations for the validation data set and uses the rest of the observations for the training data set.

    rng("default") % For reproducibility of the partition
    c = cvpartition(size(tbl,1),"Holdout",0.30);
    trainingIndices = training(c);
    validationIndices = test(c);
    tblTrain = tbl(trainingIndices,:);
    tblValidation = tbl(validationIndices,:);

    Train a neural network regression model by using the training set. Specify the Systolic column of tblTrain as the response variable. Evaluate the model at each iteration by using the validation set. Specify to display the training information at each iteration by using the Verbose name-value argument. By default, the training process ends early if the validation loss is greater than or equal to the minimum validation loss computed so far, six times in a row. To change the number of times the validation loss is allowed to be greater than or equal to the minimum, specify the ValidationPatience name-value argument.

    Mdl = fitrnet(tblTrain,"Systolic", ...
        "ValidationData",tblValidation, ...
        "Verbose",1);
    |==========================================================================================|
    | Iteration  | Train Loss | Gradient   | Step       | Iteration  | Validation | Validation |
    |            |            |            |            | Time (sec) | Loss       | Checks     |
    |==========================================================================================|
    |           1|  516.021993| 3220.880047|    0.644473|    0.005193|  568.289202|           0|
    |           2|  313.056754|  229.931405|    0.067026|    0.002658|  304.023695|           0|
    |           3|  308.461807|  277.166516|    0.011122|    0.001363|  296.935608|           0|
    |           4|  262.492770|  844.627934|    0.143022|    0.000531|  240.559640|           0|
    |           5|  169.558740| 1131.714363|    0.336463|    0.000652|  152.531663|           0|
    |           6|   89.134368|  362.084104|    0.382677|    0.001059|   83.147478|           0|
    |           7|   83.309729|  994.830303|    0.199923|    0.000515|   76.634122|           0|
    |           8|   70.731524|  327.637362|    0.041366|    0.000361|   66.421750|           0|
    |           9|   66.650091|  124.369963|    0.125232|    0.000380|   65.914063|           0|
    |          10|   66.404753|   36.699328|    0.016768|    0.000363|   65.357335|           0|
    |==========================================================================================|
    | Iteration  | Train Loss | Gradient   | Step       | Iteration  | Validation | Validation |
    |            |            |            |            | Time (sec) | Loss       | Checks     |
    |==========================================================================================|
    |          11|   66.357143|   46.712988|    0.009405|    0.001130|   65.306106|           0|
    |          12|   66.268225|   54.079264|    0.007953|    0.001023|   65.234391|           0|
    |          13|   65.788550|   99.453225|    0.030942|    0.000436|   64.869708|           0|
    |          14|   64.821095|  186.344649|    0.048078|    0.000295|   64.191533|           0|
    |          15|   62.353896|  319.273873|    0.107160|    0.000290|   62.618374|           0|
    |          16|   57.836593|  447.826470|    0.184985|    0.000287|   60.087065|           0|
    |          17|   51.188884|  524.631067|    0.253062|    0.000287|   56.646294|           0|
    |          18|   41.755601|  189.072516|    0.318515|    0.000286|   49.046823|           0|
    |          19|   37.539854|   78.602559|    0.382284|    0.000290|   44.633562|           0|
    |          20|   36.845322|  151.837884|    0.211286|    0.000286|   47.291367|           1|
    |==========================================================================================|
    | Iteration  | Train Loss | Gradient   | Step       | Iteration  | Validation | Validation |
    |            |            |            |            | Time (sec) | Loss       | Checks     |
    |==========================================================================================|
    |          21|   36.218289|   62.826818|    0.142748|    0.000362|   46.139104|           2|
    |          22|   35.776921|   53.606315|    0.215188|    0.000321|   46.170460|           3|
    |          23|   35.729085|   24.400342|    0.060096|    0.001023|   45.318023|           4|
    |          24|   35.622031|    9.602277|    0.121153|    0.000289|   45.791861|           5|
    |          25|   35.573317|   10.735070|    0.126854|    0.000291|   46.062826|           6|
    |==========================================================================================|
    

    Create a plot that compares the training mean squared error (MSE) and the validation MSE at each iteration. By default, fitrnet stores the loss information inside the TrainingHistory property of the object Mdl. You can access this information by using dot notation.

    iteration = Mdl.TrainingHistory.Iteration;
    trainLosses = Mdl.TrainingHistory.TrainingLoss;
    valLosses = Mdl.TrainingHistory.ValidationLoss;
    plot(iteration,trainLosses,iteration,valLosses)
    legend(["Training","Validation"])
    xlabel("Iteration")
    ylabel("Mean Squared Error")

    Check the iteration that corresponds to the minimum validation MSE. The final returned model Mdl is the model trained at this iteration.

    [~,minIdx] = min(valLosses);
    iteration(minIdx)
    ans = 19
    

    Assess the cross-validation loss of neural network models with different regularization strengths, and choose the regularization strength corresponding to the best performing model.

    Load the carbig data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables Acceleration, Displacement, and so on, as well as the response variable MPG. Remove observations from the table with missing values.

    load carbig
    tbl = table(Acceleration,Displacement,Horsepower, ...
        Model_Year,Origin,Weight,MPG);
    cars = rmmissing(tbl);

    Create a cvpartition object for 5-fold cross-validation. cvp partitions the data into five folds, where each fold has roughly the same number of observations. Set the random seed to the default value for reproducibility of the partition.

    rng("default")
    n = size(cars,1);
    cvp = cvpartition(n,"KFold",5);

    Compute the cross-validation mean squared error (MSE) for neural network regression models with different regularization strengths. Try regularization strengths on the order of 1/n, where n is the number of observations. Specify to standardize the data before training the neural network models.

    1/n
    ans = 0.0026
    
    lambda = (0:0.5:5)*1e-3;
    cvloss = zeros(length(lambda),1);
    for i = 1:length(lambda)
        cvMdl = fitrnet(cars,"MPG","Lambda",lambda(i), ...
            "CVPartition",cvp,"Standardize",true);
        cvloss(i) = kfoldLoss(cvMdl);
    end

    Plot the results. Find the regularization strength corresponding to the lowest cross-validation MSE.

    plot(lambda,cvloss)
    xlabel("Regularization Strength")
    ylabel("Cross-Validation Loss")

    [~,idx] = min(cvloss);
    bestLambda = lambda(idx)
    bestLambda = 0
    

    Train a neural network regression model using the bestLambda regularization strength.

    Mdl = fitrnet(cars,"MPG","Lambda",bestLambda, ...
        "Standardize",true)
    Mdl = 
      RegressionNeuralNetwork
               PredictorNames: {'Acceleration'  'Displacement'  'Horsepower'  'Model_Year'  'Origin'  'Weight'}
                 ResponseName: 'MPG'
        CategoricalPredictors: 5
            ResponseTransform: 'none'
              NumObservations: 392
                   LayerSizes: 10
                  Activations: 'relu'
        OutputLayerActivation: 'linear'
                       Solver: 'LBFGS'
              ConvergenceInfo: [1×1 struct]
              TrainingHistory: [1000×7 table]
    
    
    
    

    Input Arguments


    Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

    • If Tbl contains the response variable, and you want to use all remaining variables in Tbl as predictors, then specify the response variable by using ResponseVarName.

    • If Tbl contains the response variable, and you want to use only a subset of the remaining variables in Tbl as predictors, then specify a formula by using formula.

    • If Tbl does not contain the response variable, then specify a response variable by using Y. The length of the response variable and the number of rows in Tbl must be equal.

    Data Types: table

    Response variable name, specified as the name of a variable in Tbl. The response variable must be a numeric vector.

    You must specify ResponseVarName as a character vector or string scalar. For example, if Tbl stores the response variable Y as Tbl.Y, then specify it as 'Y'. Otherwise, the software treats all columns of Tbl, including Y, as predictors when training the model.

    Data Types: char | string

    Explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form 'Y~x1+x2+x3'. In this form, Y represents the response variable, and x1, x2, and x3 represent the predictor variables.

    To specify a subset of variables in Tbl as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in Tbl that do not appear in formula.

    The variable names in the formula must be both variable names in Tbl (Tbl.Properties.VariableNames) and valid MATLAB® identifiers. You can verify the variable names in Tbl by using the isvarname function. If the variable names are not valid, then you can convert them by using the matlab.lang.makeValidName function.

    Data Types: char | string
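As a minimal sketch of the formula syntax, the following call (using variables from the carbig data set, loaded as in the earlier examples) restricts training to two of the table's predictors:

```matlab
% Train on a subset of predictors by passing a formula instead of a
% response variable name. Only Horsepower and Weight are used as predictors.
load carbig
cars = table(Acceleration,Horsepower,Weight,MPG);
Mdl = fitrnet(cars,"MPG ~ Horsepower + Weight");
```

You can confirm that a table variable name is a valid MATLAB identifier with isvarname("Horsepower") before using it in a formula.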

    Response data, specified as a numeric vector. The length of Y must be equal to the number of observations in X or Tbl.

    Data Types: single | double

    Predictor data used to train the model, specified as a numeric matrix.

    By default, the software treats each row of X as one observation, and each column as one predictor.

    The length of Y and the number of observations in X must be equal.

    To specify the names of the predictors in the order of their appearance in X, use the PredictorNames name-value argument.

    Note

    If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time.

    Data Types: single | double
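A minimal sketch of the column-oriented layout described in the note, again using the carbig data set:

```matlab
% Store predictors so that each column is one observation, then tell
% fitrnet about the orientation with 'ObservationsIn','columns'.
load carbig
X = [Acceleration Displacement Horsepower Weight]';  % p-by-m matrix
Y = MPG;
Mdl = fitrnet(X,Y,"ObservationsIn","columns","Standardize",true);
```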

    Note

    The software treats NaN, empty character vector (''), empty string (""), <missing>, and <undefined> elements as missing values, and removes observations with any of these characteristics:

    • Missing value in the response (for example, Y or ValidationData{2})

    • At least one missing value in a predictor observation (for example, row in X or ValidationData{1})

    • NaN value or 0 weight (for example, value in Weights or ValidationData{3})

    Name-Value Pair Arguments

    Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

    Example: fitrnet(X,Y,'LayerSizes',[10 10],'Activations',["relu","tanh"]) specifies to create a neural network with two fully connected layers, each with 10 outputs. The first layer uses a rectified linear unit (ReLU) activation function, and the second uses a hyperbolic tangent activation function.
    Neural Network Options


    Sizes of the fully connected layers in the neural network model, specified as a positive integer vector. The ith element of LayerSizes is the number of outputs in the ith fully connected layer of the neural network model.

    LayerSizes does not include the size of the final fully connected layer. For more information, see Neural Network Structure.

    Example: 'LayerSizes',[100 25 10]

    Activation functions for the fully connected layers of the neural network model, specified as a character vector, string scalar, string array, or cell array of character vectors with values from this table.

    • 'relu' — Rectified linear unit (ReLU) function. Performs a threshold operation on each element of the input, where any value less than zero is set to zero, that is,

          f(x) = { x,  x ≥ 0
                 { 0,  x < 0

    • 'tanh' — Hyperbolic tangent (tanh) function. Applies the tanh function to each input element.

    • 'sigmoid' — Sigmoid function. Performs the following operation on each input element:

          f(x) = 1 / (1 + e^(−x))

    • 'none' — Identity function. Returns each input element without performing any transformation, that is, f(x) = x.

    • If you specify one activation function only, then Activations is the activation function for every fully connected layer of the neural network model, excluding the final fully connected layer (see Neural Network Structure).

    • If you specify an array of activation functions, then the ith element of Activations is the activation function for the ith layer of the neural network model.

    Example: 'Activations','sigmoid'

    Function to initialize the fully connected layer weights, specified as 'glorot' or 'he'.

    • 'glorot' — Initialize the weights with the Glorot initializer [1] (also known as the Xavier initializer). For each layer, the Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(I+O), where I is the input size and O is the output size for the layer.

    • 'he' — Initialize the weights with the He initializer [2]. For each layer, the He initializer samples from a normal distribution with zero mean and variance 2/I, where I is the input size for the layer.

    Example: 'LayerWeightsFunction','he'

    Type of initial fully connected layer biases, specified as 'zeros' or 'ones'.

    • If you specify the value 'zeros', then each fully connected layer has an initial bias of 0.

    • If you specify the value 'ones', then each fully connected layer has an initial bias of 1.

    Example: 'LayerBiasesInitializer','ones'

    Data Types: char | string

    Predictor data observation dimension, specified as 'rows' or 'columns'.

    Note

    If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time. You cannot specify 'ObservationsIn','columns' for predictor data in a table.

    Example: 'ObservationsIn','columns'

    Data Types: char | string

    Regularization term strength, specified as a nonnegative scalar. The software composes the objective function for minimization from the mean squared error (MSE) loss function and the ridge (L2) penalty term.

    Example: 'Lambda',1e-4

    Data Types: single | double

    Flag to standardize the predictor data, specified as a numeric or logical 0 (false) or 1 (true). If you set Standardize to true, then the software centers and scales each numeric predictor variable by the corresponding column mean and standard deviation. The software does not standardize the categorical predictors.

    Example: 'Standardize',true

    Data Types: single | double | logical

    Convergence Control Options


    Verbosity level, specified as 0 or 1. The 'Verbose' name-value argument controls the amount of diagnostic information that fitrnet displays at the command line.

    • 0 — fitrnet does not display diagnostic information.

    • 1 — fitrnet periodically displays diagnostic information.

    By default, StoreHistory is set to true and fitrnet stores the diagnostic information inside of Mdl. Use Mdl.TrainingHistory to access the diagnostic information.

    Example: 'Verbose',1

    Data Types: single | double

    Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer scalar. A value of 1 indicates to print diagnostic information at every iteration.

    Note

    To use this name-value argument, set Verbose to 1.

    Example: 'VerboseFrequency',5

    Data Types: single | double

    Flag to store the training history, specified as a numeric or logical 0 (false) or 1 (true). If StoreHistory is set to true, then the software stores diagnostic information inside of Mdl, which you can access by using Mdl.TrainingHistory.

    Example: 'StoreHistory',false

    Data Types: single | double | logical

    Maximum number of training iterations, specified as a positive integer scalar.

    The software returns a trained model regardless of whether the training routine successfully converges. Mdl.ConvergenceInfo contains convergence information.

    Example: 'IterationLimit',1e8

    Data Types: single | double

    Relative gradient tolerance, specified as a nonnegative scalar.

    Let ℓ_t be the loss function at training iteration t, ∇ℓ_t be the gradient of the loss function with respect to the weights and biases at iteration t, and ∇ℓ_0 be the gradient of the loss function at an initial point. If max|∇ℓ_t| ≤ a·GradientTolerance, where a = max(1, min|ℓ_t|, max|∇ℓ_0|), then the training process terminates.

    Example: 'GradientTolerance',1e-5

    Data Types: single | double

    Loss tolerance, specified as a nonnegative scalar.

    If the function loss at some iteration is smaller than LossTolerance, then the training process terminates.

    Example: 'LossTolerance',1e-8

    Data Types: single | double

    Step size tolerance, specified as a nonnegative scalar.

    If the step size at some iteration is smaller than StepTolerance, then the training process terminates.

    Example: 'StepTolerance',1e-4

    Data Types: single | double
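The convergence controls above can be combined in a single call. This is a minimal sketch using the carbig data set; the tolerance and limit values are illustrative only, not recommendations.

```matlab
% Tighten the stopping criteria for training. All four name-value
% arguments below are convergence controls described in this section.
load carbig
X = [Acceleration Displacement Horsepower Weight];
Mdl = fitrnet(X,MPG,"Standardize",true, ...
    "IterationLimit",500, ...      % cap on the number of training iterations
    "GradientTolerance",1e-8, ...  % stop when the scaled gradient is small
    "LossTolerance",1e-8, ...      % stop when the loss falls below this value
    "StepTolerance",1e-8);         % stop when the step size becomes tiny
```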

    Validation data for training convergence detection, specified as a cell array or table.

    During the training process, the software periodically estimates the validation loss by using ValidationData. If the validation loss increases more than ValidationPatience times in a row, then the software terminates the training.

    You can specify ValidationData as a table if you use a table Tbl of predictor data that contains the response variable. In this case, ValidationData must contain the same predictors and response contained in Tbl. The software does not apply weights to observations, even if Tbl contains a vector of weights. To specify weights, you must specify ValidationData as a cell array.

    If you specify ValidationData as a cell array, then it must have the following format:

    • ValidationData{1} must have the same data type and orientation as the predictor data. That is, if you use a predictor matrix X, then ValidationData{1} must be an m-by-p or p-by-m matrix of predictor data that has the same orientation as X. The predictor variables in the training data X and ValidationData{1} must correspond. Similarly, if you use a predictor table Tbl of predictor data, then ValidationData{1} must be a table containing the same predictor variables contained in Tbl. The number of observations in ValidationData{1} and the predictor data can vary.

    • ValidationData{2} must match the data type and format of the response variable, either Y or ResponseVarName. If ValidationData{2} is an array of responses, then it must have the same number of elements as the number of observations in ValidationData{1}. If ValidationData{1} is a table, then ValidationData{2} can be the name of the response variable in the table. If you want to use the same ResponseVarName or formula, you can specify ValidationData{2} as [].

    • Optionally, you can specify ValidationData{3} as an m-dimensional numeric vector of observation weights or the name of a variable in the table ValidationData{1} that contains observation weights. The software normalizes the weights with the validation data so that they sum to 1.

    If you specify ValidationData and want to display the validation loss at the command line, set Verbose to 1.
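A minimal sketch of the cell-array form, using a holdout partition of the carbig data set:

```matlab
% Hold out 30% of the observations and pass them as validation data in
% the cell-array form {predictors, responses}. A third cell could hold
% observation weights.
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;
rng("default")                            % For reproducibility
c = cvpartition(length(Y),"Holdout",0.30);
XTrain = X(training(c),:);  YTrain = Y(training(c));
XVal   = X(test(c),:);      YVal   = Y(test(c));
Mdl = fitrnet(XTrain,YTrain,"Standardize",true, ...
    "ValidationData",{XVal,YVal},"ValidationPatience",10);
```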

    Number of iterations between validation evaluations, specified as a positive integer scalar. A value of 1 indicates to evaluate validation metrics at every iteration.

    Note

    To use this name-value argument, you must specify ValidationData.

    Example: 'ValidationFrequency',5

    Data Types: single | double

    Stopping condition for validation evaluations, specified as a nonnegative integer scalar. Training stops if the validation loss is greater than or equal to the minimum validation loss computed so far, ValidationPatience times in a row. You can check the Mdl.TrainingHistory table to see the running total of times that the validation loss is greater than or equal to the minimum (Validation Checks).

    Example: 'ValidationPatience',10

    Data Types: single | double

    Other Regression Options


    Categorical predictors list, specified as one of the values in this table. The descriptions assume that the predictor data has observations in rows and predictors in columns.

    • Vector of positive integers — Each entry in the vector is an index value corresponding to the column of the predictor data that contains a categorical variable. The index values are between 1 and p, where p is the number of predictors used to train the model.

      If fitrnet uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. The 'CategoricalPredictors' values do not count the response variable, the observation weight variable, and any other variables that the function does not use.

    • Logical vector — A true entry means that the corresponding column of predictor data is a categorical variable. The length of the vector is p.

    • Character matrix — Each row of the matrix is the name of a predictor variable. The names must match the entries in PredictorNames. Pad the names with extra blanks so each row of the character matrix has the same length.

    • String array or cell array of character vectors — Each element in the array is the name of a predictor variable. The names must match the entries in PredictorNames.

    • 'all' — All predictors are categorical.

    By default, if the predictor data is in a table (Tbl), fitrnet assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (X), fitrnet assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the 'CategoricalPredictors' name-value argument.

    For the identified categorical predictors, fitrnet creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. For an unordered categorical variable, fitrnet creates one dummy variable for each level of the categorical variable. For an ordered categorical variable, fitrnet creates one less dummy variable than the number of categories. For details, see Automatic Creation of Dummy Variables.

    Example: 'CategoricalPredictors','all'

    Data Types: single | double | logical | char | string | cell
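As a sketch, this call marks the fifth column of a numeric predictor matrix (here, the coded Cylinders variable from carbig) as categorical by index:

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight Cylinders];
Y = MPG;

% Treat column 5 (Cylinders) as a categorical predictor
Mdl = fitrnet(X,Y,'CategoricalPredictors',5);
```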

    Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of 'PredictorNames' depends on the way you supply the training data.

    • If you supply X and Y, then you can use 'PredictorNames' to assign names to the predictor variables in X.

      • The order of the names in PredictorNames must correspond to the predictor order in X. Assuming that X has the default orientation, with observations in rows and predictors in columns, PredictorNames{1} is the name of X(:,1), PredictorNames{2} is the name of X(:,2), and so on. Also, size(X,2) and numel(PredictorNames) must be equal.

      • By default, PredictorNames is {'x1','x2',...}.

    • If you supply Tbl, then you can use 'PredictorNames' to choose which predictor variables to use in training. That is, fitrnet uses only the predictor variables in PredictorNames and the response variable during training.

      • PredictorNames must be a subset of Tbl.Properties.VariableNames and cannot include the name of the response variable.

      • By default, PredictorNames contains the names of all predictor variables.

      • A good practice is to specify the predictors for training using either 'PredictorNames' or formula, but not both.

    Example: 'PredictorNames',{'SepalLength','SepalWidth','PetalLength','PetalWidth'}

    Data Types: string | cell
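For matrix input, a minimal sketch that assigns names to the columns of X (the names here are illustrative, taken from the carbig variables):

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;

% Name the predictor columns and the response
Mdl = fitrnet(X,Y, ...
    'PredictorNames',{'Acceleration','Displacement','Horsepower','Weight'}, ...
    'ResponseName','MPG');
Mdl.PredictorNames   % returns the assigned names
```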

    Response variable name, specified as a character vector or string scalar.

    • If you supply Y, then you can use 'ResponseName' to specify a name for the response variable.

    • If you supply ResponseVarName or formula, then you cannot use 'ResponseName'.

    Example: 'ResponseName','response'

    Data Types: char | string

    Observation weights, specified as a nonnegative numeric vector or the name of a variable in Tbl. The software weights each observation in X or Tbl with the corresponding value in Weights. The length of Weights must equal the number of observations in X or Tbl.

    If you specify the input data as a table Tbl, then Weights can be the name of a variable in Tbl that contains a numeric vector. In this case, you must specify Weights as a character vector or string scalar. For example, if weights vector W is stored as Tbl.W, then specify it as 'W'. Otherwise, the software treats all columns of Tbl, including W, as predictors when training the model.

    By default, Weights is ones(n,1), where n is the number of observations in X or Tbl.

    fitrnet normalizes the weights to sum to 1.

    Data Types: single | double | char | string
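As a sketch, this example stores an (illustrative, uniform) weight variable W in a table and passes its name to fitrnet, so that W is not treated as a predictor:

```matlab
load carbig
cars = table(Acceleration,Displacement,Horsepower,Weight,MPG);
cars.W = ones(height(cars),1);   % illustrative observation weights

% Specify the weight variable by name
Mdl = fitrnet(cars,'MPG','Weights','W');
```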

    Cross-Validation Options

    collapse all

    Flag to train a cross-validated model, specified as 'on' or 'off'.

    If you specify 'on', then the software trains a cross-validated model with 10 folds.

    You can override this cross-validation setting using the CVPartition, Holdout, KFold, or Leaveout name-value argument. You can use only one cross-validation name-value argument at a time to create a cross-validated model.

    Alternatively, cross-validate later by passing Mdl to crossval.

    Example: 'Crossval','on'

    Data Types: char | string
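A minimal sketch of default 10-fold cross-validation, followed by the cross-validated loss:

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;

% Train a cross-validated model with 10 folds
CVMdl = fitrnet(X,Y,'CrossVal','on');
mse = kfoldLoss(CVMdl);   % cross-validated mean squared error
```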

    Cross-validation partition, specified as a cvpartition partition object created by cvpartition. The partition object specifies the type of cross-validation and the indexing for the training and validation sets.

    To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

    Example: Suppose you create a random partition for 5-fold cross-validation on 500 observations by using cvp = cvpartition(500,'KFold',5). Then, you can specify the cross-validated model by using 'CVPartition',cvp.
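The scenario in the example above can be sketched as follows (using the carbig data instead of exactly 500 observations):

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;

rng('default')   % for reproducibility of the random partition
cvp = cvpartition(numel(Y),'KFold',5);

% Use the partition object to define the cross-validation folds
CVMdl = fitrnet(X,Y,'CVPartition',cvp);
mse = kfoldLoss(CVMdl);
```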

    Fraction of the data used for holdout validation, specified as a scalar value in the range (0,1). If you specify 'Holdout',p, then the software completes these steps:

    1. Randomly select and reserve p*100% of the data as validation data, and train the model using the rest of the data.

    2. Store the compact, trained model in the Trained property of the cross-validated model.

    To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

    Example: 'Holdout',0.1

    Data Types: double | single
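The two steps above can be sketched as follows; the compact model trained on the nonholdout data is stored in the Trained property:

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;

% Reserve 10% of the data for holdout validation
CVMdl = fitrnet(X,Y,'Holdout',0.1);
CompactMdl = CVMdl.Trained{1};   % model trained on the other 90%
```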

    Number of folds to use in a cross-validated model, specified as a positive integer value greater than 1. If you specify 'KFold',k, then the software completes these steps:

    1. Randomly partition the data into k sets.

    2. For each set, reserve the set as validation data, and train the model using the other k – 1 sets.

    3. Store the k compact, trained models in a k-by-1 cell vector in the Trained property of the cross-validated model.

    To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

    Example: 'KFold',5

    Data Types: single | double

    Leave-one-out cross-validation flag, specified as 'on' or 'off'. If you specify 'Leaveout','on', then for each of the n observations (where n is the number of observations, excluding missing observations, specified in the NumObservations property of the model), the software completes these steps:

    1. Reserve the one observation as validation data, and train the model using the other n – 1 observations.

    2. Store the n compact, trained models in an n-by-1 cell vector in the Trained property of the cross-validated model.

    To create a cross-validated model, you can specify only one of these four name-value arguments: CVPartition, Holdout, KFold, or Leaveout.

    Example: 'Leaveout','on'

    Output Arguments

    collapse all

    Trained neural network regression model, returned as a RegressionNeuralNetwork or RegressionPartitionedModel object.

    If you set any of the name-value arguments CrossVal, CVPartition, Holdout, KFold, or Leaveout, then Mdl is a RegressionPartitionedModel object. Otherwise, Mdl is a RegressionNeuralNetwork model.

    To reference properties of Mdl, use dot notation.
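For example, after training a model on the carbig data (a sketch), you can inspect its properties with dot notation:

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Mdl = fitrnet(X,MPG);

Mdl.LayerSizes        % sizes of the fully connected layers
Mdl.TrainingHistory   % per-iteration record of the training run
```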

    More About

    collapse all

    Neural Network Structure

    The default neural network regression model has the following layer structure.

    [Diagram: default neural network regression model structure, with one customizable fully connected layer followed by a ReLU activation]

    The structure consists of the following layers:

    Input — This layer corresponds to the predictor data in Tbl or X.

    First fully connected layer — This layer has 10 outputs by default.

    • You can widen the layer or add more fully connected layers to the network by specifying the LayerSizes name-value argument.

    • You can find the weights and biases for this layer in the Mdl.LayerWeights{1} and Mdl.LayerBiases{1} properties of Mdl, respectively.

    ReLU activation function — fitrnet applies this activation function to the first fully connected layer.

    • You can change the activation function by specifying the Activations name-value argument.

    Final fully connected layer — This layer has one output.

    • You can find the weights and biases for this layer in the Mdl.LayerWeights{end} and Mdl.LayerBiases{end} properties of Mdl, respectively.

    Output — This layer corresponds to the predicted response values.

    For an example that shows how a regression neural network model with this layer structure returns predictions, see Predict Using Layer Structure of Regression Neural Network Model.
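As a sketch, this call customizes the structure described above, replacing the single default layer with two fully connected layers of 10 outputs each (before the final one-output layer) and tanh activations:

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;

% Two hidden fully connected layers, tanh instead of the default ReLU
Mdl = fitrnet(X,Y,'LayerSizes',[10 10],'Activations','tanh');
```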

    Tips

    • Always try to standardize the numeric predictors (see Standardize). Standardization makes predictors insensitive to the scales on which they are measured.
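A minimal sketch of this tip, which centers and scales each numeric predictor before training:

```matlab
load carbig
X = [Acceleration Displacement Horsepower Weight];
Y = MPG;

% Standardize the numeric predictors so scale does not dominate training
Mdl = fitrnet(X,Y,'Standardize',true);
```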

    Algorithms

    collapse all

    Training Solver

    fitrnet uses a limited-memory Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm (LBFGS) [3] as its loss function minimization technique, where the software minimizes the mean squared error (MSE).

    References

    [1] Glorot, Xavier, and Yoshua Bengio. “Understanding the difficulty of training deep feedforward neural networks.” In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. 2010.

    [2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034. 2015.

    [3] Nocedal, J. and S. J. Wright. Numerical Optimization, 2nd ed., New York: Springer, 2006.

    Introduced in R2021a