loss

Classification error

Syntax

L = loss(obj,X,Y)
L = loss(obj,X,Y,Name,Value)

Description

L = loss(obj,X,Y) returns the classification loss, which is a scalar representing how well obj classifies the data in X, when Y contains the true classifications.

When computing the loss, loss normalizes the class probabilities in Y to the class probabilities used for training, stored in the Prior property of obj.

L = loss(obj,X,Y,Name,Value) returns the loss with additional options specified by one or more Name,Value pair arguments.

Note

If the predictor data X contains any missing values and LossFun is not set to 'mincost' or 'classiferror', the loss function can return NaN. For more information, see Loss can return NaN for predictor data with missing values.

Input Arguments

obj

Discriminant analysis classifier of class ClassificationDiscriminant or CompactClassificationDiscriminant, typically constructed with fitcdiscr.

X

Matrix where each row represents an observation, and each column represents a predictor. The number of columns in X must equal the number of predictors in obj.

Y

Class labels, with the same data type as exists in obj. The number of elements of Y must equal the number of rows of X.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
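For example, both of the following calls specify the classification-error loss (assuming a trained model Mdl and predictor data X with labels Y):

```matlab
L = loss(Mdl,X,Y,LossFun="classiferror")     % name=value syntax (R2021a and later)
L = loss(Mdl,X,Y,'LossFun','classiferror')   % comma-separated syntax
```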

LossFun

Loss function, specified as a built-in loss function name (a character vector or string scalar listed in the table) or a function handle.

  • The following table lists the available loss functions. Specify one using the corresponding value.

    Value             Description
    'binodeviance'    Binomial deviance
    'classifcost'     Observed misclassification cost
    'classiferror'    Misclassified rate in decimal
    'exponential'     Exponential loss
    'hinge'           Hinge loss
    'logit'           Logistic loss
    'mincost'         Minimal expected misclassification cost (for
                      classification scores that are posterior probabilities)
    'quadratic'       Quadratic loss

    'mincost' is appropriate for classification scores that are posterior probabilities. Discriminant analysis models return posterior probabilities as classification scores by default (see predict).

  • Specify your own function using function handle notation.

    Suppose that n is the number of observations in X and K is the number of distinct classes (numel(Mdl.ClassNames)). Your function must have this signature:

    lossvalue = lossfun(C,S,W,Cost)

    where:

    • The output argument lossvalue is a scalar.

    • You choose the function name (lossfun).

    • C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in Mdl.ClassNames.

      Construct C by setting C(p,q) = 1 if observation p is in class q. Set all other elements of row p to 0.

    • S is an n-by-K numeric matrix of classification scores, similar to the output of predict. The column order corresponds to the class order in Mdl.ClassNames.

    • W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes them to sum to 1.

    • Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification, and 1 for misclassification.

    Specify your function using 'LossFun',@lossfun.
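    As a sketch of the signature above, the function below reimplements the weighted misclassification rate as a custom loss. The name customLoss is illustrative, and the Cost input is accepted but unused here:

```matlab
function lossvalue = customLoss(C,S,W,Cost) %#ok<INUSD>
    % Predicted class: column index of the maximum score in each row of S
    [~,predIdx] = max(S,[],2);
    % True class: column index of the 1 in each row of C
    [~,trueIdx] = max(C,[],2);
    % Weighted fraction of misclassified observations (W sums to 1)
    lossvalue = sum(W .* (predIdx ~= trueIdx));
end
```

    Saved as customLoss.m, the handle can then be passed as loss(Mdl,X,Y,'LossFun',@customLoss); with the default weights, this should agree with the built-in 'classiferror' loss.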

For more details on loss functions, see Classification Loss.

Default: 'mincost'

Weights

Numeric vector of nonnegative values with length N, where N is the number of rows of X. loss normalizes the weights so that the observation weights in each class sum to the prior probability of that class. When you supply Weights, loss computes the weighted classification loss.

Default: ones(N,1)
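For illustration, the call below supplies a hypothetical weight vector that down-weights the first 50 observations of Fisher's iris data; loss renormalizes the weights within each class before computing the loss:

```matlab
load fisheriris
Mdl = fitcdiscr(meas,species);
w = ones(size(meas,1),1);   % start from equal weights
w(1:50) = 0.5;              % illustrative: down-weight the first 50 rows
L = loss(Mdl,meas,species,'Weights',w);
```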

Output Arguments

L

Classification loss, returned as a scalar. The interpretation of L depends on the values of Weights and LossFun.

Examples


Load Fisher's iris data set.

load fisheriris

Train a discriminant analysis model using all observations in the data.

Mdl = fitcdiscr(meas,species);

Estimate the classification error of the model using the training observations.

L = loss(Mdl,meas,species)
L = 0.0200

Alternatively, if Mdl is not compact, then you can estimate the training-sample classification error by passing Mdl to resubLoss.
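For a full (non-compact) model such as the one trained above, the two calls below should return the same value, since resubLoss evaluates the loss on the stored training data:

```matlab
L1 = loss(Mdl,meas,species);   % pass the training data explicitly
L2 = resubLoss(Mdl);           % resubstitution loss on the same data
```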

More About


Extended Capabilities

Version History
