Classification loss for cross-validated classification model
L = kfoldLoss(CVMdl) returns the classification loss obtained by the cross-validated classification model CVMdl. For every fold, kfoldLoss computes the classification loss for validation-fold observations using a classifier trained on training-fold observations. CVMdl.X and CVMdl.Y contain both sets of observations.

L = kfoldLoss(CVMdl,Name,Value) returns the classification loss with additional options specified by one or more name-value arguments. For example, you can specify a custom loss function.
Load the ionosphere data set.
load ionosphere
Grow a classification tree.
tree = fitctree(X,Y);
Cross-validate the classification tree using 10-fold cross-validation.
cvtree = crossval(tree);
Estimate the cross-validated classification error.
L = kfoldLoss(cvtree)
L = 0.1083
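The per-fold losses and alternative loss functions are also available from the same cross-validated model. The following lines are a minimal sketch that reuses cvtree from above; the variable names perFoldL and hingeL are arbitrary.

perFoldL = kfoldLoss(cvtree,'Mode','individual'); % 10-by-1 vector, one loss per fold
hingeL = kfoldLoss(cvtree,'LossFun','hinge');     % average hinge loss instead of the default loss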
Load the ionosphere data set.
load ionosphere
Train a classification ensemble of 100 decision trees using AdaBoostM1. Specify tree stumps as the weak learners.
t = templateTree('MaxNumSplits',1);
ens = fitcensemble(X,Y,'Method','AdaBoostM1','Learners',t);
Cross-validate the ensemble using 10-fold cross-validation.
cvens = crossval(ens);
Estimate the cross-validated classification error.
L = kfoldLoss(cvens)
L = 0.0655
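Because cvens is a ClassificationPartitionedEnsemble object, kfoldLoss can also return cumulative errors, showing how the loss changes as weak learners are added. The following sketch reuses cvens from above; Lcum is an arbitrary variable name.

Lcum = kfoldLoss(cvens,'Mode','cumulative'); % element j uses weak learners 1:j in every fold
figure
plot(Lcum)
xlabel('Number of trees')
ylabel('10-fold cross-validated misclassification rate')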
Train a cross-validated generalized additive model (GAM) with 10 folds. Then, use kfoldLoss to compute cumulative cross-validation classification errors (misclassification rate in decimal). Use the errors to determine the optimal number of trees per predictor (linear term for predictor) and the optimal number of trees per interaction term.
Alternatively, you can find optimal values of fitcgam name-value arguments by using the bayesopt function. For an example, see Optimize Cross-Validated GAM Using bayesopt.
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').
load ionosphere
Create a cross-validated GAM by using the default cross-validation option. Specify the 'CrossVal' name-value argument as 'on'. Specify to include all available interaction terms whose p-values are not greater than 0.05.
rng('default') % For reproducibility
CVMdl = fitcgam(X,Y,'CrossVal','on','Interactions','all','MaxPValue',0.05);
If you specify 'Mode' as 'cumulative' for kfoldLoss, then the function returns cumulative errors, which are the average errors across all folds obtained using the same number of trees for each fold. Display the number of trees for each fold.
CVMdl.NumTrainedPerFold
ans = struct with fields:
PredictorTrees: [65 64 59 61 60 66 65 62 64 61]
InteractionTrees: [1 2 2 2 2 1 2 2 2 2]
kfoldLoss can compute cumulative errors using up to 59 predictor trees and one interaction tree.
Plot the cumulative, 10-fold cross-validated classification error (misclassification rate in decimal). Specify 'IncludeInteractions' as false to exclude interaction terms from the computation.
L_noInteractions = kfoldLoss(CVMdl,'Mode','cumulative','IncludeInteractions',false);
figure
plot(0:min(CVMdl.NumTrainedPerFold.PredictorTrees),L_noInteractions)
The first element of L_noInteractions is the average error over all folds obtained using only the intercept (constant) term. The (J+1)th element of L_noInteractions is the average error obtained using the intercept term and the first J predictor trees per linear term. Plotting the cumulative loss allows you to monitor how the error changes as the number of predictor trees in the GAM increases.
Find the minimum error and the number of predictor trees used to achieve the minimum error.
[M,I] = min(L_noInteractions)
M = 0.0655
I = 23
The GAM achieves the minimum error when it includes 22 predictor trees.
Compute the cumulative classification error using both linear terms and interaction terms.
L = kfoldLoss(CVMdl,'Mode','cumulative')
L = 2×1
0.0712
0.0712
The first element of L is the average error over all folds obtained using the intercept (constant) term and all predictor trees per linear term. The second element of L is the average error obtained using the intercept term, all predictor trees per linear term, and one interaction tree per interaction term. The error does not decrease when interaction terms are added.
If you are satisfied with the error when the number of predictor trees is 22, you can create a predictive model by training the univariate GAM again and specifying 'NumTreesPerPredictor',22 without cross-validation.
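For example, a call along the following lines retrains the univariate GAM on all of the data without cross-validation; the variable name MdlFinal is arbitrary, and 22 is the number of predictor trees found above.

MdlFinal = fitcgam(X,Y,'NumTreesPerPredictor',22);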
CVMdl — Cross-validated partitioned classifier
ClassificationPartitionedModel object | ClassificationPartitionedEnsemble object | ClassificationPartitionedGAM object

Cross-validated partitioned classifier, specified as a ClassificationPartitionedModel, ClassificationPartitionedEnsemble, or ClassificationPartitionedGAM object. You can create the object in two ways (see the sketch after this list):

Pass a trained classification model listed in the following table to its crossval object function.

Train a classification model using a function listed in the following table and specify one of the cross-validation name-value arguments for the function.
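As a sketch of the two approaches, using a classification tree as the underlying learner:

% Way 1: train a full model, then pass it to the crossval object function
Mdl = fitctree(X,Y);
CVMdl = crossval(Mdl); % 10-fold cross-validation by default

% Way 2: request cross-validation when training
CVMdl = fitctree(X,Y,'CrossVal','on');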
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: kfoldLoss(CVMdl,'Folds',[1 2 3 5]) specifies to use the first, second, third, and fifth folds to compute the classification loss, but to exclude the fourth fold.

'Folds' — Fold indices to use
1:CVMdl.KFold (default) | positive integer vector

Fold indices to use, specified as a positive integer vector. The elements of Folds must be within the range from 1 to CVMdl.KFold. The software uses only the folds specified in Folds.

Example: 'Folds',[1 4 10]
Data Types: single | double
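For instance, the following sketch computes the loss of the cross-validated tree from the first example using only three of its folds; Lsubset is an arbitrary variable name.

Lsubset = kfoldLoss(cvtree,'Folds',[1 4 10]); % ignore the other seven folds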
'IncludeInteractions' — Flag to include interaction terms
true | false

Flag to include interaction terms of the model, specified as true or false. This argument is valid only for a generalized additive model (GAM). That is, you can specify this argument only when CVMdl is ClassificationPartitionedGAM.

The default value is true if the models in CVMdl (CVMdl.Trained) contain interaction terms. The value must be false if the models do not contain interaction terms.

Data Types: logical
'LossFun' — Loss function
'classiferror' | 'binodeviance' | 'crossentropy' | 'exponential' | 'hinge' | 'logit' | 'mincost' | 'quadratic' | function handle

Loss function, specified as a built-in loss function name or a function handle.
The default loss function depends on the model type of CVMdl:

The default value is 'classiferror' if the model type is an ensemble, generalized additive model, neural network, or support vector machine classifier.

The default value is 'mincost' if the model type is a discriminant analysis, k-nearest neighbor, naive Bayes, or tree classifier.

'classiferror' and 'mincost' are equivalent when you use the default cost matrix. See Algorithms for more information.
This table lists the available loss functions. Specify one using its corresponding character vector or string scalar.
Value | Description |
---|---|
'binodeviance' | Binomial deviance |
'classiferror' | Misclassified rate in decimal |
'crossentropy' | Cross-entropy loss (for neural networks only) |
'exponential' | Exponential loss |
'hinge' | Hinge loss |
'logit' | Logistic loss |
'mincost' | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |
'quadratic' | Quadratic loss |
'mincost' is appropriate for classification scores that are posterior probabilities. The predict and kfoldPredict functions of discriminant analysis, generalized additive model, k-nearest neighbor, naive Bayes, neural network, and tree classifiers return such scores by default.
For ensemble models that use 'Bag' or 'Subspace' methods, classification scores are posterior probabilities by default. For ensemble models that use 'AdaBoostM1', 'AdaBoostM2', 'GentleBoost', or 'LogitBoost' methods, you can use posterior probabilities as classification scores by specifying the double-logit score transform. For example, enter:

CVMdl.ScoreTransform = 'doublelogit';
For SVM models, you can specify to use posterior probabilities as classification scores by setting 'FitPosterior',true when you cross-validate the model using fitcsvm.
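As a sketch, you can cross-validate an SVM with posterior probabilities and then compute the minimal expected misclassification cost as follows. The predictor data X and labels Y are assumed to be in the workspace, and the kernel settings are illustrative only.

CVSVMMdl = fitcsvm(X,Y,'CrossVal','on','FitPosterior',true, ...
    'KernelFunction','rbf','Standardize',true);
Lsvm = kfoldLoss(CVSVMMdl,'LossFun','mincost'); % scores are posterior probabilities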
Specify your own function using function handle notation. Suppose that n is the number of observations in the training data (CVMdl.NumObservations) and K is the number of classes (numel(CVMdl.ClassNames)). Your function must have the signature lossvalue = lossfun(C,S,W,Cost), where:
The output argument lossvalue is a scalar.

You specify the function name (lossfun).
C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in CVMdl.ClassNames. Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.
S is an n-by-K numeric matrix of classification scores. The column order corresponds to the class order in CVMdl.ClassNames. The input S resembles the output argument score of kfoldPredict.
W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes its elements to sum to 1.
Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification, and 1 for misclassification.
Specify your function using 'LossFun',@lossfun.
For more details on loss functions, see Classification Loss.
Example: 'LossFun','hinge'
Data Types: char | string | function_handle
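As an illustration, the following sketch defines a hypothetical custom loss function that matches the required signature (the weighted shortfall of the score of the true class) and passes it to kfoldLoss for the cross-validated tree from the first example. Save the function in a file named scoreLoss.m on the path; the function and variable names are arbitrary.

function lossvalue = scoreLoss(C,S,W,Cost)
% C    - n-by-K logical matrix of true class memberships
% S    - n-by-K matrix of classification scores
% W    - n-by-1 vector of observation weights (normalized to sum to 1)
% Cost - K-by-K misclassification cost matrix (not used by this loss)
trueScores = sum(S.*C,2);              % score of the true class for each observation
lossvalue = sum(W.*(1 - trueScores));  % weighted average shortfall
end

L = kfoldLoss(cvtree,'LossFun',@scoreLoss);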
'Mode' — Aggregation level for output
'average' (default) | 'individual' | 'cumulative'

Aggregation level for the output, specified as 'average', 'individual', or 'cumulative'.
Value | Description |
---|---|
'average' | The output is a scalar average over all folds. |
'individual' | The output is a vector of length k containing one value per fold, where k is the number of folds. |
'cumulative' | The output is a numeric column vector of cumulative averages over all folds (see the description of the output argument L for details). Note: To specify this value, CVMdl must be a ClassificationPartitionedEnsemble or ClassificationPartitionedGAM object. |
Example: 'Mode','individual'
L — Classification loss
numeric scalar | numeric column vector

Classification loss, returned as a numeric scalar or numeric column vector.
If Mode is 'average', then L is the average classification loss over all folds.

If Mode is 'individual', then L is a k-by-1 numeric column vector containing the classification loss for each fold, where k is the number of folds.
If Mode is 'cumulative' and CVMdl is ClassificationPartitionedEnsemble, then L is a min(CVMdl.NumTrainedPerFold)-by-1 numeric column vector. Each element j is the average classification loss over all folds that the function obtains by using ensembles trained with weak learners 1:j.
If Mode is 'cumulative' and CVMdl is ClassificationPartitionedGAM, then the output value depends on the IncludeInteractions value.
If IncludeInteractions is false, then L is a (1 + min(NumTrainedPerFold.PredictorTrees))-by-1 numeric column vector. The first element of L is the average classification loss over all folds that is obtained using only the intercept (constant) term. The (j + 1)th element of L is the average loss obtained using the intercept term and the first j predictor trees per linear term.
If IncludeInteractions is true, then L is a (1 + min(NumTrainedPerFold.InteractionTrees))-by-1 numeric column vector. The first element of L is the average classification loss over all folds that is obtained using the intercept (constant) term and all predictor trees per linear term. The (j + 1)th element of L is the average loss obtained using the intercept term, all predictor trees per linear term, and the first j interaction trees per interaction term.
Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.
Consider the following scenario.
L is the weighted average classification loss.
n is the sample size.
For binary classification:
yj is the observed class label. The software codes it as -1 or 1, indicating the negative or positive class (or the first or second class in the ClassNames property), respectively.
f(Xj) is the positive-class classification score for observation (row) j of the predictor data X.
mj = yjf(Xj) is the classification score for classifying observation j into the class corresponding to yj. Positive values of mj indicate correct classification and do not contribute much to the average loss. Negative values of mj indicate incorrect classification and contribute significantly to the average loss.
For algorithms that support multiclass classification (that is, K ≥ 3):
yj* is a vector of K - 1 zeros, with 1 in the position corresponding to the true, observed class yj. For example, if the true class of the second observation is the third class and K = 4, then y2* = [0 0 1 0]′. The order of the classes corresponds to the order in the ClassNames property of the input model.

f(Xj) is the length-K vector of class scores for observation j of the predictor data X. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.
mj = yj*′f(Xj). Therefore, mj is the scalar classification score that the model predicts for the true, observed class.
The weight for observation j is wj. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so they sum to 1. Therefore,

$$\sum_{j=1}^{n} w_j = 1.$$
Given this scenario, the following table describes the supported loss functions that you can specify by using the 'LossFun' name-value pair argument.
Loss Function | Value of LossFun | Equation |
---|---|---|
Binomial deviance | 'binodeviance' | $L = \sum_{j=1}^{n} w_j \log\{1 + \exp[-2m_j]\}$ |
Misclassified rate in decimal | 'classiferror' | $L = \sum_{j=1}^{n} w_j I\{\hat{y}_j \ne y_j\}$, where $\hat{y}_j$ is the class label corresponding to the class with the maximal score, and $I\{\cdot\}$ is the indicator function. |
Cross-entropy loss | 'crossentropy' | Cross-entropy loss is for neural networks only. The weighted cross-entropy loss is $L = -\sum_{j=1}^{n} \tilde{w}_j \log(m_j)/n$, where the weights $\tilde{w}_j$ are normalized to sum to n instead of 1. |
Exponential loss | 'exponential' | $L = \sum_{j=1}^{n} w_j \exp(-m_j)$ |
Hinge loss | 'hinge' | $L = \sum_{j=1}^{n} w_j \max\{0, 1 - m_j\}$ |
Logit loss | 'logit' | $L = \sum_{j=1}^{n} w_j \log[1 + \exp(-m_j)]$ |
Minimal expected misclassification cost | 'mincost' | The software computes the weighted minimal expected classification cost for observations j = 1,...,n: it estimates the expected misclassification cost of classifying observation j into each class, predicts the class with the minimal expected cost, and identifies the cost cj incurred for that prediction. The weighted average of the minimal expected misclassification cost loss is $L = \sum_{j=1}^{n} w_j c_j$. If you use the default cost matrix (whose element value is 0 for correct classification and 1 for incorrect classification), then the 'mincost' loss is equivalent to the 'classiferror' loss. |
Quadratic loss | 'quadratic' | $L = \sum_{j=1}^{n} w_j (1 - m_j)^2$ |
This figure compares the loss functions (except 'crossentropy' and 'mincost') over the score m for one observation. Some functions are normalized to pass through the point (0,1).
kfoldLoss computes the classification loss as described in the corresponding loss object function. For a model-specific description, see the appropriate loss function reference page in the following table.
Model Type | loss Function |
---|---|
Discriminant analysis classifier | loss |
Ensemble classifier | loss |
Generalized additive model classifier | loss |
k-nearest neighbor classifier | loss |
Naive Bayes classifier | loss |
Neural network classifier | loss |
Support vector machine classifier | loss |
Binary decision tree for multiclass classification | loss |
Usage notes and limitations:
This function supports k-nearest neighbor and SVM model objects fitted with GPU array input arguments.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
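A sketch of this workflow, assuming Parallel Computing Toolbox is available and the predictor data X and labels Y are in the workspace:

Xgpu = gpuArray(X);        % move the predictor data to the GPU
Mdl = fitcknn(Xgpu,Y);     % fit a k-nearest neighbor classifier with GPU array input
CVMdl = crossval(Mdl);     % 10-fold cross-validation
L = kfoldLoss(CVMdl);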
ClassificationPartitionedModel | kfoldEdge | kfoldfun | kfoldMargin | kfoldPredict