IsolationForest
Description
Use an isolation forest (ensemble of isolation trees) model object IsolationForest for outlier detection and novelty detection.
- Outlier detection (detecting anomalies in training data) — Detect anomalies in training data by using the iforest function. The iforest function builds an IsolationForest object and returns anomaly indicators and scores for the training data.
- Novelty detection (detecting anomalies in new data with uncontaminated training data) — Create an IsolationForest object by passing uncontaminated training data (data with no outliers) to iforest, and detect anomalies in new data by passing the object and the new data to the object function isanomaly. The isanomaly function returns anomaly indicators and scores for the new data (see the sketch after this list).
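A minimal sketch of the two workflows, assuming hypothetical training data Xtrain and new data Xnew (matrices or tables of predictor data):

% Outlier detection: flag anomalies in the training data itself
[Mdl,tf,scores] = iforest(Xtrain,ContaminationFraction=0.05);

% Novelty detection: train on uncontaminated data, then score new data
Mdl = iforest(Xtrain);                   % assumes Xtrain contains no outliers
[tfNew,scoresNew] = isanomaly(Mdl,Xnew);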
Creation
Create an IsolationForest object by using the iforest function.
Properties
CategoricalPredictors
— Categorical predictor indices
vector of positive integers | []
This property is read-only.
Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).
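A brief illustrative sketch (Tbl is hypothetical training data in which the first and third variables are categorical):

Mdl = iforest(Tbl,CategoricalPredictors=[1 3]);  % declare variables 1 and 3 as categorical
Mdl.CategoricalPredictors                        % returns [1 3]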
ContaminationFraction
— Fraction of anomalies in training data
numeric scalar in the range [0,1]
This property is read-only.
Fraction of anomalies in the training data, specified as a numeric scalar in the range [0,1].
- If the ContaminationFraction value is 0, then iforest treats all training observations as normal observations, and sets the score threshold (ScoreThreshold property value) to the maximum anomaly score value of the training data.
- If the ContaminationFraction value is in the range (0,1], then iforest determines the threshold value (ScoreThreshold property value) so that the function detects the specified fraction of training observations as anomalies.
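A minimal sketch of the default behavior (X is hypothetical training data):

[Mdl,tf] = iforest(X);       % ContaminationFraction is 0 by default
any(tf)                      % false: no training observation is flagged as an anomaly
Mdl.ScoreThreshold           % equals the maximum anomaly score in the training data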
NumLearners
— Number of isolation trees
positive integer scalar
This property is read-only.
Number of isolation trees, specified as a positive integer scalar.
NumObservationsPerLearner
— Number of observations for each isolation tree
positive integer scalar
This property is read-only.
Number of observations to draw from the training data without replacement for each isolation tree, specified as a positive integer scalar.
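Both NumLearners and NumObservationsPerLearner are set when you call iforest. A brief sketch with illustrative values (X is hypothetical training data):

Mdl = iforest(X,NumLearners=200,NumObservationsPerLearner=256);
Mdl.NumLearners                 % 200 isolation trees
Mdl.NumObservationsPerLearner   % 256 observations drawn per tree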
PredictorNames
— Predictor variable names
cell array of character vectors
This property is read-only.
Predictor variable names, specified as a cell array of character vectors. The order of the elements in PredictorNames corresponds to the order in which the predictor names appear in the training data.
ScoreThreshold
— Threshold for anomaly score
numeric scalar in the range [0,1]
This property is read-only.
Threshold for the anomaly score used to identify anomalies in the training data, specified as a numeric scalar in the range [0,1]. The software identifies observations with anomaly scores above the threshold as anomalies.

The iforest function determines the threshold value to detect the specified fraction (ContaminationFraction property) of training observations as anomalies.

The isanomaly object function uses the ScoreThreshold property value as the default value of the ScoreThreshold name-value argument.
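A brief sketch of the relationship between the scores, the threshold, and the anomaly indicators (X is hypothetical training data):

[Mdl,tf,scores] = iforest(X,ContaminationFraction=0.05);
isequal(tf,scores > Mdl.ScoreThreshold)   % expected to be true: anomalies are observations with scores above the threshold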
Object Functions
isanomaly | Find anomalies in data using isolation forest
Examples
Detect Outliers
Detect outliers (anomalies in training data) by using the iforest function.
Load the sample data set NYCHousing2015.
load NYCHousing2015
The data set includes 10 variables with information on the sales of properties in New York City in 2015. Display a summary of the data set.
summary(NYCHousing2015)
NYCHousing2015: 91446x10 table

Variables:
    BOROUGH: double
    NEIGHBORHOOD: cell array of character vectors
    BUILDINGCLASSCATEGORY: cell array of character vectors
    RESIDENTIALUNITS: double
    COMMERCIALUNITS: double
    LANDSQUAREFEET: double
    GROSSSQUAREFEET: double
    YEARBUILT: double
    SALEPRICE: double
    SALEDATE: datetime

Statistics for applicable variables:

                             NumMissing    Min            Median         Max            Mean          Std
    BOROUGH                      0         1              3              5              2.8431        1.3343
    NEIGHBORHOOD                 0
    BUILDINGCLASSCATEGORY        0
    RESIDENTIALUNITS             0         0              1              8759           2.1789        32.2738
    COMMERCIALUNITS              0         0              0              612            0.2201        3.2991
    LANDSQUAREFEET               0         0              1700           29305534       2.8752e+03    1.0118e+05
    GROSSSQUAREFEET              0         0              1056           8942176        4.6598e+03    4.3098e+04
    YEARBUILT                    0         0              1939           2016           1.7951e+03    526.9998
    SALEPRICE                    0         0              333333         4.1111e+09     1.2364e+06    2.0130e+07
    SALEDATE                     0         01-Jan-2015    09-Jul-2015    31-Dec-2015    07-Jul-2015   2470:47:17
The SALEDATE column is a datetime array, which is not supported by iforest. Create columns for the month and day numbers of the datetime values, and delete the SALEDATE column.
[~,NYCHousing2015.MM,NYCHousing2015.DD] = ymd(NYCHousing2015.SALEDATE);
NYCHousing2015.SALEDATE = [];
The columns BOROUGH, NEIGHBORHOOD, and BUILDINGCLASSCATEGORY contain categorical predictors. Display the number of categories for the categorical predictors.
length(unique(NYCHousing2015.BOROUGH))
ans = 5
length(unique(NYCHousing2015.NEIGHBORHOOD))
ans = 254
length(unique(NYCHousing2015.BUILDINGCLASSCATEGORY))
ans = 48
For a categorical variable with more than 64 categories, the iforest function uses an approximate splitting method that can reduce the accuracy of the isolation forest model. Remove the NEIGHBORHOOD column, which contains a categorical variable with 254 categories.
NYCHousing2015.NEIGHBORHOOD = [];
Train an isolation forest model for NYCHousing2015. Specify the fraction of anomalies in the training observations as 0.1, and specify the first variable (BOROUGH) as a categorical predictor. The first variable is a numeric array, so iforest assumes it is a continuous variable unless you specify the variable as a categorical variable.
rng("default") % For reproducibility [Mdl,tf,scores] = iforest(NYCHousing2015,ContaminationFraction=0.1, ... CategoricalPredictors=1);
Mdl is an IsolationForest object. iforest also returns the anomaly indicators (tf) and anomaly scores (scores) for the training data NYCHousing2015.
Plot a histogram of the score values. Create a vertical line at the score threshold corresponding to the specified fraction.
histogram(scores)
xline(Mdl.ScoreThreshold,"r-",["Threshold" Mdl.ScoreThreshold])
If you want to identify anomalies with a different contamination fraction (for example, 0.01), you can train a new isolation forest model.
rng("default") % For reproducibility [newMdl,newtf,scores] = iforest(NYCHousing2015, ... ContaminationFraction=0.01,CategoricalPredictors=1);
If you want to identify anomalies with a different score threshold value (for example, 0.65), you can pass the IsolationForest object, the training data, and a new threshold value to the isanomaly function.
[newtf,scores] = isanomaly(Mdl,NYCHousing2015,ScoreThreshold=0.65);
Note that changing the contamination fraction or score threshold changes the anomaly indicators only, and does not affect the anomaly scores. Therefore, if you do not want to compute the anomaly scores again by using iforest or isanomaly, you can obtain a new anomaly indicator with the existing score values.
Change the fraction of anomalies in the training data to 0.01.
newContaminationFraction = 0.01;
Find a new score threshold by using the quantile function.
newScoreThreshold = quantile(scores,1-newContaminationFraction)
newScoreThreshold = 0.7045
Obtain a new anomaly indicator.
newtf = scores > newScoreThreshold;
Detect Novelties
Create an IsolationForest object for uncontaminated training observations by using the iforest function. Then detect novelties (anomalies in new data) by passing the object and the new data to the object function isanomaly.
Load the 1994 census data stored in census1994.mat. The data set consists of demographic data from the US Census Bureau to predict whether an individual makes over $50,000 per year.
load census1994
census1994 contains the training data set adultdata and the test data set adulttest.
Train an isolation forest model for adultdata. Assume that adultdata does not contain outliers.
rng("default") % For reproducibility [Mdl,tf,s] = iforest(adultdata);
Mdl is an IsolationForest object. iforest also returns the anomaly indicators tf and anomaly scores s for the training data adultdata. If you do not specify the ContaminationFraction name-value argument as a value greater than 0, then iforest treats all training observations as normal observations, meaning all the values in tf are logical 0 (false). The function sets the score threshold to the maximum score value. Display the threshold value.
Mdl.ScoreThreshold
ans = 0.8600
Find anomalies in adulttest by using the trained isolation forest model.
[tf_test,s_test] = isanomaly(Mdl,adulttest);
The isanomaly function returns the anomaly indicators tf_test and scores s_test for adulttest. By default, isanomaly identifies observations with scores above the threshold (Mdl.ScoreThreshold) as anomalies.
Create histograms for the anomaly scores s and s_test. Create a vertical line at the threshold of the anomaly scores.
histogram(s,Normalization="probability")
hold on
histogram(s_test,Normalization="probability")
xline(Mdl.ScoreThreshold,"r-",join(["Threshold" Mdl.ScoreThreshold]))
legend("Training Data","Test Data",Location="northwest")
hold off
Display the observation index of the anomalies in the test data.
find(tf_test)
ans = 15655
The anomaly score distribution of the test data is similar to that of the training data, so isanomaly detects a small number of anomalies in the test data with the default threshold value. You can specify a different threshold value by using the ScoreThreshold name-value argument. For an example, see Specify Anomaly Score Threshold.
More About
Isolation Forest
The isolation forest algorithm [1] detects anomalies by isolating anomalies from normal points using an ensemble of isolation trees.
The iforest function creates an isolation forest model (ensemble of isolation trees) for training observations and detects outliers (anomalies in the training data). Each isolation tree is trained for a subset of training observations as follows:
- iforest draws samples without replacement from the training observations for each tree.
- iforest grows a tree by choosing a split variable and split position uniformly at random. The function continues until every sample reaches a separate leaf node for each tree.
This algorithm assumes the data has only a few anomalies and they are different from normal points. Therefore, an anomaly reaches a separate leaf node closer to the root node and has a shorter path length (the distance from the root node to the leaf node) than normal points. The iforest function identifies outliers using anomaly scores that are defined based on the average path lengths over all isolation trees.
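As an illustration of this behavior, a minimal sketch with synthetic data in which one point lies far from the rest (the exact score values depend on the random samples):

rng("default")                              % for reproducibility
X = [randn(1000,2); 10 10];                 % 1000 normal points plus one distant point
[~,tf,scores] = iforest(X,ContaminationFraction=0.001);
[~,idx] = max(scores);                      % the distant point is expected to have the largest score
idx                                         % likely 1001, the index of the distant point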
The isanomaly function uses a trained isolation forest model to detect anomalies in the data. For novelty detection (detecting anomalies in new data with uncontaminated training data), you can train an isolation forest model with uncontaminated training data (data with no outliers) and use it to detect anomalies in new data. For each observation of the new data, the function finds the corresponding leaf node in each tree, finds the average path length to reach a leaf node from the root node in the trained isolation forest model, and returns an anomaly indicator and score.
For more details, see Anomaly Detection with Isolation Forest.
Anomaly Scores
The isolation forest algorithm computes the anomaly score s(x) of an observation x by normalizing the path length h(x):

s(x) = 2^(-E[h(x)]/c(n)),
where E[h(x)] is the average path length over all isolation trees in the isolation forest, and c(n) is the average path length of unsuccessful searches in a binary search tree of n observations.
The score approaches 1 as E[h(x)] approaches 0. Therefore, a score value close to 1 indicates an anomaly.
The score approaches 0 as E[h(x)] approaches n – 1. Also, the score approaches 0.5 when E[h(x)] approaches c(n). Therefore, a score value smaller than 0.5 and close to 0 indicates a normal point.
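The limiting values follow directly from the formula. Illustrative arithmetic only (not an API call), with a hypothetical normalization constant c(n):

cn = 10;        % hypothetical value of c(n)
2^(-0/cn)       % 1:   very short average path length, strong anomaly
2^(-cn/cn)      % 0.5: average path length equal to c(n), typical normal point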
Tips
You can use interpretability features, such as lime, shapley, partialDependence, and plotPartialDependence, to interpret how predictors contribute to anomaly scores. Define a custom function that returns anomaly scores, and then pass the custom function to the interpretability functions. For an example, see Specify Model Using Function Handle.
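A minimal sketch of this pattern, assuming the Mdl and NYCHousing2015 from the Detect Outliers example; iforestScores is a hypothetical helper name:

scoreFcn = @(X) iforestScores(Mdl,X);     % custom function that returns only the anomaly scores
plotPartialDependence(scoreFcn,"GROSSSQUAREFEET",NYCHousing2015)

function s = iforestScores(Mdl,X)
[~,s] = isanomaly(Mdl,X);                 % second output of isanomaly is the anomaly score
end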
References
[1] Liu, F. T., K. M. Ting, and Z. Zhou. "Isolation Forest," 2008 Eighth IEEE International Conference on Data Mining. Pisa, Italy, 2008, pp. 413-422.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
The isanomaly function supports code generation.
For more information, see Introduction to Code Generation.
Version History
Introduced in R2021b
See Also