Monte Carlo simulation is the process of generating independent,
random draws from a specified probabilistic model. When simulating
time series models, one draw (or realization) is an entire sample
path of specified length *N*, *y*_{1}, *y*_{2},...,*y _{N}*.
When you generate a large number of draws, say *M*, you generate *M* sample paths of length *N*.

Some extensions of Monte Carlo simulation rely on generating dependent random draws, such as Markov Chain Monte Carlo (MCMC). The `simulate` function in Econometrics Toolbox™ generates independent realizations.

Some applications of Monte Carlo simulation are:

- Demonstrating theoretical results
- Forecasting future events
- Estimating the probability of future events

Conditional variance models specify the dynamic evolution of the variance of a process over time. Perform Monte Carlo simulation of conditional variance models by:

- Specifying any required presample data (or using default presample data).
- Generating the next conditional variance recursively using the specified conditional variance model.
- Simulating the next innovation from the innovation distribution (Gaussian or Student's *t*) using the current conditional variance.

For example, consider a GARCH(1,1) process without a mean offset, $${\epsilon}_{t}={\sigma}_{t}{z}_{t},$$ where *z _{t}* follows a standardized Gaussian or Student's *t* distribution, and the conditional variance evolves as

$${\sigma}_{t}^{2}=\kappa +{\gamma}_{1}{\sigma}_{t-1}^{2}+{\alpha}_{1}{\epsilon}_{t-1}^{2}.$$

Suppose that the innovation distribution is Gaussian.

Given presample variance $${\sigma}_{0}^{2}$$ and presample innovation $${\epsilon}_{0},$$ realizations of the conditional variance and innovation process are recursively generated:

$${\sigma}_{1}^{2}=\kappa +{\gamma}_{1}{\sigma}_{0}^{2}+{\alpha}_{1}{\epsilon}_{0}^{2}$$

Sample $${\epsilon}_{1}$$ from a Gaussian distribution with variance $${\sigma}_{1}^{2}$$

$${\sigma}_{2}^{2}=\kappa +{\gamma}_{1}{\sigma}_{1}^{2}+{\alpha}_{1}{\epsilon}_{1}^{2}$$

Sample $${\epsilon}_{2}$$ from a Gaussian distribution with variance $${\sigma}_{2}^{2}$$

$$\vdots $$

$${\sigma}_{N}^{2}=\kappa +{\gamma}_{1}{\sigma}_{N-1}^{2}+{\alpha}_{1}{\epsilon}_{N-1}^{2}$$

Sample $${\epsilon}_{N}$$ from a Gaussian distribution with variance $${\sigma}_{N}^{2}$$
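The recursion above can be sketched in a few lines of code. The following Python sketch is an illustration only, not the Econometrics Toolbox `simulate` function, and the parameter values in the example call are hypothetical:

```python
import numpy as np

def simulate_garch11(kappa, gamma1, alpha1, N, sigma0_sq, eps0, rng=None):
    """Simulate one realization of a GARCH(1,1) process with Gaussian innovations.

    Follows the recursion above: each conditional variance is computed from the
    previous variance and innovation, then the next innovation is drawn from
    a Gaussian distribution with that variance.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_sq = np.empty(N)
    eps = np.empty(N)
    prev_sigma_sq, prev_eps = sigma0_sq, eps0
    for t in range(N):
        # sigma_t^2 = kappa + gamma_1 * sigma_{t-1}^2 + alpha_1 * eps_{t-1}^2
        sigma_sq[t] = kappa + gamma1 * prev_sigma_sq + alpha1 * prev_eps**2
        # eps_t = sigma_t * z_t, with z_t standard Gaussian
        eps[t] = np.sqrt(sigma_sq[t]) * rng.standard_normal()
        prev_sigma_sq, prev_eps = sigma_sq[t], eps[t]
    return sigma_sq, eps

# Hypothetical parameter values, presample variance 1, presample innovation 0
sigma_sq, eps = simulate_garch11(kappa=0.1, gamma1=0.8, alpha1=0.1,
                                 N=1000, sigma0_sq=1.0, eps0=0.0)
```

Each call produces one draw, i.e., one sample path of length *N*; repeating the call *M* times produces the *M* independent realizations used below.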

Random draws are generated from EGARCH and GJR models similarly, using the corresponding conditional variance equations.

Using many simulated paths, you can estimate various features
of the model. However, Monte Carlo estimation is based on a finite
number of simulations. Therefore, Monte Carlo estimates are subject
to some amount of error. You can reduce the amount of Monte Carlo
error in your simulation study by increasing the number of sample
paths, *M*, that you generate from your model.

For example, to estimate the probability of a future event:

- Generate *M* sample paths from your model.
- Estimate the probability of the future event using the sample proportion of the event occurrence across the *M* simulations, $$\widehat{p}=\frac{\#\text{ times event occurs in }M\text{ draws}}{M}.$$
- Calculate the Monte Carlo standard error for the estimate, $$se=\sqrt{\frac{\widehat{p}(1-\widehat{p})}{M}}.$$

You can reduce the Monte Carlo error of the probability estimate by increasing the number of realizations. If you know the desired precision of your estimate, you can solve for the number of realizations needed to achieve that level of precision.
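As a minimal Python sketch of these calculations, the event below is a simple stand-in (a standard Gaussian draw exceeding 2) rather than an event on full sample paths, and the draw count and target precision are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100_000  # number of Monte Carlo draws

# Stand-in "model": the event is a standard Gaussian draw exceeding 2.
# (For a time series model, each draw would be an entire sample path.)
event = rng.standard_normal(M) > 2

p_hat = event.mean()                   # sample proportion of event occurrence
se = np.sqrt(p_hat * (1 - p_hat) / M)  # Monte Carlo standard error

# Solving se = sqrt(p(1-p)/M) for M gives the number of realizations
# needed to achieve a desired precision (target standard error):
target_se = 0.001
M_needed = int(np.ceil(p_hat * (1 - p_hat) / target_se**2))
```

Quadrupling the number of realizations halves the standard error, which is why `M_needed` grows with the square of the desired precision.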

- Simulate Conditional Variance Model
- Simulate GARCH Models
- Assess EGARCH Forecast Bias Using Simulations