Model reference adaptive control (MRAC) is a model-based, real-time adaptive control algorithm that computes control actions to make an uncertain controlled system track the behavior of a given reference plant model. Using Simulink® Control Design™ software, you can implement MRAC using the Model Reference Adaptive Control block.

The following figure shows the structure of an MRAC controller.

The goal of the controller is to make the states (x) of the controlled system track the states (xm) of the reference model for a given reference signal (r). For more information on the controlled system and reference models, see Nominal and Reference Models. The MRAC controller uses the following process to generate a control signal.

1. Compute the error e between the states of the controlled system and the states of the reference model.

2. Compute the features ϕ(x) for the internal model of the system disturbances and uncertainty.

3. Update the weight parameters of the disturbance model based on the error e.

4. Update the values of the feedback and feedforward gains based on the error e. You can choose to have the controller adapt the feedforward gains, the feedback gains, or both.

5. Apply the updated gains and parameters to the reference signal, plant states, and disturbance model features.

6. Compute the control action u. For more information, see Control Structure.
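As a concrete illustration, the six steps above can be sketched as a single update of a scalar MRAC loop. This is a minimal Python sketch, not the block's implementation; the gains, learning rates, time step, and feature map `phi` are all hypothetical values, and the gradient-style update rules anticipate the derivation in the sections that follow.

```python
# Hypothetical scalar example of one MRAC update step; all numeric values
# and the feature map phi are illustrative choices, not block defaults.

def mrac_step(x, x_m, r, k_x, k_r, w, P, B, gamma_x, gamma_r, gamma_w, phi, dt):
    """Advance the adaptive parameters by one Euler step and compute u."""
    e = x - x_m                              # 1. state tracking error
    feat = phi(x)                            # 2. disturbance-model features
    w   = w   + dt * gamma_w * feat * (e * P * B)  # 3. update disturbance weights
    k_x = k_x + dt * gamma_x * x    * (e * P * B)  # 4. update feedback gain
    k_r = k_r + dt * gamma_r * r    * (e * P * B)  # 4. update feedforward gain
    u_ad = w * feat                          # 5. adaptive component w^T phi(x)
    u = k_x * x + k_r * r - u_ad             # 6. control action
    return u, k_x, k_r, w

u, k_x, k_r, w = mrac_step(x=1.0, x_m=0.8, r=1.0,
                           k_x=-1.0, k_r=1.0, w=0.0, P=1.0, B=1.0,
                           gamma_x=10.0, gamma_r=10.0, gamma_w=10.0,
                           phi=lambda x: x, dt=0.01)
```

In a real controller this step runs once per sample time, with the reference model simulated in parallel to supply x_m.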

### Nominal and Reference Models

The controlled system, which typically exhibits modeling uncertainty and external disturbances, has the following nominal state equation. This model represents the nominal behavior that you expect from the plant.

`$\stackrel{˙}{x}\left(t\right)=Ax\left(t\right)+B\left(u\left(t\right)+f\left(x\right)\right)$`

Here:

• x(t) is the state of the system you want to control.

• u(t) is the control input.

• A is a constant state matrix.

• B is a constant control effectiveness matrix.

• f(x) is the matched uncertainty in the system.

The reference plant model is the ideal system that characterizes the desired behavior that you want to achieve in practice.

`${\stackrel{˙}{x}}_{m}\left(t\right)={A}_{m}{x}_{m}\left(t\right)+{B}_{m}r\left(t\right)$`

Here:

• r(t) is the external reference signal.

• xm(t) is the state of the reference plant model. Since r(t) is known, you can simulate the reference model to get xm(t).

• Am is a constant state matrix. For a stable reference model, Am must be a Hurwitz matrix; that is, every eigenvalue of Am must have a strictly negative real part.

• Bm is a constant control effectiveness matrix.
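Because r(t) is known, you can simulate the reference model directly to obtain xm(t). The sketch below integrates a hypothetical second-order reference model with forward Euler and checks the Hurwitz condition on Am; the matrices, time step, and step reference are illustrative values, not block defaults.

```python
import numpy as np

# Hypothetical second-order reference model; Am and Bm are illustrative.
Am = np.array([[0.0, 1.0],
               [-4.0, -2.0]])   # eigenvalues -1 +/- sqrt(3)j, so Am is Hurwitz
Bm = np.array([[0.0],
               [4.0]])

# A stable reference model requires every eigenvalue of Am to have a
# strictly negative real part.
assert np.all(np.linalg.eigvals(Am).real < 0), "Am must be Hurwitz"

# Since r(t) is known (a unit step here), integrate x_m' = Am x_m + Bm r
# with forward Euler to obtain x_m(t).
dt, T = 1e-3, 5.0
xm = np.zeros((2, 1))
for _ in range(int(T / dt)):
    r = 1.0
    xm = xm + dt * (Am @ xm + Bm * r)

# Steady state for a unit step is -inv(Am) @ Bm = [1, 0]^T.
```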

### Control Structure

MRAC computes the control input u(t) using the following control structure.

`$\begin{array}{l}u\left(t\right)={k}_{x}x\left(t\right)+{k}_{r}r\left(t\right)-{u}_{ad}\\ {u}_{ad}={w}^{T}\varphi \left(x\right)\end{array}$`

Here:

• kx is the feedback gain matrix.

• kr is the feedforward gain matrix.

• uad is the adaptive control component derived from the disturbance model.

• ϕ(x) contains the disturbance and uncertainty model features. For more information on the model features, see Disturbance and Uncertainty Model Features.

• w is an adaptive control weight vector.

An MRAC controller can learn the values of kx, kr, and w in real time.

When you substitute the control structure into the nominal model and simplify the result, you get the following state equation.

`$\stackrel{˙}{x}\left(t\right)=\left(A+B{k}_{x}\right)x\left(t\right)+B{k}_{r}r\left(t\right)+B\left(f\left(x\right)-{w}^{T}\varphi \left(x\right)\right)$`
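You can confirm this simplification numerically: substituting the control structure into the nominal model and expanding gives the same state derivative as the simplified form. All matrices, the feature map, and the uncertainty below are arbitrary illustrative choices.

```python
import numpy as np

# Check that x' = A x + B (u + f(x)) with u = kx x + kr r - w^T phi(x)
# equals (A + B kx) x + B kr r + B (f(x) - w^T phi(x)).
rng = np.random.default_rng(0)
n, m = 3, 1
A  = rng.standard_normal((n, n))
B  = rng.standard_normal((n, m))
kx = rng.standard_normal((m, n))
kr = rng.standard_normal((m, 1))
w  = rng.standard_normal((n, m))         # weights for an n-feature model
x  = rng.standard_normal((n, 1))
r  = rng.standard_normal((1, 1))
phi = lambda x: np.tanh(x)               # hypothetical feature map
f   = lambda x: np.sin(x[:1])            # hypothetical matched uncertainty

u = kx @ x + kr @ r - w.T @ phi(x)
lhs = A @ x + B @ (u + f(x))
rhs = (A + B @ kx) @ x + B @ kr @ r + B @ (f(x) - w.T @ phi(x))
assert np.allclose(lhs, rhs)
```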

### Uncertainty and Error Dynamics

Using the universal approximation property of neural networks, you can assume that an ideal neural network with unknown ideal weights can parameterize the true uncertainty.

`$f\left(x\right)={w}^{*T}\varphi \left(x\right)+\epsilon$`

Here, w* contains the unknown ideal weights and ε is the approximation error.

Further, if you define the weight vector error as $\stackrel{˜}{w}={w}^{*}-w$ and substitute the parameterized uncertainty into the preceding closed-loop state equation, you get the following state equation.

`$\stackrel{˙}{x}\left(t\right)=\left(A+B{k}_{x}\right)x\left(t\right)+B{k}_{r}r\left(t\right)+B\left({\stackrel{˜}{w}}^{T}\varphi \left(x\right)+\epsilon \right)$`

The goal of the MRAC controller is the asymptotic convergence of the state tracking error e(t) to zero.

`$\begin{array}{l}e\left(t\right)=x\left(t\right)-{x}_{m}\left(t\right)\\ \stackrel{˙}{e}\left(t\right)=\stackrel{˙}{x}\left(t\right)-{\stackrel{˙}{x}}_{m}\left(t\right)\end{array}$`

Substitute the control structure and reference model equations to obtain the following equation for the error dynamics.

`$\stackrel{˙}{e}\left(t\right)=\left(A+B{k}_{x}\right)x\left(t\right)+B{k}_{r}r\left(t\right)+B\left({\stackrel{˜}{w}}^{T}\varphi \left(x\right)+\epsilon \right)-{A}_{m}{x}_{m}\left(t\right)-{B}_{m}r\left(t\right)$`

You can rewrite this equation as follows.

`$\begin{array}{l}\stackrel{˙}{e}\left(t\right)={A}_{m}e\left(t\right)+B{\stackrel{˜}{k}}_{x}x\left(t\right)+B{\stackrel{˜}{k}}_{r}r\left(t\right)+B\left({\stackrel{˜}{w}}^{T}\varphi \left(x\right)+\epsilon \right)\\ {\stackrel{˜}{k}}_{x}={k}_{x}^{*}-{k}_{x}\\ {\stackrel{˜}{k}}_{r}={k}_{r}^{*}-{k}_{r}\end{array}$`

Here, ${\stackrel{˜}{k}}_{x}$ and ${\stackrel{˜}{k}}_{r}$ are the feedback and feedforward gain errors, respectively. ${k}_{x}^{*}$ and ${k}_{r}^{*}$ are the unknown ideal feedback and feedforward gains, which satisfy the matching conditions $A+B{k}_{x}^{*}={A}_{m}$ and $B{k}_{r}^{*}={B}_{m}$.

To derive the MRAC parameter update equations, first define the following Lyapunov function based on the error dynamics.

`$V\left(e,{\stackrel{˜}{k}}_{x},{\stackrel{˜}{k}}_{r},\stackrel{˜}{w}\right)={e}^{T}Pe+\frac{{\stackrel{˜}{k}}_{x}^{2}}{2{\Gamma }_{x}}+\frac{{\stackrel{˜}{k}}_{r}^{2}}{2{\Gamma }_{r}}+\frac{{\stackrel{˜}{w}}^{T}{\Gamma }_{w}^{-1}\stackrel{˜}{w}}{2}$`

Here, Γx, Γr, and Γw are learning rates for the parameter updates. Q is a weighting matrix for state tracking errors.

To obtain the following parameter update equations, take the derivative of the preceding Lyapunov function along the error dynamics and choose the updates so that the sign-indefinite terms cancel, leaving the derivative negative semidefinite.

`$\begin{array}{l}{\stackrel{˙}{k}}_{x}={\Gamma }_{x}x\left(t\right){e}^{T}\left(t\right)PB\\ {\stackrel{˙}{k}}_{r}={\Gamma }_{r}r\left(t\right){e}^{T}\left(t\right)PB\\ \stackrel{˙}{w}={\Gamma }_{w}\varphi \left(x\right){e}^{T}\left(t\right)PB\end{array}$`

Here, P is the solution to the following Lyapunov equation based on the reference model state matrix.

`${A}_{m}^{T}P+P{A}_{m}+Q=0$`
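In Python, you can compute P with `scipy.linalg.solve_continuous_lyapunov`, which solves aX + Xaᴴ = q; passing Amᵀ and −Q yields the equation above. The Am and Q values below are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve Am^T P + P Am + Q = 0 for P.
# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so pass Am^T and -Q.
Am = np.array([[0.0, 1.0],
               [-4.0, -2.0]])            # Hurwitz reference-model state matrix
Q = np.eye(2)                            # positive-definite error weight
P = solve_continuous_lyapunov(Am.T, -Q)

# P must be symmetric positive definite for the Lyapunov argument to hold.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(Am.T @ P + P @ Am + Q, 0)
```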

### Learning Modification

To add robustness at higher learning rates, you can modify the parameter updates to include an optional momentum term. You can choose one of two possible learning modification methods: sigma modification and e-modification.

For sigma modification, the controller subtracts a momentum term from each parameter update; the term is the product of the momentum weight parameter σ and the current parameter value.

`$\begin{array}{l}{\stackrel{˙}{k}}_{x}={\Gamma }_{x}x\left(t\right){e}^{T}\left(t\right)PB-\sigma {k}_{x}\\ {\stackrel{˙}{k}}_{r}={\Gamma }_{r}r\left(t\right){e}^{T}\left(t\right)PB-\sigma {k}_{r}\\ \stackrel{˙}{w}={\Gamma }_{w}\varphi \left(x\right){e}^{T}\left(t\right)PB-\sigma w\end{array}$`

For e-modification, the controller scales the sigma-modification momentum term by the norm of the error vector.

`$\begin{array}{l}{\stackrel{˙}{k}}_{x}={\Gamma }_{x}x\left(t\right){e}^{T}\left(t\right)PB-\sigma |e\left(t\right)|{k}_{x}\\ {\stackrel{˙}{k}}_{r}={\Gamma }_{r}r\left(t\right){e}^{T}\left(t\right)PB-\sigma |e\left(t\right)|{k}_{r}\\ \stackrel{˙}{w}={\Gamma }_{w}\varphi \left(x\right){e}^{T}\left(t\right)PB-\sigma |e\left(t\right)|w\end{array}$`


To adjust the amount of the learning modification for either method, change the value of the momentum weight parameter σ.
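The sketch below evaluates the weight-update derivative without modification, with sigma modification, and with e-modification, using the common convention that the damping term is subtracted. All numbers, and the scalar form, are illustrative assumptions.

```python
import numpy as np

# One evaluation of the adaptive weight derivative under each learning
# modification; the values are hypothetical.
def w_dot(w, e, phi_x, PB, gamma, sigma=0.0, e_mod=False):
    """Weight derivative with an optional damping (momentum) term."""
    damping = sigma * (np.linalg.norm(e) if e_mod else 1.0) * w
    return gamma * phi_x * e * PB - damping

e, phi_x, PB, w = 0.5, 1.0, 1.0, 2.0

plain = w_dot(w, e, phi_x, PB, gamma=10.0)              # no modification
sig   = w_dot(w, e, phi_x, PB, gamma=10.0, sigma=0.1)   # sigma modification
emod  = w_dot(w, e, phi_x, PB, gamma=10.0, sigma=0.1, e_mod=True)
# The damping term pulls the weights toward zero; e-modification scales it
# by the error norm, so it fades as the tracking error converges.
```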

### Disturbance and Uncertainty Model Features

The Model Reference Adaptive Control block maintains an internal model uad of the disturbance and model uncertainty in the controlled system.

`${u}_{ad}={w}^{T}\varphi \left(x\right)$`

Here, ϕ(x) is a vector of model features. w is an adaptive control weight vector that the controller updates in real time.

To define ϕ(x), you can use one of the following feature definitions.

• State vector of the controlled plant — This approach can under-represent the uncertainty and perform poorly. Use this option when the disturbance and model uncertainty are linear. Using the state vector can also be a useful starting point when you do not know the complexity of the disturbance and model uncertainty.

• Gaussian radial basis functions — Use this option when the disturbance and model uncertainty are nonlinear and the structure of the disturbance model is unknown.

• External source provided to the controller block — Use this option to define your own custom feature vector. You can use this option when you know the structure of the disturbance and uncertainty model. For example, you can use a custom feature vector to identify specific unknown plant parameters.
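As an illustration of the Gaussian radial basis function option, a feature vector ϕ(x) can be sketched as follows; the centers and bandwidth are hypothetical choices, not block parameters.

```python
import numpy as np

# Minimal sketch of a Gaussian RBF feature vector phi(x); the centers and
# bandwidth are illustrative, not block parameters.
def rbf_features(x, centers, bandwidth):
    """Return phi(x): one Gaussian bump per center, evaluated at state x."""
    x = np.asarray(x, dtype=float)
    d2 = np.sum((centers - x) ** 2, axis=1)       # squared distance to each center
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

# Five centers spread over a 1-D state range.
centers = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
phi = rbf_features([0.0], centers, bandwidth=1.0)
# The feature whose center is nearest the state dominates: phi peaks at x = 0.
```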