## Linear Matrix Inequalities

Linear Matrix Inequalities (LMIs) and LMI techniques have emerged as powerful design tools in areas ranging from control engineering to system identification and structural design. Three factors make LMI techniques appealing:

- A variety of design specifications and constraints can be expressed as LMIs.
- Once formulated in terms of LMIs, a problem can be solved *exactly* by efficient convex optimization algorithms (see LMI Solvers).
- While most problems with multiple constraints or objectives lack analytical solutions in terms of matrix equations, they often remain tractable in the LMI framework. This makes LMI-based design a valuable alternative to classical “analytical” methods.

See [9] for a good introduction to LMI concepts. Robust Control Toolbox™ software is designed as an easy and progressive gateway to the new and fast-growing field of LMIs:

- For users who occasionally need to solve LMI problems, the LMI Editor and the tutorial introduction to LMI concepts and LMI solvers provide for quick and easy problem solving.
- For more experienced LMI users, the LMI Lab offers a rich, flexible, and fully programmable environment for developing customized LMI-based tools.

### LMI Features

Robust Control Toolbox LMI functionality serves two purposes:

- Provide state-of-the-art tools for the LMI-based analysis and design of robust control systems
- Offer a flexible and user-friendly environment to specify and solve general LMI problems (the LMI Lab)

Examples of LMI-based analysis and design tools include

For users interested in developing their own applications, the LMI Lab provides a general-purpose and fully programmable environment to specify and solve virtually any LMI problem. Note that the scope of this facility is by no means restricted to control-oriented applications.

**Note**

Robust Control Toolbox software implements state-of-the-art interior-point LMI solvers. While these solvers are significantly faster than classical convex optimization algorithms, you should keep in mind that the complexity of LMI computations can grow quickly with the problem order (number of states). For example, the number of operations required to solve a Riccati equation is O(*n*^{3}), where *n* is the state dimension, while the cost of solving an equivalent “Riccati inequality” LMI is O(*n*^{6}).
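As a rough illustration of this growth (constant factors and exact operation counts are omitted, so the numbers are only indicative), comparing *n*^{3} to *n*^{6} for a few state dimensions shows how quickly the LMI route becomes more expensive:

```python
# Illustrative only: compare the ~n^3 cost of solving a Riccati equation
# with the ~n^6 cost of the equivalent "Riccati inequality" LMI.
# Constant factors are ignored; only the growth rates matter here.
for n in (10, 50, 100):
    riccati_ops = n ** 3
    lmi_ops = n ** 6
    print(f"n={n:3d}: Riccati ~ {riccati_ops:.1e}, LMI ~ {lmi_ops:.1e}, "
          f"ratio = {lmi_ops // riccati_ops}")
```

The ratio itself grows like *n*^{3}, which is why LMI formulations are most attractive for problems of moderate order.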

### LMIs and LMI Problems

A linear matrix inequality (LMI) is any constraint of the form

$$A\left(x\right):={A}_{0}+{x}_{1}{A}_{1}+\cdots +{x}_{N}{A}_{N}<0 \qquad (1)$$

where

- *x* = (*x*_{1}, . . . , *x*_{N}) is a vector of unknown scalars (the *decision* or *optimization* variables)
- *A*_{0}, . . . , *A*_{N} are given *symmetric* matrices
- < 0 stands for “negative definite,” i.e., the largest eigenvalue of *A*(*x*) is negative

Note that the constraints *A*(*x*) > 0 and *A*(*x*) < *B*(*x*) are special cases of Equation 1 since they can be rewritten as –*A*(*x*) < 0 and *A*(*x*) – *B*(*x*) < 0, respectively.

The LMI of Equation 1 is a convex constraint on *x* since *A*(*y*) < 0 and *A*(*z*) < 0 imply that $$A\left(\frac{y+z}{2}\right)<0$$. As a result,

- Its solution set, called the *feasible set*, is a convex subset of *R*^{N}
- Finding a solution *x* to Equation 1, if any, is a convex optimization problem.
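This convexity property can be checked numerically. The sketch below uses a one-variable LMI *A*(*x*) = *A*_{0} + *x A*_{1} with small hypothetical 2×2 matrices (the matrices and the points *y*, *z* are made up for illustration) and verifies that the midpoint of two feasible points is again feasible:

```python
import math

def eigmax_sym2(M):
    """Largest eigenvalue of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    (a, b), (_, c) = M
    return (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)

def A(x, A0, A1):
    """Evaluate the one-variable LMI map A(x) = A0 + x*A1 (N = 1 here)."""
    return [[A0[i][j] + x * A1[i][j] for j in range(2)] for i in range(2)]

# Hypothetical data: A(x) < 0 holds exactly for -2 < x < 2.
A0 = [[-2.0, 0.0], [0.0, -2.0]]
A1 = [[1.0, 0.0], [0.0, -1.0]]

y, z = 1.5, -1.0
assert eigmax_sym2(A(y, A0, A1)) < 0   # y is feasible
assert eigmax_sym2(A(z, A0, A1)) < 0   # z is feasible

mid = (y + z) / 2
print("midpoint", mid, "feasible?", eigmax_sym2(A(mid, A0, A1)) < 0)
```

This is only a spot check at one midpoint, of course; the convexity argument in the text guarantees the same for every convex combination of feasible points.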

Convexity has an important consequence: even though Equation 1 has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Note that a system of LMI constraints can be regarded as a single LMI since

$$\left\{\begin{array}{c}{A}_{1}\left(x\right)<0\\ \vdots \\ {A}_{K}\left(x\right)<0\end{array}\right.$$

is equivalent to

$$A\left(x\right):=\text{diag}\left({\text{A}}_{\text{1}}\left(x\right),\dots ,{\text{A}}_{\text{K}}\left(x\right)\right)<0$$

where diag(*A*_{1}(*x*), . . . , *A*_{K}(*x*)) denotes the block-diagonal matrix with *A*_{1}(*x*), . . . , *A*_{K}(*x*) on its diagonal. Hence multiple LMI constraints can be imposed on the vector of decision variables *x* without destroying convexity.
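The stacking argument rests on the fact that the spectrum of a block-diagonal matrix is the union of the blocks' spectra. A small sketch with two hypothetical 2×2 blocks (values chosen only for illustration) makes this concrete:

```python
import math

def eigs_sym2(M):
    """Both eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    (a, b), (_, c) = M
    mean = (a + c) / 2
    rad = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return (mean - rad, mean + rad)

# Hypothetical values of two LMI constraints evaluated at the same x.
A1x = [[-1.0, 0.5], [0.5, -2.0]]
A2x = [[-3.0, 0.0], [0.0, -0.5]]

# diag(A1(x), A2(x)) assembled as a single 4x4 matrix:
stacked = [row + [0.0, 0.0] for row in A1x] + \
          [[0.0, 0.0] + row for row in A2x]

# The stacked matrix is negative definite exactly when every block is,
# since its eigenvalues are the union of the blocks' eigenvalues.
single_lmi_holds = max(eigs_sym2(A1x)) < 0 and max(eigs_sym2(A2x)) < 0
print("diag(A1(x), A2(x)) < 0 ?", single_lmi_holds)
```

So a solver that handles one LMI automatically handles any finite system of them.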

In most control applications, LMIs do not naturally arise in the canonical form of Equation 1, but rather in the form

*L*(*X*_{1}, . . . , *X*_{n}) < *R*(*X*_{1}, . . . , *X*_{n})

where *L*(.) and *R*(.) are affine functions of some structured *matrix* variables *X*_{1}, . . . , *X*_{n}. A simple example is the Lyapunov inequality

$${A}^{T}X+XA<0 \qquad (2)$$

where the unknown *X* is a symmetric matrix. Defining
*x*_{1}, . . . ,
*x*_{N} as the independent scalar entries of
*X*, this LMI could be rewritten in the form of Equation 1. Yet it is more convenient and efficient to describe it in its
natural form Equation 2, which is the approach taken in the LMI Lab.
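For concreteness, here is a minimal numerical sketch (plain Python with made-up 2×2 matrices, not the toolbox's MATLAB-based LMI Lab interface) verifying that a given symmetric *X* satisfies the Lyapunov inequality of Equation 2 for a stable *A*:

```python
import math

def matmul2(P, Q):
    """Product of two 2x2 matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(P):
    return [[P[j][i] for j in range(2)] for i in range(2)]

def eigmax_sym2(M):
    """Largest eigenvalue of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    (a, b), (_, c) = M
    return (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)

A = [[-1.0, 2.0], [0.0, -3.0]]   # a stable state matrix (assumed values)
X = [[2.0, 0.0], [0.0, 1.0]]     # a candidate symmetric matrix variable

# Evaluate the affine map L(X) = A'X + XA of Equation 2.
AtX = matmul2(transpose2(A), X)
XA = matmul2(X, A)
L = [[AtX[i][j] + XA[i][j] for j in range(2)] for i in range(2)]

print("A'X + XA < 0 ?", eigmax_sym2(L) < 0)   # True: this X is feasible
```

Note how the natural unknown is the matrix *X* itself; an LMI solver working in the form of Equation 2 searches over such matrix variables directly rather than over their scalar entries rearranged into the canonical form of Equation 1.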