# Unconstrained Minimization Using `fminunc`

This example shows how to use `fminunc` to solve the nonlinear minimization problem

$$\underset{x}{\mathrm{min}}\;f\left(x\right)={e}^{{x}_{1}}\left(4{x}_{1}^{2}+2{x}_{2}^{2}+4{x}_{1}{x}_{2}+2{x}_{2}+1\right).$$

To solve this two-dimensional problem, write a function that returns $f\left(x\right)$. Then, invoke the unconstrained minimization routine `fminunc` starting from the initial point `x0 = [-1,1]`.

The helper function `objfun` at the end of this example calculates $f\left(x\right)$.

To find the minimum of $f\left(x\right)$, set the initial point and call `fminunc`.

```
x0 = [-1,1];
[x,fval,exitflag,output] = fminunc(@objfun,x0);
```

```
Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
```

View the results, including the first-order optimality measure in the `output` structure.

`disp(x)`

```
    0.5000   -1.0000
```

`disp(fval)`

```
   3.6609e-15
```

`disp(exitflag)`

```
     1
```

`disp(output.firstorderopt)`

```
   1.2284e-07
```

The `exitflag` output indicates whether the algorithm converged. `exitflag` = 1 means that `fminunc` found a local minimum.
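A hand check confirms the solver's result: the gradient of $f$ vanishes at the returned point $x^{*}=\left(0.5,-1\right)$. Writing $p\left(x\right)=4{x}_{1}^{2}+2{x}_{2}^{2}+4{x}_{1}{x}_{2}+2{x}_{2}+1$, the gradient is

$$\nabla f\left(x\right)={e}^{{x}_{1}}\begin{pmatrix}p\left(x\right)+8{x}_{1}+4{x}_{2}\\ 4{x}_{2}+4{x}_{1}+2\end{pmatrix}.$$

At $x^{*}=\left(0.5,-1\right)$, $p\left(x^{*}\right)=1+2-2-2+1=0$ and $8\left(0.5\right)+4\left(-1\right)=0$, so the first component vanishes; the second component is ${e}^{0.5}\left(-4+2+2\right)=0$. The objective value is $f\left(x^{*}\right)={e}^{0.5}\cdot 0=0$, consistent with the near-zero `fval` that `fminunc` reports.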

The `output` structure gives more details about the optimization. For `fminunc`, the structure includes:

• `output.iterations`, the number of iterations

• `output.funcCount`, the number of function evaluations

• `output.stepsize`, the final step size

• `output.firstorderopt`, a measure of first-order optimality (which, in this unconstrained case, is the infinity norm of the gradient at the solution)

• `output.algorithm`, the type of algorithm used

• `output.message`, the reason the algorithm stopped
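These fields can be inspected directly once the solver returns. The following sketch assumes the `output` structure from the call above; the exact field values vary with the MATLAB version and the algorithm used.

```
% Summarize the fminunc run from the output structure.
fprintf('Iterations:             %d\n', output.iterations);
fprintf('Function evaluations:   %d\n', output.funcCount);
fprintf('Algorithm:              %s\n', output.algorithm);
fprintf('First-order optimality: %g\n', output.firstorderopt);
disp(output.message)
```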

### Helper Function

This code creates the `objfun` helper function.

```
function f = objfun(x)
f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
end
```
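As an extension beyond this example, convergence can often be improved by supplying the analytic gradient instead of letting `fminunc` estimate it by finite differences. The helper below is a sketch, not part of the original example: `objfungrad` is a hypothetical name, and the gradient formulas follow by differentiating $f$ with the product rule.

```
function [f,g] = objfungrad(x)
% Objective and its analytic gradient (sketch; not in the original example).
expx1 = exp(x(1));
poly  = 4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1;
f = expx1 * poly;
if nargout > 1                              % gradient requested
    g = [expx1*(poly + 8*x(1) + 4*x(2));    % df/dx1
         expx1*(4*x(1) + 4*x(2) + 2)];      % df/dx2
end
end
```

To make `fminunc` use the supplied gradient, set `SpecifyObjectiveGradient` to `true` in the options, for example with the `trust-region` algorithm, which requires a user-supplied gradient:

```
options = optimoptions('fminunc', ...
    'Algorithm','trust-region','SpecifyObjectiveGradient',true);
[x,fval] = fminunc(@objfungrad,[-1,1],options);
```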