FMINLBFGS is a memory-efficient optimizer for problems with large numbers of unknowns and CPU-expensive gradients, such as image registration.
Supported:
 Quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS).
 Limited-memory BFGS (L-BFGS).
 Steepest gradient descent optimization.
Advantages:
 Quasi-Newton: there is no need to provide a Hessian; instead, the Hessian approximation is updated by analyzing successive gradient vectors.
 L-BFGS never explicitly forms or stores the Hessian matrix, which can be quite expensive when the number of dimensions becomes large.
 There is an input option to replace gradient calls during the line search with normal function calls, if the gradient is CPU-expensive.
This function uses the same interface and options as MATLAB functions such as fminunc and fminsearch, but optimset can be replaced by a simple struct when the Optimization Toolbox is not available.
Example:
options = optimset('GradObj', 'on' ,'Display', 'iter', 'HessUpdate', 'bfgs', 'GoalsExactAchieve',1);
[x,fval] = fminlbfgs(@myfun, [1 1 1 1 1], options);
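Since 'GradObj' is 'on' in the example above, myfun must return both the function value and its gradient. A minimal sketch of such an objective (this quadratic is illustrative, not the one shipped with the package):

```matlab
function [f, g] = myfun(x)
% Example objective for use with fminlbfgs and 'GradObj' set to 'on':
% f(x) = sum(x.^2), whose gradient is 2*x.
f = sum(x(:).^2);      % the objective must return a scalar
if nargout > 1
    g = 2 * x;         % gradient, same size as x
end
end
```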
Dirk-Jan Kroon (2020). FMINLBFGS: Fast Limited Memory Optimizer (https://www.mathworks.com/matlabcentral/fileexchange/23245-fminlbfgs-fast-limited-memory-optimizer), MATLAB Central File Exchange. Retrieved .
1.8.0.0  Fixed example and out-of-memory issues.

1.7.0.0  output.fval is now also set in case the initial position is already the minimum.

1.5.0.0  Added option to visualize the line search. Fewer iterations used in simple line search. Solved imaginary-value warning.

1.4.0.0  Small NaN error solved.

1.3.0.0  Help extended and all parameters are now available as options.

1.1.0.0  Fixed an "oscillation" bug in the simple line search which could occur with a non-smooth error function.
Excellent work, thank you very much.
I hope they add it as a built-in function in the next MATLAB version.
Hi Dirk-Jan,
After lots of life happened, I now *finally* picked up work again to get my OPTIMIZE() to officially support your optimizer. So, getting back to that collaboration on OPTIMIZE() we were discussing 4 years ago... :)
Here's a small list of issues I encountered:
 when optim.Display is empty ([]), it is NOT the same as 'off': it still results in at least the display of the final results. As people will probably create "optim" with optimset(), we should expect empties and interpret them as 'off'.
 The output structure contains two fields with slightly different names from those of FMINSEARCH:
  yours: output.funccount   FMINSEARCH: output.funcCount (capital C)
  yours: output.iteration   FMINSEARCH: output.iterations ('s' at the end)
 Your optim.GradConstr has a different meaning than in the Optimization Toolbox: in FMINCON, for example, this setting means that you provide gradients of the nonlinear constraint functions. As I need that setting, I propose a name change for yours: optim.AvoidGrad (or something similar).
That's it. As I said, it's really basic stuff. If you could update FMINLBFGS() with these little things, I'd be very grateful.
Best regards,
Rody Oldenhuis
Hi, as a student I am trying to use fminlbfgs to estimate some parameters, but I find the exit message is "line search cannot find an acceptable point along the current search". Does that mean the results are wrong? What should I do to improve the results? Thanks a lot; I hope for your reply.
Thanks for providing this useful piece of numerical optimization code. I implemented it for fitting tomographic data to a model. I found that the step sizes for computing the gradient (DiffMinChange) seemed quite crucial for convergence. Since my optimization variables have different domains (e.g. [0...0.5] or [0...2*pi]), I had to modify the code to account for different DiffMinChange/DiffMaxChange values per variable.
BTW, is there a fast MEX version of the (L-)BFGS optimizer?
Hi, as a beginner I have a problem in the case of two variables. Say the objective function is f = sin(x)+cos(y); the gradient is then g = [cos(x), -sin(y)]. How should I modify example.m? I have no idea how to assign the initial values: when I set x and y both equal to 1, the result suggests that the dimensions do not match. Can anybody help me modify myfun.m and example.m to give the right answer? Thank you so much.
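For reference, an objective with two unknowns must take a single vector argument and return a scalar value plus a gradient of the same size as the input. A minimal sketch for f(x,y) = sin(x) + cos(y), whose gradient is [cos(x), -sin(y)] (function and variable names here are illustrative, not part of the package):

```matlab
function [f, g] = myfun2d(p)
% p is a 1x2 vector: p(1) = x, p(2) = y
f = sin(p(1)) + cos(p(2));          % scalar objective value
if nargout > 1
    g = [cos(p(1)), -sin(p(2))];    % gradient, same size as p
end
end
```

It could then be called with a 1x2 initial vector, e.g. `[x,fval] = fminlbfgs(@myfun2d, [1 1], optimset('GradObj','on'));`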
@marco: the standard way to do this is with anonymous functions. E.g. if your function "myfun" accepts arguments (x0, addarg1, addarg2), then define a new function
newfun = @(x) myfun( x, addarg1, addarg2 );
Stephen
Hi. I would like to add in the optimization additional external arguments that are needed by the 'myfun', e.g.:
[RegistrationParams] = fminlbfgs( @myfun, x0 , options, addarg1, addarg2, ...)
Is it possible with this function? When I try, I get 'Too many input arguments' error.
Thanks
@Chris Barber,
Constraints on the variables are not a built-in feature.
But constraining your problem can be done by adding a penalty term to your cost function, like:
C = C + abs(log(max(x-a,eps)))*1e5
In this way x will never go lower than the constant 'a'.
You can also have a look at the constraining code of 'Rody Oldenhuis' (Comment above)
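The log-barrier idea above can be sketched as a wrapper around an existing cost function. A minimal sketch, in which the placeholder cost, the bound a, and the weight 1e5 are all illustrative assumptions:

```matlab
function C = penalizedCost(x, a)
% Original (unconstrained) cost -- illustrative placeholder.
C = sum((x - 3).^2);
% Penalty term in the style suggested above: it blows up as any
% component of x approaches the lower bound a (eps keeps the log
% finite), so the line search is driven away from x <= a.
C = C + sum(abs(log(max(x - a, eps)))) * 1e5;
end
```

The penalized function can then be minimized as usual, e.g. `fminlbfgs(@(x) penalizedCost(x, 0), x0, options)` for a lower bound of zero.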
Fantastic code! Just wondering, is there a way to put constraints on the variables?
Thanks Atiyah, I fixed the example in the newest version.
I found the mistake. The small example that comes with the source code is the cause. The objective function is supposed to return a scalar, but in the example @objfunc returns a vector. I changed it and it works fine now. Overall, thanks for the contribution.
Such software is very useful for time-consuming optimization problems. Unfortunately I am not able to use it. Running the script example.m produces a lot of errors. I tried to fix them one after another, but there are many: comparison operations between scalars and vectors (e.g. s < v), and also empty index vectors (e.g. x(ind)). I stopped fixing it because I don't know what the right thing to do is.
I still have the same problem of the function returning empty variable...
I encountered an occasion which potentially indicates a bug. When the initial parameter set gives the minimum value of the optimization function, the return values (at least x and fval) of fminlbfgs are empty, which could halt applications using this function. I'm not familiar with its internal workings and could not figure out which line needs to be modified, but from the observed input-output relationship, I guess zero iterations result in empty output.
If you have references for this optimization method, would you mind letting us know?
Rody Oldenhuis, thank you for your comment and compliments.
I have no experience with (in)equality-constrained optimization, nor am I an expert on optimization; I just needed a good optimizer for my image registration.
Your OPTIMIZE() wrapper must be very good, because John D'Errico says well done ;).
Please explain which options or other things have to be changed to let FMINLBFGS work with your function; then I will try to change those things in FMINLBFGS and link to your code in the description, because people like Sergio Rossi here need constraints.
In my image registration problems I always adjust my delta step for finite differences to be on the order of 1% of the current line-search step length. Those derivatives are more valid for larger step lengths than derivatives from infinitesimally small delta steps, especially in the case of noise or small oscillating functions. Maybe this is also the reason in your case.
I found a minor bug:
 when the maximum number of function evaluations has been reached, the output message is 'Algorithm was terminated by the output function.'
Just a general question: I find that when passing explicit derivatives instead of using the central-difference ones, FMINLBFGS()'s basin of attraction seems to shrink significantly. The end results are more accurate, and my derivatives are good (I checked them on paper, and double-checked them against central differences), which hints that they DO work, but convergence is much harder to attain. Is there something I'm missing?
Also, I'm currently adjusting my own OPTIMIZE() wrapper function (see my profile) to do (non)linear (in)equality-constrained optimization with FMINLBFGS(). To that end, I want to ask you to make a few changes. For example, my OPTIMIZE() must of course get the gradients of the constraint functions, which is normally set through the option field GradConstr. However, you use that option in a different way: to constrain CPU usage during calculations. Also, you use output.funccount instead of FMINSEARCH's output.funcCount (capital C), and many more minor things.
Anyway, if you're interested in collaboration on OPTIMIZE, let me know. Otherwise, I'll just change these little things myself and post it with the next update of OPTIMIZE() (giving you the credit of course).
But, aside from those minor things, a great function! Well commented and documented, intuitive variable names, and all in all very readable. Good work!
This code is excellent: optimizations with around 100 parameters which were impossible using fminsearch on my machine, and very slow even on a dedicated server, are now a matter of minutes on the server. Thank you, Dirk, for the brilliant work. If anybody wrote a wrapper function for this which implemented linear/nonlinear constraints, it would become a state-of-the-art optimizer.
Brilliant! Excellent Work!