# Set up an optimization problem using Bayesian optimization

8 views (last 30 days)
Tessa Kol on 6 Sep 2020
Commented: Matt J on 7 Sep 2020
Dear all,
Problem: I am trying to find the optimal set of variables using Bayesian optimization.
Before I go further into the details, let me explain the background of what I am trying to achieve here.
I first ran some physical experiments. These experiments give me certain results (i.e. objectives); for example, I measured a total mass of 27 kg. I measured other results as well.
Then, in a simulation I try to recreate the results of the physical experiments. In those simulations I define the range and interval of the variables I try to optimize. The variables I try to optimize are mu_s (ranging from 0.1 to 0.5 with an interval of 0.1) and mu_r (ranging from 0.1 to 0.5 with an interval of 0.1).
The simulations will give me predicted results for certain combinations of variables. For example:
mu_s = 0.1 and mu_r = 0.7 will give me a predicted total mass of 29.45 kg.
Then I want to use Bayesian optimization to find the set of variables that predicts the results of the physical experiment as closely as possible, using the data I have available from the simulations.
With the documentation from the links below I tried to set up a Bayesian optimization, but I am stuck on how to define the objective function.
This is how far I got:
```matlab
optimVars = [
    optimizableVariable('mu_s',[0.1 0.5])   % names must be valid MATLAB identifiers, so no '\mu_s'
    optimizableVariable('mu_r',[0.1 0.5])];
objFcn = makeObjFcn(   % <-- stuck here: how should the objective function be defined?
```
Something to point out:
It is a single-objective problem. The objective function is defined as the total relative discrepancy between the simulation outputs and the experimental measurements, computed as a weighted sum. All error components should contribute equally to the total simulation error, so the weights of the terms are set to be the same.
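The weighted-sum objective described above could be sketched as follows. This is only a sketch: `runSimulation` is a hypothetical stand-in for the actual simulation call, and it assumes variables named `mu_s` and `mu_r` (bayesopt passes the variables to the objective as a one-row table, hence `x.mu_s`):

```matlab
function err = simObjective(x, measured)
% Weighted sum of relative discrepancies between simulated and measured
% results. 'measured' is a vector of experimental measurements;
% runSimulation (hypothetical) must return the simulated counterparts
% in the same order.
    sim = runSimulation(x.mu_s, x.mu_r);          % hypothetical simulation call
    relErr = abs(sim - measured) ./ abs(measured);% relative error per measurement
    w = ones(size(relErr)) / numel(relErr);       % equal weights on each term
    err = sum(w .* relErr);                       % total relative discrepancy
end
```

A handle suitable for bayesopt would then be something like `objFcn = @(x) simObjective(x, measured);`.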
To search for new points to be evaluated next by the objective function, I want to use a combined acquisition function. First we look for a point that maximizes Expected Improvement (EI), then for a point that maximizes Probability of Improvement (PI). Thus two new points are determined per iteration, which can then be evaluated in parallel; I think this can be achieved with 'UseParallel'.
I looked at the Global Optimization Toolbox under the section on surrogate modelling, but I think that is not quite what I want.

Matt J on 6 Sep 2020
Edited: Matt J on 6 Sep 2020
The functions you have been looking at (e.g. bayesopt) don't appear to be general model fitting solvers. They look like they are designed very specifically for training machine learning systems.
I think you might be able to achieve your ultimate goals by using either lsqnonlin or lsqcurvefit (which does a lot more than just curve fitting), even if they do not implement precisely the search algorithm you have described.
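A minimal sketch of the lsqnonlin route, assuming a hypothetical function `runSimulation(mu_s, mu_r)` that returns the simulated measurements as a vector matching the experimental ones:

```matlab
% Fit (mu_s, mu_r) by nonlinear least squares within the stated bounds.
measured = 27;                                          % e.g. measured total mass [kg]
resFcn = @(mu) runSimulation(mu(1), mu(2)) - measured;  % residual vector (hypothetical sim call)
lb = [0.1 0.1];  ub = [0.5 0.5];                        % bounds on [mu_s, mu_r]
mu0 = [0.3 0.3];                                        % starting guess
muOpt = lsqnonlin(resFcn, mu0, lb, ub);
```

lsqnonlin minimizes the sum of squared residuals, which differs from the equally weighted sum of relative errors, but for measurements on similar scales it should behave comparably.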
One thing I would note: you will find that the MATLAB documentation only applies the term "multi-objective" to problems where you are solving not just for a single solution but for a Pareto front. That doesn't seem to be the case for you, but if it is, have a look here:
Matt J on 7 Sep 2020
The GitHub version seems to contain better usage instructions.
