curve fitting with fminsearch

Hello all,
I'm trying to learn MATLAB and am taking a course for that, and I have a homework problem that I can't solve. I have experimental data for the variables x and y (11 points each), and the question asks me to fit the data using 'fminsearch'.
Can anyone help me find the best curve and write proper code? Thank you
x=[3, 5, 7, 10, 13, 17, 20, 23, 25, 29, 31];
y=[1.1, 2.0, 3.7, 9.0, 22.2, 73.8, 181.5, 446.5, 813.6, 2701.3, 4922.1];

2 Comments

Ameer Hamza on 18 May 2020
Edited: Ameer Hamza on 18 May 2020
Can you show what you have already tried? Even if you don't have a code, can you write down your understanding about solving this problem?
Well, I've tried to write the code below, but I know it's wrong.
In the question, the x vs. y data are to be fitted to a curve (I think it's y = a*exp(b*x)) using fminsearch, to find the parameters. I did it using "cftool", but I couldn't do it with fminsearch. It would be great if you could tell me how to do that. Thanks
f=@(x) a*exp(b*x);   % a and b are not defined anywhere
x0=[2,1];
options = optimset('PlotFcns',@optimplotfval, 'Display','iter');
x = fminsearch(f, x0, options)


 Accepted Answer

Study this example
x = [3, 5, 7, 10, 13, 17, 20, 23, 25, 29, 31];
y = [1.1, 2.0, 3.7, 9.0, 22.2, 73.8, 181.5, 446.5, 813.6, 2701.3, 4922.1];
f = @(a,b,x) a*exp(b*x);
obj_fun = @(params) norm(f(params(1), params(2), x)-y);
sol = fminsearch(obj_fun, rand(1,2));
a_sol = sol(1);
b_sol = sol(2);
figure;
plot(x, y, '+', 'MarkerSize', 10, 'LineWidth', 2)
hold on
plot(x, f(a_sol, b_sol, x), '-')
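A small variant of the same idea, in case it helps: instead of `norm` you can minimize the sum of squared residuals directly, and replace the `rand(1,2)` start with a fixed guess so the run is reproducible (the guess `[1, 0.1]` below is just illustrative, not from the original answer).

```matlab
% Same fit, written as an explicit sum-of-squares objective.
x = [3, 5, 7, 10, 13, 17, 20, 23, 25, 29, 31];
y = [1.1, 2.0, 3.7, 9.0, 22.2, 73.8, 181.5, 446.5, 813.6, 2701.3, 4922.1];

f = @(a,b,x) a*exp(b*x);                               % model y = a*exp(b*x)
sse = @(p) sum((f(p(1), p(2), x) - y).^2);             % sum of squared residuals

sol = fminsearch(sse, [1, 0.1]);                       % [1, 0.1] is an assumed starting guess
fprintf('a = %.4f, b = %.4f\n', sol(1), sol(2));
```

Minimizing `norm(...)` and minimizing the sum of squares give the same minimizer; the squared form is just the more conventional least-squares objective.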

3 Comments

This is what appears to be a reasonable fit. At the same time, it is often a good idea to plot things on a log scale when you have data that varies over such a wide range. Why?
If this model is truly exponential, then it will appear linear when plotted using semilogy. Effectively, this transforms the problem so that:
log(y) = log(a) + b*x
And since we don't know what a is, log(a) is just as much an unknown constant.
P1 = polyfit(x,log(y),1)
P1 =
0.300154799247339 -0.802338147307636
Now, we recover a from the intercept by the simple transformation
a = exp(P1(2))
a =
0.448279594078463
b = P1(1)
b =
0.300154799247339
At the very least, these will be great starting estimates for a nonlinear search.
All of that applies for this simple model as chosen, IF that is a reasonable model. The nice thing is, polyfit is a good way to fit that data here.
semilogy(x,y,'o')
grid on
And, well, it looks about as close to linear as I could possibly imagine. But it is always right to confirm that with a plot.
The problem is, what if the curve at the bottom end tails off, or is not truly exponential? For example, if we were to tweak those bottom data points a bit, they would still have almost no impact on the fitted parameters, because the fit is dominated by the large values at the top end.
The idea is that the standard least squares fit (here done using fminsearch) implies a normally distributed, additive error structure. However, when you have data like this, a proportional error structure is much more likely. And when you log the model, this implicitly transforms a proportional (lognormal) error structure into a simple additive normal noise structure.
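If you still want to use fminsearch rather than polyfit, the same log transform can be applied to the objective itself. A minimal sketch (the starting guess `[0, 0.1]` is assumed, not from the thread):

```matlab
% Fit in the log domain: minimize residuals of log(y) instead of y,
% which treats the noise as proportional (lognormal) rather than additive.
x = [3, 5, 7, 10, 13, 17, 20, 23, 25, 29, 31];
y = [1.1, 2.0, 3.7, 9.0, 22.2, 73.8, 181.5, 446.5, 813.6, 2701.3, 4922.1];

% Parameterize as p = [log(a), b], so the model is log(y) = p(1) + p(2)*x.
obj_log = @(p) norm(p(1) + p(2)*x - log(y));
p = fminsearch(obj_log, [0, 0.1]);

a_fit = exp(p(1));   % recover a from log(a)
b_fit = p(2);
```

With this objective the small data points at the bottom end carry as much weight as the big ones at the top, which is the point John is making.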
Other virtues of using polyfit here are that you don't need starting values at all, and there is no iterative search to perform. Iterative searches with exponential models can sometimes be nasty things because of the exponentials: you need good starting values, or you may get divergence or Infs. polyfit has no need for them at all.
Simple rule: WHENEVER the data varies by an order of magnitude or more, ALWAYS check whether logging the model agrees with the conclusion you would draw otherwise.
In any case, just about any way you fit this data, the line is so straight when you log y that you will get a decent answer. You cannot go wrong, at least not today. However, tomorrow is another day.
That's a very useful observation. Many times, differences in scale between the data points also make it difficult for optimizers to find an optimal solution. Such modifications can make things much easier for the optimizer.
Thank you very much Ameer and John. Your answers helped me a lot, both to solve the question and to understand the concept. Thank you for your time and effort.


More Answers (1)

Ang Feng
Ang Feng on 18 May 2020
Hi Burak,
This link is certainly helpful
Matlab has very good documentation.
Good luck

1 Comment

Hi Ang,
Thanks for your answer. I will check this documentation

