
How to get the rightmost root in the interval?

There is a function (y=@(x)...) of the form shown in the figure, which has exactly two roots in a given interval.
How to get the largest root in the interval?
One way is to first find the minimum of the function using fminbnd, then split the interval into two parts at that minimum and search the right part with fzero.
How can this be done without calling fminbnd (to save computation time)?
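For reference, the fminbnd + fzero approach described above might look like this sketch. The function handle and the interval endpoints here are placeholders for the actual problem (the original function is not shown in the thread):

```matlab
% Sketch of the fminbnd + fzero approach from the question.
% y, a, b are placeholders for the actual function and interval.
y = @(x) x.^2 - 2;             % example function with two roots in [a, b]
a = -5;  b = 5;

xmin = fminbnd(y, a, b);       % locate the minimum between the two roots
xright = fzero(y, [xmin, b]);  % bracket the rightmost root and solve
```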

Accepted Answer

KSSV
KSSV on 18 Jun 2020
How about finding the intersection points of the given curve and the line y = 0? You can use this to get intersection points.
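A minimal way to do this without any File Exchange tool is to sample the function, find the sign changes, and linearly interpolate each crossing. This is only a sketch of the idea (the example function is a stand-in, not the asker's actual function):

```matlab
% Sketch: piecewise-linear intersection of y(x) with the line y = 0.
y = @(x) x.^2 - 2;             % example function with two roots
xi = linspace(-5, 5, 1000);    % sample the interval
yi = y(xi);

k = find(yi(1:end-1) .* yi(2:end) < 0);   % indices where the sign flips
% Linearly interpolate each zero crossing between xi(k) and xi(k+1).
xroots = xi(k) - yi(k) .* (xi(k+1) - xi(k)) ./ (yi(k+1) - yi(k));
xright = max(xroots);          % rightmost (largest) root
```

The accuracy is limited by the sampling density, so a final fzero call on the estimate can refine it if needed.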
  1 Comment
bbb_bbb
bbb_bbb on 18 Jun 2020
So far, the best solution for me. It requires 10 times less computation time than the combination of fminbnd + fzero, and 5 times less than a single call to fzero, with comparable accuracy.


More Answers (2)

madhan ravi
madhan ravi on 18 Jun 2020
Edited: madhan ravi on 18 Jun 2020
max(fzero(y, 2))
  3 Comments
bbb_bbb
bbb_bbb on 18 Jun 2020
There are a number of restrictions, for example, you can't use interp1 as a function for vpasolve...



John D'Errico
John D'Errico on 18 Jun 2020
Edited: John D'Errico on 18 Jun 2020
You claim you want to find the right-most root. If you have an interval, the simple solution might be to just start fzero off using ONE start point, instead of a bracket. That is, start fzero off at the right hand end point of your bracket. For example, this should produce the positive sqrt(2) as a solution:
fzero(@(x) x.^2 - 2,5)
ans =
1.41421356237309
In fact, much of the time this will suffice. I'd even bet it will work far more often than it will give you the undesired bad root.
But I can probably construct a case where it would fail, and give you the wrong root. That is the problem with understanding the way code is written. When you understand a piece of software, you can often then construct cases that would cause an algorithm to do exactly what you don't want to happen. And since I've even written variations of the fzero code in the past, I can claim to understand the theory behind fzero reasonably well.
So if the above solution is inadequate for you, and you are worried that sometimes it may fail, you might have some options. But everything you will do will cost you some time. How could it not? The more confidence you want in your solution, the more it will cost you.
Assuming that you absolutely positively know the function has the indicated shape, then you could use fminbnd as you have suggested. Once you have the minimum, it gives you brackets to find either root. This will cost roughly the same time penalty as an extra call to fzero. But it will always give you the correct answer as long as fun has the shape you indicate.
Another idea is to call fzero once. Get a solution, then deflate the root you just found. For example, suppose I got the wrong solution to the sqrt problem?
fun = @(x) x.^2 - 2;
x1 = fzero(fun,-5)
x1 =
-1.41421356237309
Yes. I know, that was silly of me. I used a starting value that put me in the wrong place. Now, deflate that root. Just kill it off. Zap it away, as if it never existed. :)
fun2 = @(x) fun(x)./(x - x1);
x2 = fzero(fun2,-5)
x2 =
1.41421356237309
So the second call to fzero found the second root. Then you will choose the larger of the two roots. Yes, I know that this costs me TWO calls to fzero. But if you want to ensure that you ALWAYS find the larger root, and if you are worried that just starting off at the upper end point of your interval will sometimes fail, then the cost is more computing time.
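Putting that together, a self-contained sketch of the deflation approach (using the same example function, and assuming both roots are simple) is:

```matlab
% Sketch: fzero once, deflate that root, fzero again, keep the larger.
fun = @(x) x.^2 - 2;
x1 = fzero(fun, -5);                % may land on the "wrong" (left) root
fun2 = @(x) fun(x) ./ (x - x1);     % deflate x1 out of the problem
x2 = fzero(fun2, -5);               % finds the remaining root
xright = max(x1, x2);               % always the larger of the two roots
```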
Another idea is the one suggested by KSSV. Use a tool that will locate both roots based on a simple piecewise linear approximation; there are many such tools on the File Exchange for finding intersection points. Remember of course that this is ONLY a low order approximation. It will not be terribly accurate. For example, since I happen to keep a copy of Doug Schwarz's intersections utility from the File Exchange:
xi = linspace(-5,5,20);
x12 = intersections(xi,fun(xi),[-5,5],[0 0])
x12 =
-1.40087719298246
1.40087719298246
Of course, they are not terribly accurate. If I wanted more accuracy, I could have used more points when creating the vector xi. But now I can use fzero to refine the rightmost solution
xfinal = fzero(fun,max(x12))
xfinal =
1.41421356237309
The initial cost can be more significant than you think.
xi = linspace(-5,5,1000);
x12 = intersections(xi,fun(xi),[-5,5],[0 0])
x12 =
-1.41420746988123
1.41420746988123
So those somewhat more accurate solutions cost me 1000 function evaluations! And in the end, I got only about 5 decimal digits of accuracy.
sqrt(2)
ans =
1.4142135623731
Still, there are often more alternatives, that is, if you look diligently enough. Do you have an analytical form for this function? Can you compute the derivative?
For example in the sqrt(2) case, we can differentiate x^2 - 2, as 2*x. I could even use the symbolic toolbox to do that. It is pretty easy here though, and I can be really lazy some days.
fundiff = @(x) 2*x;
Now, suppose we compute the root, starting fzero out at the upper endpoint.
x1 = fzero(fun,5)
x1 =
1.41421356237309
Of course, lets pretend we really don't know if this is the root we wanted. Is it the right one? Worry, worry, worry. The answer is quick and easy to test.
fundiff(x1)
ans =
2.82842712474619
If fundiff(x1) is POSITIVE, then as long as we know the function has the indicated shape, you are done. If fundiff(x1) had come out negative, then we would know the wrong root had been found, and only then would we need to look again. I might use the deflation trick. Or I could use fminbnd to find the min, and then use it to create a bracket where we know the desired root will be contained.
The point is, by computing the derivative of the function at the root we found, you can now be confident we found the root that will make you happy. The cost will only be a single derivative evaluation. If the derivative has the wrong sign (i.e., it is negative) only then do you need to spend more time to ensure you get the desired root. I suppose if you did not have an analytical form for the derivative, a finite difference would suffice. Thus, using a central finite difference, we get:
fundiff2 = @(x,dx) (fun(x + dx) - fun(x - dx))/(2*dx);
fundiff2(x1,sqrt(eps(x1)))
ans =
2.82842712104321
Don't use too small a value for dx there, as you can get into subtractive cancellation issues. sqrt(eps(x1)) is probably a good choice for dx here.
So you can solve the problem, at not too large of an incremental cost.
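The whole derivative-check strategy can be sketched as follows, assuming the function really has the one-minimum, two-root shape described in the question (the example function and interval are placeholders):

```matlab
% Sketch: cheap derivative check, with a fallback only when needed.
fun = @(x) x.^2 - 2;
a = -5;  b = 5;

x1 = fzero(fun, b);                 % start at the right endpoint
dx = sqrt(eps(x1));                 % step size; avoids cancellation issues
slope = (fun(x1 + dx) - fun(x1 - dx)) / (2*dx);  % central difference

if slope < 0
    % Wrong (left) root: find the minimum, then bracket the right root.
    xmin = fminbnd(fun, x1, b);
    x1 = fzero(fun, [xmin, b]);
end
```

In the usual case the check passes and the only extra cost is two function evaluations for the finite difference.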
  2 Comments
bbb_bbb
bbb_bbb on 18 Jun 2020
Thank you for the detailed answer. For now, the idea suggested by KSSV is the best for me.
All the others lead to too much computation time, or seem unreliable.
The accuracy is more than satisfactory for me. It would also be interesting to take a look at
the intersections utility that you mentioned.
John D'Errico
John D'Errico on 18 Jun 2020
Searching for intersections using any such tool is also potentially unreliable. I can create circumstances where it too will fail. The accuracy you will get out of it is poor, limited to the number of points you generate.
But that is entirely your choice and I must respect it, even if I don't really agree with it, nor would I really recommend that choice. :)
It is not an actively bad solution and I even discuss it in my answer. When goodness is defined in terms of absolute speed, it can even be the preferred solution. What really matters is to know your requirements and the limitations of the choices you make.

