
Hello,

I have a single (VERY LONG) line of code written in both C++ and MATLAB that evaluates a mathematical expression. The expression is just under 31,000 characters long, so it's pretty darn complicated.

When I evaluate this expression in MATLAB vs. C++ I get very different numbers:

MATLAB: 31856235311612.7

C++: 31745938896594.422

However, as you'll probably notice, they're not different enough to suggest I made a mistake in the transfer. I basically just copied and pasted the expression from C++, so no issues there.

I have two similar expressions of lower order that I also ported; one on the order of 10^9 (differs by about 1%), and one of order 10^7 which does not suffer the same magnitude of error:

MATLAB=3087973.34551207

C++=3087973.3455120716

The question is: which one is "correct", and where are these differences coming from? I'm happy to provide code, but the expression itself is 31,000 characters long!

FYI, in C++ all variables are stored as doubles.

Jan
on 29 Oct 2011

A much simpler example:

MATLAB uses FDLIBM for the calculation of ACOS. I added this library to a C-MEX file as well and compared the results of:

a = [ 0.82924747834055368, ...
      0.55849526207902467, ...
     -0.020776474703722032];
b = [-0.82889260818746668, ...
     -0.55895647212964961, ...
      0.022465670623305872];
acos(dot(a, b))

I found differences between the values calculated by C and Matlab up to 32768 EPS, depending on the C-compiler and the input values.

Let's look at the sensitivities:

b_var = b + [0, eps, 0];

acos(dot(a, b)) - acos(dot(a, b_var))

>> 1.2434e-013

This means that tiny differences in the input are amplified by a factor of about 560. That may seem small, but together with cancellation effects such differences can explode: [EDITED]

r = 1 / (3.13980602793225 - acos(dot(a, b)));

>> r = 2.2518e+014

r_var = 1 / (3.13980602793225 - acos(dot(a, b_var)));

>> r_var = 7.7648e+012

This sensitivity analysis reveals that the algorithm is so unstable that it cannot be trusted to even a single significant digit.

Conclusion: Calculating values from a 31'000-character term without a sensitivity analysis is not meaningful. With such a high sensitivity, the results are dominated by a random component.

Jan
on 29 Oct 2011

@M S: See the EDITED part. "Small" is a very relative term in numerics: subtract two "small" numbers from each other and take the reciprocal - suddenly the result looks really large.

Varying the inputs lets you estimate, to first order, whether the solution is chaotic or nearly chaotic. If a solution is very sensitive, I hesitate to call it a "solution" at all. Imagine I simulate a pencil standing on its tip. Would you trust my work if I claimed that my simulation shows, without doubt, that the pencil will fall to the left?!

No. A numerical result without a sensitivity analysis is not scientifically meaningful. If the programming language has such a big influence, I doubt that the solution is stable. I do not think you can analyse a 31'000-character equation symbolically; a numerical sensitivity test is obligatory.


