Why does MATLAB even have single precision variables?

If you use one "single precision" variable and then use it anywhere in later calculations, every product and sum involving that variable will also be single precision. It almost seems that once you start using one variable in single precision, all of your downstream outputs can end up as single variables as well.
It sort of reminds me of the following imaginary scenario: a person gathering data has two sizes of paper on which to write numbers observed--one is ten times the size of the other. It takes more effort (time and energy) to haul around the larger sheet of paper, so you would only use it if you had very precise measurements to record.
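The type propagation described above is easy to demonstrate at the command line (a minimal sketch; the variable names are arbitrary):
a = single(0.5);   % one single-precision value
b = 2.0;           % a double, MATLAB's default
c = a*b + 1;       % mixing single with double yields single
class(c)           % 'single'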
For any who doubt that calculations with single precision variables are faster, try the following set of commands:
m1 = rand(999,999);    % doubles by default
m2 = rand(size(m1));
tic
m3 = m1 .* m2;         % double .* double
toc
m1 = single(m1);       % convert both operands to single
m2 = single(m2);
tic
m4 = m1 .* m2;         % single .* single
toc
My results indicate a decrease in calculation time of at least one order of magnitude, if not two or more.
  6 Comments
Guillaume
Guillaume on 3 Sep 2014
In all likelihood, the performance of single vs double will depend on the processor, the size of the data being operated on, how the function doing the processing was compiled (is it implemented using MMX, SSE, SSE2, SSE3 or plain FP), etc. There's only one way to know for sure: profile your specific code. Do not rely on profiling done by somebody else on a completely different piece of code.
Seth Wagenman
Seth Wagenman on 3 Sep 2014
Edited: Walter Roberson on 10 Jul 2018
I saved the function as the attached .m file. If a median processing time with double precision quantities were, say, five times longer than median single processing time, the following script would produce 400, would it not?
dt = zeros(20,2); for i = 1:20, dt(i,:) = singdub; end, md = median(dt); (md(1)/md(2)-1)*100
Therefore, what is the point of calculating 400? To me, the more interesting quantity is the ratio of 5 to 1...
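(The attached singdub.m is not reproduced in the thread; a hypothetical version consistent with the script above, returning the double and single timings as a 1-by-2 row, might look like this:)
function t = singdub
% Hypothetical sketch: time the same element-wise product in
% double and then in single precision; t = [t_double t_single].
d1 = rand(999);  d2 = rand(999);
s1 = single(d1); s2 = single(d2);
h = tic; pd = d1 .* d2; t(1) = toc(h); %#ok<NASGU>
h = tic; ps = s1 .* s2; t(2) = toc(h); %#ok<NASGU>
end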


Accepted Answer

Adam
Adam on 3 Sep 2014
I'm not quite sure what you are asking really.
I am starting to use 'single' more and more often nowadays because we work with very large datasets which, while stored in file as integers, have to be converted to floating point in Matlab to do maths. Since a double takes twice the memory of a single, using doubles is quite often an excessive use of memory.
The extra precision that a double gives is rarely relevant, especially given that at the end of whatever calculation I am doing the final result will be converted back to integer anyway, with the requisite loss of accuracy.
Propagation of the data type from the start through to the results is not exactly uncommon, though. In C++ you generally have to cast to a different data type (or accept warnings in your code) if you want the result in a different type from the original variable.
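The memory saving is easy to check with whos (a quick sketch):
d = rand(1000);    % 1000x1000 doubles: 8 bytes per element
s = single(d);     % same values in single: 4 bytes per element
whos d s           % reports 8,000,000 vs 4,000,000 bytes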
  2 Comments
Seth Wagenman
Seth Wagenman on 3 Sep 2014
I was asking the question because some people hold the opinion that "single" precision is useless. I have seen encouragement to essentially ignore it as standard practice.
José-Luis
José-Luis on 3 Sep 2014
That encouragement probably comes from people who have been bitten by bad typecasting. If everyone used double, those problems wouldn't exist. On the other hand, conforming to a norm takes away flexibility.
It depends on your problem. If precision is an issue, then maybe even double is not enough. If you have a data intensive application then using single may be the way to go.


More Answers (1)

David Young
David Young on 3 Sep 2014
Edited: David Young on 3 Sep 2014
For most purposes, double precision is good because calculations are less likely to be affected by rounding errors (though it's still important to keep them in mind). However, reducing memory use is sometimes more important (for example if handling large images) and then single precision can be a better choice. This is a well-understood trade-off.
The more interesting question with respect to MATLAB is about type conversion rules. In C, for example, if a binary operator has a float (i.e. single) operand and a double operand, the float is converted to double before the operation is done. In MATLAB, on the other hand, the double is converted to a single. More generally, C operates using "promotion" of integer to floating and lower to higher precision, while MATLAB carries out "demotion" of scalar doubles to matching types.
I suspect that this is because MATLAB has a very strong sense of double as the normal, default, standard type. The assumption is therefore that if you are using some other type you have a good reason to do so, and so you want that type to propagate throughout the computation, rather than finding you have an accidental promotion to double. This seems sensible.
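MATLAB's demotion rules are easy to verify directly (a quick sketch; the values are arbitrary):
class(single(1) * pi)    % 'single' -- the double pi is demoted to single
class(int32(5) + 2.7)    % 'int32'  -- the scalar double is demoted to int32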
  4 Comments
Seth Wagenman
Seth Wagenman on 3 Sep 2014
I understand this. It also makes sense because if you multiply two quantities with differing numbers of significant digits, the answer has the lesser number of significant digits--and MATLAB sort of mimics that with its double/single multiplication (and addition) rules, correct?
dpb
dpb on 3 Sep 2014
Edited: dpb on 4 Sep 2014
Yes, for a loose definition of "sorta'"... :) It maintains the same working precision in the result--but just as with a double, depending on the computations involved, the actual precision of the result may not be anything approaching the number of digits in the result. See the Matlab writeup and some of Cleve's other writings starting at
You'll likely find the couple of sections on accuracy and avoiding problems of particular interest and apropos to the question and the discussion.
I'll hypothesize that the reason Matlab began with and uses double as the default goes back to the basic idea of being a "MATrix LABoratory" application, intended first for pedagogy and only later for actual applications. For the former, not having to worry needlessly about precision means one can, for the most part, write code in Matlab that mimics the actual matrix operations and expect to get reasonable results.
It also suits applications where many people solve systems of stiff DEs or otherwise ill-conditioned systems of many equations, where precision is everything or the result is likely nothing. We still regularly get postings from those whose problems are beyond even what double can do in straightforward application--just a week or so ago a query came up from an individual whose problem matrix has a condition number on the order of 1e17 even after regularization. There are hard problems out there.
As for propagating the type through without requiring explicit casting, that's a (relatively) recent feature... as recently as R12 you got an error:
>> x=single(pi);
>> y=2*x;
??? Error using ==> *
Function '*' not defined for variables of class 'single'.
>>
I don't know which release actually introduced single as a full-fledged numeric class; I have no versions between R12 and R2012b, a span of roughly 10 years or so...

