Accuracy of doubles frustrating

Michael Lang

You'd think that the advantage of extra decimal places in a data type would be
better accuracy. Well, think again. Look at this method and the test
results:

===========================================================
// not just for rounding decimals, but for getting nearest
// whole number values, i.e. Nearest(443, 5) = 445
// (SM below appears to be an alias for System.Math)
public static double Nearest(double source, double nearest)
{
    //method 1 - uses modulus operator
    // (remainder from division)
    double rem = source % nearest;
    double dest = source - rem;

    //method 2 - get remainder without modulus
    double times = source / nearest;
    int iTimes = (int)SM.Round(times, 0);
    //double times2 = (double)iTimes;
    double dest2 = iTimes * nearest;

    //see if ToString and back helps?
    string sDest2 = dest2.ToString();
    double dest2fs = double.Parse(sDest2);

    return dest2;
}
===========================================================

test with 134.111 and .1 (works perfectly both ways):

source 134.111 double
nearest 0.1 double
rem 0.010999999999982552 double
dest 134.1 double
times 1341.11 double
iTimes 1341 int
dest2 134.1 double
sDest2 "134.1" string
dest2fs 134.1 double

test with 55.35 and .1 (round errors both ways):

source 55.35 double
nearest 0.1 double
rem 0.049999999999998351 double
dest 55.300000000000004 double
times 553.5 double
iTimes 554 int
dest2 55.400000000000006 double
sDest2 "55.4" string
dest2fs 55.4 double

test with 35.5 and .1 (works 2nd way, flaw 1st way):

source 35.5 double
nearest 0.1 double
rem 0.099999999999998035 double
dest 35.4 double
times 355.0 double
iTimes 355 int
dest2 35.5 double
sDest2 "35.5" string
dest2fs 35.5 double

I could understand losing decimal accuracy with double values nearing
double.MaxValue, since that leaves less memory for the decimal places, but
for numbers less than 100 with only 2 decimal places, why is there a
problem?

What I find very strange is that dest2 in test 2 has 15 decimal places,
but the ToString() method chops off everything after the first zero. Or
it could just be rounding to 14 places before the ToString? Either
way would give the same result in this case.
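
One way to see what ToString() is doing is the round-trip format "R", which
prints enough digits to recover the exact stored value; plain
double.ToString() rounds to 15 significant digits by default. A minimal
sketch (same dest2 value as in test 2):

===========================================================
double dest2 = 554 * 0.1; // dest2 from test 2

Console.WriteLine(dest2);               // 55.4 (default 15-digit formatting)
Console.WriteLine(dest2.ToString("R")); // 55.400000000000006
===========================================================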

Or is there just something wrong with my processor (mobile Pentium 4,
3.02 GHz, from Compaq)? Can anyone verify my results on their
machine? If you get the same results, can anyone explain?

Michael Lang
 
Michael Lang

Michael Lang said:
<original Nearest() method snipped - see the first post>

By the way, one logical change (no change in previous results):

=========================================
//method 1
double rem = source % nearest;
double dest;
if (rem >= nearest/2)
{
    dest = source - rem + nearest;
}
else
{
    dest = source - rem;
}
=========================================

Another devastating test result:

Call Nearest(134.1, .2). It should return 134.2, but I get:

source 134.1 double
nearest 0.2 double
rem 0.099999999999986877 double
dest 134.0 double
times 670.49999999999989 double
iTimes 670 int
dest2 134.0 double
sDest2 "134" string
dest2fs 134.0 double

134.1 / .2 should equal 670.5, which would round up, but since the computed
value is just under 670.5, it rounds down!
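
The near-miss is visible if the quotient is printed with the round-trip "R"
format (a quick sketch; the digits shown match the debugger values above):

===========================================================
double times = 134.1 / 0.2;
Console.WriteLine(times.ToString("R")); // 670.49999999999989 - not 670.5
===========================================================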

Michael Lang
 
Fergus Cooney

Hi Michael,

We had a query in the languages.vb group about why 5.1 * 100 of all things wasn't accurate. I'll use the reply that I
gave then.

<quote>
5.1 * 100 is so obviously 510 to us because we think in decimal.

Convert 5.1 to a binary floating point number, however, and the digits
after the 'binary point' go on somewhat longer than the few bytes available.
Thus the very fact of <storing> 5.1 is going to introduce an error - let alone
multiplying it.

You'd think that it wouldn't need many bits to store 5.1 given that 51
only needs 6!

51 = 32 + 16 + 2 + 1 = 110011

But 5.1 is more complicated than this as it is built up from fractional
powers of two:

5.1 = 4 + 1 + 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 + 1/8192 + ...

= 101.00011001100110 ...

We've already got to 14 digits after the 'binary point' and it's still not
accurate, being only 5.0999755859375.

Some numbers just don't like being converted to sums of powers of two.
</quote>
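
You can watch this happen by dumping the raw bits of the stored double - the
repeating 0011 pattern is right there in the mantissa. A minimal C# sketch
(the bit string and output in the comments are from my own working):

===========================================================
double d = 5.1;
long bits = BitConverter.DoubleToInt64Bits(d);
Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));
// sign (1 bit) | exponent (11 bits) | mantissa (52 bits):
// 0 10000000001 0100011001100110011001100110011001100110011001100110

Console.WriteLine((d * 100).ToString("R")); // 509.99999999999994
===========================================================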

Now if you consider that you have two numbers in your function, either or both may be suffering from that inherent loss
of accuracy - right from the word go. The result can be a compounded error.

Another factor that you may not be aware of is that rounding in .NET is done to the nearest even number. Once upon a
time 1.5, 2.5, 3.5, 4.5, 5.5, 6.5 would round to 2, 3, 4, 5, 6, 7. Not anymore - they round like so: 2, 2, 4, 4, 6, 6!! The
idea is that by rounding half-way cases up and down equally often, the errors cancel and overall accuracy is greater.
That may be so in heavy-duty calculations, but in day-to-day programming it is unexpected behaviour - i.e. a bug waiting to be discovered.
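
It is easy to demonstrate - Math.Round on doubles uses this round-half-to-even
rule by default. A quick sketch:

===========================================================
for (double d = 1.5; d <= 6.5; d += 1.0)
{
    Console.WriteLine("{0} -> {1}", d, Math.Round(d));
}
// 1.5 -> 2, 2.5 -> 2, 3.5 -> 4, 4.5 -> 4, 5.5 -> 6, 6.5 -> 6
===========================================================

(In later versions of .NET there is an overload,
Math.Round(d, MidpointRounding.AwayFromZero), that gives the schoolbook
behaviour.)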

Regards,
Fergus
 
William Ryan

Ferg:

I missed that post, but I just walked through it. Very enlightening!
Fergus Cooney said:
<quoted explanation snipped - see Fergus's post above>
 
Michael Lang

Fergus Cooney said:
<quoted explanation snipped - see Fergus's post above>


So, there is no built-in cure for this problem? I still don't understand
why accuracy isn't important in mathematical computing. It sort of defeats
the whole purpose of computing. What do all the highly scientific
applications do?

Even calc.exe that comes with Windows will get the correct answer. So
how does it do it?

If there is a problem storing numbers as is, then why aren't they stored
differently? Why not store 5.1 as "51" and "1", the "1" being the
number of decimal places to offset to get the real number?

If you were back in elementary school, you would be taught to multiply as
follows:

  10.01
x     5
-------
   5005  then shift the decimal point by the total number of decimal
         digits in both source numbers (2 in this case), giving 50.05.

Also, a more detailed example:

   55.2
x   4.5
-------
  552 x  5 =  2760  with 2-digit shift (27.60)
+ 552 x 40 = 22080  with 2-digit shift (220.80)
  (a zero is added to the end of the 4 for each "place" after it)
------------------
             24840  with 2-digit shift (248.40)

or

55.2 x 4.5 = 552 x 45 = 24840 with 2-digit shift = 248.4

This way computers and humans would both "think" of numbers in the same
way, and we would always get the same EXACT result. Or am I missing
something?

Couldn't Microsoft create a new numeric data type that used the
mathematical concepts above, with as high a maximum value as a double
in 64 bits (or as high as a float in 32 bits)?

Michael Lang
 
Jon Skeet [C# MVP]

Michael Lang said:
I could understand losing decimal accuracy with double values nearing
double.MaxValue, since that leaves less memory for the decimal places, but
for numbers less than 100 with only 2 decimal places, why is there a
problem?

It still can't represent those numbers exactly - just as, no matter how
many decimal places you allow, you can't represent a third exactly.

See http://www.pobox.com/~skeet/csharp/floatingpoint.html and
http://www.pobox.com/~skeet/csharp/decimal.html
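
To see that with the 128-bit decimal type - exact for short decimal
fractions, but still helpless against a third (a quick sketch):

===========================================================
Console.WriteLine(1m / 3m);       // 0.3333333333333333333333333333
Console.WriteLine(1m / 3m * 3m);  // 0.9999999999999999999999999999
===========================================================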
 
Jon Skeet [C# MVP]

Michael Lang said:
So, there is no built-in cure for this problem? I still don't understand
why accuracy isn't important in mathematical computing. It sort of defeats
the whole purpose of computing. What do all the highly scientific
applications do?

If they care about getting exactly correct decimal values, they use a
decimal type, probably their own.
Even calc.exe that comes with Windows will get the correct answer. So
how does it do it?

It doesn't use binary floating point, presumably.
If there is a problem storing numbers as is, then why aren't they stored
differently?

Because storing numbers as binary floating point is efficient, both in
space and speed.
Why not store 5.1 as "51" and "1", the "1" being the
number of decimal places to offset to get the real number?

That's basically what the decimal type does. However, it's both slower
and bigger than double. For most scientific applications, the decimal
representation doesn't matter - as soon as you do things like dividing
by three you'll get the same problem anyway. They care about how close
the actual answer is to the theoretical answer, and double is
reasonably good for that, as well as being fast.
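
As a concrete illustration - decimal stores a 96-bit integer plus a
power-of-ten scale, in 128 bits - here is the Nearest method from the start
of the thread reworked over decimal. This is a sketch of mine, not the
original poster's code; decimal.Floor(x + 0.5m) stands in for round-half-up
on non-negative input:

===========================================================
public static decimal Nearest(decimal source, decimal nearest)
{
    // how many multiples of 'nearest' fit, rounded half-up
    decimal times = decimal.Floor(source / nearest + 0.5m);
    return times * nearest;
}

// Nearest(55.35m, 0.1m) -> 55.4   (not 55.400000000000006)
// Nearest(134.1m, 0.2m) -> 134.2  (not 134.0)
===========================================================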
 
Peter van der Goes

Michael Lang said:
<snip>
Or is there just something wrong with my processor (mobile Pentium 4,
3.02 GHz, from Compaq)? Can anyone verify my results on their
machine? If you get the same results, can anyone explain?

Definitely not your CPU :)
To put this in perspective, the phenomenon occurs in C, C++, C#, Java and
other languages, not just in .NET. As others have posted, it's the result of
how real numbers are stored in memory.
 
Alan Pretre

Michael Lang said:
So, there is no built-in cure for this problem? I still don't understand
why accuracy isn't important in mathematical computing. It sort of defeats
the whole purpose of computing.

If you are going to work with floating point values on a computer, there is a
value called epsilon that you should become acquainted with.

See, for example:
http://www.ma.utexas.edu/documentation/lapack/node73.html
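
In C# that usually comes down to comparing with a tolerance instead of ==.
A minimal sketch (the name NearlyEqual and the tolerance value are mine, not
a framework API):

===========================================================
// true if a and b differ by less than the chosen tolerance
public static bool NearlyEqual(double a, double b, double epsilon)
{
    return Math.Abs(a - b) < epsilon;
}

// NearlyEqual(0.56 + 0.39, 0.95, 1e-9) -> true,
// even though (0.56 + 0.39) == 0.95 is false
===========================================================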

If long chains of floating point calculations need to be performed, one
approach is to accumulate the roundoff differences and add them back in
occasionally to make the results more accurate.
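
That "carry the roundoff along" idea is the basis of compensated (Kahan)
summation; a sketch:

===========================================================
// Kahan summation: track the low-order bits lost at each
// addition and feed them back into the next one
public static double KahanSum(double[] values)
{
    double sum = 0.0;
    double c = 0.0;           // running compensation
    foreach (double v in values)
    {
        double y = v - c;     // apply the stored correction
        double t = sum + y;   // low-order bits of y are lost here...
        c = (t - sum) - y;    // ...and recovered algebraically
        sum = t;
    }
    return sum;
}
===========================================================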
What do all the highly scientific applications do?

It is customary to perform calculations at higher precision than is needed,
because then roundoff doesn't factor in. If very high precision is required
then extended precision techniques need to be employed.

My CompSci MS Thesis involved extended precision division algorithms.
http://webpals.sdln.net/cgi-bin/pal...___/au Pretre, Alan./SAGE 0/MAXDI 2/di 0002
If there is a problem storing numbers as is, then why aren't they stored
differently? Why not store 5.1 as "51" and "1", the "1" being the
number of decimal places to offset to get the real number?

The decimal data type uses fixed point, rather than floating point, and is
meant to address some of the issues.
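
It is easy to see the difference on the numbers from this thread (the m
suffix makes a decimal literal; a quick sketch):

===========================================================
Console.WriteLine(55.2m * 4.5m);   // 248.40, exactly
Console.WriteLine(5.1m * 100m);    // 510.0, exactly
Console.WriteLine(55.35m % 0.1m);  // 0.05, no stray digits
===========================================================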

-- Alan
 
R Warford

So why does 0.56 + 0.39 = 0.95 when computed using Visual C++ doubles,
but 0.950000000000000007 when using C#?

-Roger
 
Jon Skeet [C# MVP]

R Warford said:
So why does 0.56 + 0.39 = 0.95 when computed using Visual C++ doubles,
but 0.950000000000000007 when using C#?

I think you'll find it doesn't actually get to 0.95 using Visual C++
either - it's just being displayed like that.

(It's not exactly 0.950000000000000007 in C# either, it's actually
0.95000000000000006661338147750939242541790008544921875.)
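
The formatting difference is easy to see from C#. Printing the full exact
expansion above takes a custom converter, but the round-trip "R" format gets
you most of the way (a quick sketch):

===========================================================
double sum = 0.56 + 0.39;
Console.WriteLine(sum);               // 0.95 (default 15-digit formatting)
Console.WriteLine(sum.ToString("R")); // 0.95000000000000007
Console.WriteLine(sum == 0.95);       // False
===========================================================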
 
R Warford

Of course, I should have looked deeper. Thanks!
I find the VC++ display more comforting, though! :)
 
