Float/Double arithmetic precision error

Madan

Hi all,
I had a problem with float/double arithmetic, but only with the + and -
operations, which give imprecise results. I would like to know how these
arithmetic operations are handled internally by C#, or whether they are
hardware (processor) dependent. Basic addition/subtraction operations give
errors, for example:
3.6 - 2.4 = 1.19999999999 or 1.20000000003
These are the erroneous values I'm getting. I'm using C# .NET v1.1. Please
tell me how these operations are handled internally.
Thanks in advance,
Madan.
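The effect is easy to reproduce. A minimal C# sketch (default Console.WriteLine formatting can hide the error, so the round-trip "R" format is used to show the stored value):

```csharp
using System;

class FloatDemo
{
    static void Main()
    {
        double d = 3.6 - 2.4;
        // Neither 3.6 nor 2.4 is exactly representable in binary, so the
        // difference of the stored values is not exactly 1.2.
        Console.WriteLine(d.ToString("R"));   // 1.2000000000000002

        float f = 3.6f - 2.4f;
        // float has fewer bits of precision, so the error is larger.
        Console.WriteLine(f.ToString("R"));
    }
}
```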
 
Jon Skeet [C# MVP]

Madan said:
I had problem regarding float/double arithmetic only with + and -
operations, which gives inaccurate precisions. I would like to know how the
arithmetic operations are internally handled by C# or they are hardware
(processor) dependent. Basic addition operation errors, for ex:
3.6 - 2.4 = 1.19999999999 or 1.20000000003
There are the erroneous values I'm getting. I'm using C#.Net v1.1 Please
reply me how these operations are handled internally.

See http://www.pobox.com/~skeet/csharp/floatingpoint.html
 
Randy A. Ynchausti

Madan,

It may be semantics, but to me the representation of floating-point
numbers on a "binary" machine is the problem, not necessarily the
precision of the numbers, unless you want to specify exactly how many
decimal places are significant for every floating-point number and for any
calculation that involves floating-point numbers.

The problems with floating-point representation are not specific to a
particular language. They are specific to representing floating-point
numbers on binary computers. Essentially, you are representing a continuous
quantity in a discrete environment. The effective approach is to break up the
range of values a floating-point number can take into tiny intervals
(determined by the precision of the machine the code runs on).
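For instance, 0.1 falls between two of those representable intervals, so the nearest endpoint is stored instead; a small sketch showing the consequence:

```csharp
using System;

class BinaryRepresentation
{
    static void Main()
    {
        // 0.1 has no exact binary representation; "G17" reveals the
        // nearest double that actually gets stored.
        Console.WriteLine(0.1.ToString("G17"));   // 0.10000000000000001

        // Summing ten copies of that approximation drifts away from 1.0.
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1;
        Console.WriteLine(sum == 1.0);            // False
    }
}
```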

Before declaring these values erroneous, you should specify what precision
you are looking for. They are correct to about 10 decimal places. If you want
higher precision, use a double. If you want exact decimal precision, use the
System.Decimal type and specify the number of decimal places.
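In C#, System.Decimal is exposed as the decimal keyword (with the m literal suffix); a sketch of the same subtraction done exactly:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // decimal stores base-10 digits, so 3.6 and 2.4 are held exactly
        // and the subtraction has no representation error.
        decimal result = 3.6m - 2.4m;
        Console.WriteLine(result);          // 1.2
        Console.WriteLine(result == 1.2m);  // True
    }
}
```

The trade-off is a smaller range and slower arithmetic than double, which is why decimal is typically reserved for money and other base-10 quantities.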

It is bad programming form to compare floating-point numbers and results
using the traditional "==" operator. You should calculate the machine
precision and then check to see if the difference of two floating-point
numbers is smaller than some small value based on the machine precision.
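A sketch of that comparison (the helper name NearlyEqual and the tolerance value are illustrative choices, not a framework API):

```csharp
using System;

class Compare
{
    // Illustrative helper: treat a and b as equal when their difference is
    // below a tolerance scaled to their magnitude. The tolerance here is
    // arbitrary; choose one appropriate to your data and machine precision.
    static bool NearlyEqual(double a, double b, double relativeTolerance)
    {
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return Math.Abs(a - b) <= relativeTolerance * Math.Max(scale, 1.0);
    }

    static void Main()
    {
        double d = 3.6 - 2.4;
        Console.WriteLine(d == 1.2);                   // False: exact compare fails
        Console.WriteLine(NearlyEqual(d, 1.2, 1e-9));  // True
    }
}
```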

Hope that helps.

Regards,

Randy
 
