Rounding error casting from float to int

6tc1

Hi all, I just discovered a rounding error that occurs in C#. I'm sure
this is an old issue, but it was new to me and cost me a fair amount of
time to track down.

Basically put the following code into your C# app:
float testFloat2 = (int) (4.2f * (float)100);
Console.Out.WriteLine("1: "+testFloat2);

and the result will be 419

If I use Convert.ToInt32 I can get around this problem - but it seems
that both should do the same thing, shouldn't they?

Novice
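
For reference, the Convert.ToInt32 workaround mentioned above would look something like this (a minimal sketch; the variable name is just for illustration):

int testInt = Convert.ToInt32(4.2f * (float)100);
Console.Out.WriteLine("2: " + testInt);   // prints 420, because Convert.ToInt32 rounds to nearest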
 

Bob Milton

Actually, it's not a C# problem at all. Floating point numbers are
rarely exact. So 4.2 is actually something like 4.199999999. Thus the result
you see.
Bob
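
To see what Bob means, you can widen the float to a double and print extra digits (a rough sketch; the exact trailing digits can vary slightly between runtimes):

Console.WriteLine(((double)4.2f).ToString("G17"));          // about 4.1999998092651367
Console.WriteLine(((double)(4.2f * 100f)).ToString("G17")); // about 419.99996948242188, so (int) gives 419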
 

KBuser

http://www.math.grin.edu/~stone/courses/fundamentals/IEEE-reals.html
 

6tc1

No offense to the posters (Bob and KBuser), but you haven't really
answered my question. Perhaps I could have phrased it a little better.
I was more interested in why I get a different result from the two
different methods specific to .NET. I'm sure I would get the same
result if I wrote the equivalent VB.NET - but I happen to program in C#.

Perhaps I wasn't being clear enough in my original post: if I use
Convert.ToInt32 then I get 420, not 419. So obviously the Convert
method is doing something to make the result more accurate.

Anyway, if anyone knows why I get a different result please feel free
to post it.

Thanks,
Novice
 

vj

Really not sure why you are casting to (int) and then storing the
result in a float variable. Can I know why?

VJ
 

6tc1

No reason - you get the same behavior regardless of whether the
variable being assigned to is of type float or int.

Novice
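
As a quick illustration of that point (the variable names here are made up for the example): the (int) conversion happens first, and the resulting whole number then converts back to float unchanged.

int   asInt   = (int)(4.2f * (float)100);   // 419
float asFloat = (int)(4.2f * (float)100);   // also 419 - the int result is simply widened back to float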
 

Doug Forster

If I use Convert.ToInt32 I can get around this problem - but it seems
that both should do the same thing, shouldn't they?

Nope. (int) truncates, whereas Convert.ToInt32 rounds.

Cheers
Doug Forster
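
A small sketch of the difference Doug describes, using a literal (3.9f, chosen arbitrarily) that happens to be stored slightly above the intended value:

float f = 3.9f;                            // stored as roughly 3.9000001

Console.WriteLine((int)f);                 // 3  - the cast truncates (rounds toward zero)
Console.WriteLine(Convert.ToInt32(f));     // 4  - Convert rounds to the nearest integer

Console.WriteLine((int)(-3.9f));           // -3 - toward zero
Console.WriteLine(Convert.ToInt32(-3.9f)); // -4 - nearest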
 

Jon Skeet [C# MVP]

No offense to the posters (Bob and KBuser), but you haven't really
answered my question.

Well, they answered why you got the answer 419. You're multiplying
4.19999980926513671875 by 100. There's no rounding *error* - there's
just a different method of rounding for casts than for Convert.ToInt32.

Perhaps I wasn't being clear enough in my original post: if I use
Convert.ToInt32 then I get 420, not 419. So obviously the Convert
method is doing something to make the result more accurate.

Well, it's choosing a different method of rounding.

From the docs for Convert.ToInt32:

<quote>
value rounded to the nearest 32-bit signed integer. If value is halfway
between two whole numbers, the even number is returned; that is, 4.5 is
converted to 4, and 5.5 is converted to 6.
</quote>

From the C# 1.1 language spec (ECMA numbering), section 13.2.1:

<quote>
For a conversion from float or double to an integral type, the
processing depends on the overflow-checking context (§14.5.12) in which
the conversion takes place:

....

In an unchecked context, the conversion always succeeds, and proceeds
as follows:

If the value of the source operand is within the range of the
destination type, then it is rounded towards zero to the nearest
integral value of the destination type, and this integral value is the
result of the conversion.
</quote>

So, one is rounding to nearest, the other is rounding towards zero.
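
Putting the two quoted rules side by side (a minimal sketch; the 4.5 and 5.5 literals here are doubles):

Console.WriteLine(Convert.ToInt32(4.5));   // 4  - halfway case, rounds to the even number
Console.WriteLine(Convert.ToInt32(5.5));   // 6  - halfway case, rounds to the even number
Console.WriteLine((int)4.5);               // 4  - rounds toward zero
Console.WriteLine((int)5.5);               // 5  - rounds toward zero
Console.WriteLine((int)(-4.5));            // -4 - toward zero, not down to -5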
 

6tc1

Doug summarizes it rather well. The one truncates and the other
rounds. In other words, the int cast (int) rounds toward zero, simply
ignoring the least significant bits, so with a number stored slightly
high, something like this:
4.20000000000000000009
it would still give 420 after multiplying by 100, and Convert.ToInt32
would get the same result, because rounding the above number still
results in 420.

So because of the inaccuracies of the representation of numbers - it
seems like the Convert method would yield the expected results in all
cases. Or can someone think of some way in which the Convert method
would yield the unexpected and the int cast would yield the expected?

Thanks,
Novice
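
As a concrete illustration of the point above (4.3f is just a handy literal that happens to be stored slightly high, while 4.2f is stored slightly low):

Console.WriteLine((int)(4.3f * 100f));             // 430 - stored as about 430.00003, truncation still lands on 430
Console.WriteLine(Convert.ToInt32(4.3f * 100f));   // 430 - both methods agree here

Console.WriteLine((int)(4.2f * 100f));             // 419 - stored as about 419.99997, truncation drops to 419
Console.WriteLine(Convert.ToInt32(4.2f * 100f));   // 420 - only Convert gives the expected answer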
 

Jon Skeet [C# MVP]

So because of the inaccuracies of the representation of numbers - it
seems like the Convert method would yield the expected results in all
cases. Or can someone think of some way in which the Convert method
would yield the unexpected and the int cast would yield the expected?

Yes - if you had a series of calculations where the result stored in
the variable became gradually further and further away from the
"correct" result, it could end up being (say) 11.6 instead of 10. At
that point, casting would round down to 11, and Convert.ToInt32 would
round up to 12. However, for any given actual float/double value, the
result given by Convert.ToInt32 will be at least as close to the actual
value as the one given by casting. However, casting is likely to be
faster.
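
A rough sketch of the kind of drift Jon describes: repeatedly adding a value that has no exact float representation lets the error build up (the exact figures depend on the hardware and runtime, so they are not hard-coded below):

float sum = 0f;
for (int i = 0; i < 10000; i++)
{
    sum += 0.1f;                             // 0.1 cannot be stored exactly as a float
}
Console.WriteLine(sum);                      // noticeably different from the ideal 1000
Console.WriteLine((int)sum);                 // truncates the drifted value
Console.WriteLine(Convert.ToInt32(sum));     // rounds the drifted value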
 

6tc1

Jon said:
Yes - if you had a series of calculations where the result stored in
the variable became gradually further and further away from the
"correct" result, it could end up being (say) 11.6 instead of 10. At
that point, casting would round down to 11, and Convert.ToInt32 would
round up to 12. However, for any given actual float/double value, the
result given by Convert.ToInt32 will be at least as close to the actual
value as the one given by casting. However, casting is likely to be
faster.

I guess I'm not thinking in terms of proximity to the "actual" value
so much as proximity to the "right" value (not "right" according to the
bit representation, but "right" according to the numbers and
calculations I'm doing in base 10). For your series of calculations
example, we could contrive a case where a user is working with a float
number that they think is, say,
1.5
but which is internally represented as (I realize 1.5 isn't actually
stored as 1.5111111 in binary, but I was too lazy to find a rounding-up
example that produces your series of calculations error)
1.5111111

Then you could conduct a series of add operations with 1.5 (actually
1.5111111) and have it get to the point where the number is
9.0666666

in which case the user would get 9.1 instead of the 9.0 they expect.

Last question: when I use a float, it only shows me 7 decimal places,
but from your example it looks as if, behind the scenes, a float
actually stores more than 7 places. Is that correct?

Thanks,
Novice
 

Jon Skeet [C# MVP]

(e-mail address removed) wrote:

Last question: when I use a float, it only shows me 7 decimal places,
but from your example it looks as if, behind the scenes, a float
actually stores more than 7 places. Is that correct?

Well, it stores an exact number, which may only be exactly
representable using more than 7 decimal places - even if there aren't
any other floats which would be the same for the first 7 or 8 decimal
places.

See http://www.pobox.com/~skeet/csharp/floatingpoint.html for more on
this, including some code to show you the exact value of a float or
double.

Jon
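
The page above includes code for this; as a rough sketch of the same idea (not the code from that page), you can pull the IEEE 754 bits apart and print the exact decimal expansion yourself. ExactValue is a made-up helper name, and System.Numerics.BigInteger (so .NET 4 or later) is assumed:

using System;
using System.Numerics;

class ExactFloatDemo
{
    static void Main()
    {
        Console.WriteLine(ExactValue(4.2f));   // 4.19999980926513671875
        Console.WriteLine(ExactValue(1.5f));   // 1.5
    }

    static string ExactValue(float f)
    {
        // Reinterpret the float's 32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
        int sign = (bits >> 31) & 1;
        int exponent = (bits >> 23) & 0xFF;
        long mantissa = bits & 0x7FFFFF;

        if (exponent == 0)
            exponent = 1;              // subnormal: no implicit leading bit
        else
            mantissa |= 0x800000;      // normal number: implicit leading 1

        // value = mantissa * 2^(exponent - 150), where 150 = exponent bias (127) + 23
        int binExp = exponent - 150;
        BigInteger digits = mantissa;

        if (binExp >= 0)
            return (sign == 1 ? "-" : "") + (digits << binExp).ToString();

        // mantissa / 2^k == mantissa * 5^k / 10^k: multiply by 5^k, then place
        // the decimal point k digits from the right.
        int k = -binExp;
        string text = (digits * BigInteger.Pow(5, k)).ToString().PadLeft(k + 1, '0');
        text = text.Insert(text.Length - k, ".").TrimEnd('0').TrimEnd('.');
        return (sign == 1 ? "-" : "") + text;
    }
}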
 
