simple casting problem (with small code example)

buzzweetman

Can someone explain this:

void Test()
{
    short x = 10;
    float y = 9.9f;

    float a = (float)x;
    a *= y;
    short b = (short)a;                  // I get 99. Good!

    short c = (short)(((float)x) * y);   // I get 98. Not so good.
}

In both cases my logic is to cast the short to float, do floating
multiplication, and cast the result to short.

Why do I get different results?
Buzz
 
buzzweetman said:
In both cases my logic is to cast the short to float, do floating
multiplication, and cast the result to short.

Why do I get different results?

I assume you're running on .NET Framework 1.x. Under 2.0 I get 98 in
both cases, which actually is correct even if it's not what you
expect.

The root cause is that not all numbers can be stored exactly when
using floating point representation. Read more here
http://www.yoda.arachsys.com/csharp/floatingpoint.html

In this case, 9.9 is stored as something closer to 9.8999996...

Casting the result back to short discards the fractional part without
any rounding (the conversion truncates toward zero), so the result
becomes 98.
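A quick standalone sketch that makes the stored value and the truncation
visible (the exact digits printed can vary a little between runtime
versions):

using System;

class FloatDemo
{
    static void Main()
    {
        float y = 9.9f;

        // Widening to double exposes the value the float actually stores.
        Console.WriteLine((double)y);                    // 9.899999618530273

        // A float-to-short cast truncates toward zero; it never rounds.
        Console.WriteLine((short)98.9999f);              // 98
        Console.WriteLine((short)Math.Round(98.9999f));  // 99
    }
}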


Mattias
 
Mattias said:
I assume you're running on .NET Framework 1.x. Under 2.0 I get 98 in
both cases, which actually is correct even if it's not what you
expect.

Actually I am using 2.0.50727, and I have 3.0 installed. But my
understanding is 3.0 is just additional libraries... same runtime as
2.0.

I simply created a Windows application project in VS 2005, and I'm
looking at the values in the debugger by hovering over them.

Should I be using the System.Convert routines? Some of what I am
doing involves large data cubes and I am concerned about performance.

Buzz
 
buzzweetman said:
Actually I am using 2.0.50727, and I have 3.0 installed. But my
understanding is 3.0 is just additional libraries... same runtime as
2.0.

I simply created a Windows application project in VS 2005, and I'm
looking at the values in the debugger by hovering over them.

Right, I see the same behavior now even on 2.0. Seems to be one of
those cases where release/debug (or rather, optimized vs. unoptimized)
code generation gives different results.

buzzweetman said:
Should I be using the System.Convert routines? Some of what I am
doing involves large data cubes and I am concerned about performance.

I guess that depends on what you want to accomplish. If you want to
round the result to the nearest integer you can try using Math.Round.
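For example (a minimal sketch using the values from the original post):

using System;

class RoundDemo
{
    static void Main()
    {
        short x = 10;
        float y = 9.9f;

        // Round to the nearest integer before narrowing, instead of truncating.
        short c = (short)Math.Round((float)x * y);
        Console.WriteLine(c);   // 99, whether the product is 98.99999... or exactly 99
    }
}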


Mattias
 
I don't think that the Round method is appropriate here. While
performance is a concern, it seems that the OP really has a need for
accuracy when using floating point numbers. To that end, the OP should be
using the decimal type, and not a float:

short x = 10;
decimal y = 9.9M;

decimal a = x;
a *= y;
short b = (short)a;  // I get 99. Good!

short c = (short)(x * y);

This will populate c with 99, as expected.
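A standalone sketch confirming that the arithmetic stays exact for this
value in decimal:

using System;

class DecimalDemo
{
    static void Main()
    {
        decimal y = 9.9M;                    // 9.9 has an exact decimal representation
        Console.WriteLine(10 * y);           // 99.0
        Console.WriteLine((short)(10 * y));  // 99
    }
}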
 
buzzweetman said:
Can someone explain this:

void Test()
{
    short x = 10;
    float y = 9.9f;

    float a = (float)x;
    a *= y;
    short b = (short)a;                  // I get 99. Good!

    short c = (short)(((float)x) * y);   // I get 98. Not so good.
}

In both cases my logic is to cast the short to float, do floating
multiplication, and cast the result to short.

Why do I get different results?
Buzz

When you store the product into a, the CLR has to narrow it from the
FPU's higher-precision internal representation down to a real 32-bit
float. The nearest float to the exact product 10 * 9.8999996... (about
98.9999962) happens to be exactly 99.0, so (short)a gives 99. In the
second version you never give the CLR a chance to narrow: the product
stays in the higher-precision register as roughly 98.9999962, and
converting that straight to short truncates it to 98.

We can examine the IL and see what's going on:


.locals init (
    [0] int16 x,
    [1] float32 y,
    [2] float32 a,
    [3] int16 b,
    [4] int16 c)

nop
ldc.i4.s 10
stloc.0

[short x = 10]

ldc.r4 9.9
stloc.1

[float y = 9.9]

ldloc.0
conv.r4 <-- here's the implicit cast to float (x is exactly 10, so
nothing is lost)
stloc.2

[float a = (float)x]

ldloc.2
ldloc.1
mul
stloc.2 <-- the store narrows the product back to float32, rounding
98.9999962... to exactly 99.0

[a = a * y] or [a = 10 * 9.8999996... = 99 after rounding to float]

ldloc.2
conv.i2
stloc.3

[b = (short)a]

ldloc.0
conv.r4
ldloc.1 <-- no store of the intermediate; the next instruction
multiplies the result of the conv.r4 with y at the FPU's internal
precision
mul
conv.i2 <-- truncates toward zero
stloc.s c

[c = (short)((float)x * y)] or [c = (short)(10 * 9.8999996...) =
(short)98.9999962... = 98]
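One consequence worth noting (a sketch, reusing x and y from the code
above): the C# spec guarantees that an explicit cast to float forces any
excess precision to be dropped, so the single-expression form can be
made to behave like the two-statement form:

// The inner (float) cast rounds the product to float precision
// (exactly 99.0f here) before the narrowing conversion to short.
short c2 = (short)(float)((float)x * y);   // 99 even on the x87 JIT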


Some nitty-gritty if you're interested:
http://blogs.msdn.com/davidnotario/archive/2005/08/08/449092.aspx

You may also want to see http://msdn2.microsoft.com/en-us/library/364x0z75(VS.80).aspx
 
Nicholas Paldino said:
I don't think that the Round method is appropriate here. While
performance is a concern, it seems that the OP really has a need for
accuracy when using floating point numbers. To that end, the OP should be
using the decimal type, and not a float:

... so long as he bears in mind that "accurate" here just means "able
to store decimal numbers to 28 places accurately". It's not like
float/double are inherently inaccurate and decimal is inherently
accurate - they just use different bases.

Trying to represent 1/3 in decimal will lead to the same kind of
inaccuracy as using float or double to represent 0.1.
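A quick sketch of that point:

using System;

class ThirdDemo
{
    static void Main()
    {
        decimal third = 1m / 3m;
        Console.WriteLine(third);       // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999, not 1
    }
}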

I know you're aware of this - I'm just questioning your use of the
word "accuracy". Certainly people often infer that more is going on
than is actually the case.
 
I would disagree slightly here.

Your two points are valid, but in this case, the inability of a
float/double type to faithfully represent a value that has an exact
decimal representation, such as "0.99" (unlike the decimal
representation of 1/3, which is 0.333... with the three repeating;
can't do overbars), comes across to me as pretty inaccurate. =)
 
Nicholas Paldino said:
I would disagree slightly here.

Your two points are valid, but in this case, the inability of a
float/double type to faithfully represent a value that has an exact
decimal representation, such as "0.99" (unlike the decimal
representation of 1/3, which is 0.333... with the three repeating;
can't do overbars), comes across to me as pretty inaccurate. =)

1/3 is only recurring in base 10. It's not in base 3 - in base 3, the
representation is just 0.1. Decimal's failure to represent 0.1 base 3
is exactly the same as float/double's failure to represent 0.1 base 10.
The OP happens to want to represent 0.99 (value) so for that
*particular* value decimal is exact and float/double aren't, but that
doesn't make it true of the types in general. Would the decimal type
*as a whole* become inaccurate if the OP had wanted to represent 1/3?

There's nothing particularly magical about base 10, it's just what
humans happen to usually use to write down numbers. Maths itself
doesn't care.

Saying that decimal is accurate and float/double are inaccurate
suggests there's a big difference in what they do - there isn't.
They're all floating point types (admittedly with significant
differences in choice of available mantissa/exponent sizes); it's just
that decimal uses a floating *decimal* point and float/double use a
floating *binary* point.

Saying decimal is accurate and float/double sometimes inaccurate for
values which are exactly representable in a decimal form is correct
(although I'd use the word exact rather than accurate) - saying decimal
is accurate as a blanket statement is dodgier, IMO.
 