(decimal) 1.1 versus 1.1M


Paul Sullivan

decimal d;

d = 1.1M;

OR

d = (decimal)1.1;

Discussion

That 'M' suffix looks like something left over from C. It is not
self-evident what it is. I heard one lecturer say it means money.
WRONG.

d = (decimal)1.1 is self-documenting.

Is there some standard that indicates which is the best way to code?

Paul S.
 
1.1M says that the 1.1 is a decimal value, similar to 1.1f (a float value).
Thus no casting is required to store it in a decimal variable.

Just 1.1, I believe, is a double value, which has to be cast to decimal.
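
For what it's worth, here is a minimal sketch of the suffixes in play (the
variable names are just for illustration):

class SuffixSketch
{
    static void Main()
    {
        double a = 1.1;             // no suffix: the literal is a double
        float b = 1.1f;             // 'f' (or 'F') suffix: float literal
        decimal c = 1.1m;           // 'm' (or 'M') suffix: decimal literal, no cast needed
        decimal d = (decimal)1.1;   // double literal explicitly cast to decimal
        // decimal e = 1.1;         // won't compile: no implicit double-to-decimal conversion
    }
}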
 
I haven't checked, but I would hope that the optimizer would produce the same
code for both. If it doesn't, then the expression
d = 1.1M;
will produce more efficient code, since the constant will be stored as a
decimal.

Using
d = (decimal) 1.1;
will cause the constant to be stored as a double and converted to decimal
when it is used.

Like I said, I haven't checked whether the optimizer takes care of this or not.
I hope so. If it does, then there is no effective difference.

--
--- Nick Malik [Microsoft]
MCSD, CFPS, Certified Scrummaster
http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not
representative of my employer.
I do not answer questions on behalf of my employer. I'm just a
programmer helping programmers.
 
I would think that the code would be different, depending upon whether
1.1 can be exactly represented in a double. Remember that

double d = 1.1;

does not guarantee that d will equal exactly 1.1. It will equal the
closest approximation available in the floating point representation
for the value 1.1. I would think that this would then mean that

decimal e = (decimal)1.1;

would mean that e should be set to the best approximation of the double
value, which is the best approximation of 1.1 in double format. In
other words

decimal e = (decimal)1.1;
if (e == 1.1M) ...

does not, in my opinion, guarantee that the target of the "if"
statement will execute.

Of course, I'm open to being shown wrong. :)
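
It's easy enough to test for any particular value, of course; a quick sketch:

using System;

class CastVersusSuffix
{
    static void Main()
    {
        decimal e = (decimal)1.1;       // double literal cast to decimal
        Console.WriteLine(e == 1.1M);   // prints True if the cast literal and the decimal literal agree for 1.1
    }
}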
 
Okay, I have understood (or perhaps have been laboring under a
misconception) that decimal was, at the least, far superior to double at
representing values without the sort of precision errors you describe, if
not in fact perfect. I assumed decimal was just a BCD implementation, sort
of like a bigint with an implied decimal point. It takes up more memory and
calculations are slower, but you can count on it being correct.

Can someone definitively point me to info that says I'm right or wrong about
this? I don't remember where I picked it up to be honest.
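
Here's the sort of behaviour I'm thinking of (a small sketch, not proof of
anything about the internal representation):

using System;

class ExactnessCheck
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so the double sum drifts...
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False
        // ...whereas the decimal arithmetic comes out exact for these values.
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True
    }
}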

--Bob
 
Nick Malik said:
I haven't checked but I would hope that the optimizer would produce the same
code for both. If it doesn't, then the expression
d = 1.1M;
will produce more efficient code, since the constant will be stored as a
decimal.

Using
d = (decimal) 1.1;
will cause the constant to be stored as a double and converted to decimal
when it is used.

I'd have thought that too, but using ildasm shows that the C# compiler
(rather than the JITter) converts it to a decimal at compile-time.

Compile the following code:

class Test
{
    static void Main()
    {
        decimal d1 = (decimal)1.1;
        decimal d2 = 1.1m;
    }
}

and then run ildasm on it. Both decimals are loaded using the following
code:

IL_0000: ldc.i4.s 11
IL_0002: ldc.i4.0
IL_0003: ldc.i4.0
IL_0004: ldc.i4.0
IL_0005: ldc.i4.1
IL_0006:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, unsigned int8)

This surprises me somewhat - I haven't gone into whether or not it can
make any semantic difference, but I wouldn't be surprised if it did.
 
I don't think that you can talk about decimal or double being more or
less accurate one than the other. They're very different
representations. My point was not that decimal is less accurate than
double, or vice versa, but that they represent values differently and
so conversions between the two may lose information.

To answer your question, the C# language spec states that decimal is a
128-bit value, so its precision is limited. In particular, with
decimal, the larger the value you try to store the fewer digits you
have after the decimal place. This is not the case with double.

Double, however, has less precision overall than decimal. A double is
only a 64-bit value, so it runs out of digits of precision much more
quickly. However, you can represent huge values and still have the same
number of digits of precision as with values near 1.

For example, I concocted a little sample program that demonstrates some
double-to-decimal loss. I had to use a value with lots of decimal
places. My statements about 1.1, in the previous post, were of course
not directly related to that value. Both decimal and double can
represent 1.1 quite nicely, and there is no loss between the two.
Instead, I used (what I remember as) PI. My apologies to mathematicians
if my memory has faded over the years. (Yes, I was too lazy to look it
up. :)

public static void Main(string[] argc)
{
    decimal pi = 3.141592653589793238462643383279M;
    double d = 3.141592653589793238462643383279;
    decimal de = (decimal)d;
    Console.WriteLine(String.Format("The decimal value is {0}, PI is {1}",
        de, pi));
}

The output from this is:

The decimal value is 3.14159265358979, PI is
3.1415926535897932384626433833

As you can see the double converted to decimal lost a bunch of
precision off the end of the value. So, I ran another test:

public static void Main(string[] argc)
{
    decimal pi = 3.141592653589793238462643383279M;
    decimal de = (decimal)3.141592653589793238462643383279;
    Console.WriteLine(String.Format("The decimal value is {0}, PI is {1}",
        de, pi));
}

The results were, predictably, identical. On the line that starts
"decimal de =", the compiler first converts the 3.14159... value to a
double format, since it has no "M" suffix. The cast then converts the
double value to a decimal format and stores it in de. However, in
converting the literal to a double, a lot of precision was lost.

Again, this won't matter except for values with a lot of precision, or
very large values.
 
I think I can explain this. I ran ildasm on my second sample program.
The results look like this:

IL_0000:  ldc.i4     0x41b65f29
IL_0005:  ldc.i4     0xb143885
IL_000a:  ldc.i4     0x6582a536
IL_000f:  ldc.i4.0
IL_0010:  ldc.i4.s   28
IL_0012:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, unsigned int8)
IL_0017:  stloc.0
IL_0018:  ldc.i4     0xe76a2483
IL_001d:  ldc.i4     0x11db9
IL_0022:  ldc.i4.0
IL_0023:  ldc.i4.0
IL_0024:  ldc.i4.s   14
IL_0026:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, unsigned int8)

Notice that in both cases the compiler uses the decimal constructor,
but it passes two different values into the constructor. The second
value is obviously truncated.

This means that it must be the compiler that converts the literal to a
double and then converts that double to a decimal, in order to get the
initial value for the decimal "de" in my code. This is a logical
optimization, since the compiler is quite capable of doing those
conversions itself rather than leaving them to the runtime.

In your case, Jon, all that happened was that 1.1M and 1.1 converted
from double to decimal yielded the same bit patterns.
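
If anyone wants to inspect the bit patterns without going through ildasm,
Decimal.GetBits will show the raw parts (a quick sketch):

using System;

class BitPatternCheck
{
    static void Main()
    {
        // GetBits returns four ints: the low, mid and high words of the 96-bit
        // integer, followed by the flags word holding the sign and scale.
        Console.WriteLine(string.Join(", ", decimal.GetBits(1.1M)));
        Console.WriteLine(string.Join(", ", decimal.GetBits((decimal)1.1)));
    }
}

If the two lines print the same four values, the bit patterns really are
identical.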
 
Bruce Wood said:
I don't think that you can talk about decimal or double being more or
less accurate one than the other. They're very different
representations. My point was not that decimal is less accurate than
double, or vice versa, but that they represent values differently and
so conversions between the two may lose information.

I agree that they're different, but not quite as different as you seem
to think.

Thinking about it a bit, I *suspect* that all doubles within the
decimal range can be exactly represented with a decimal (due to the
base of decimal including the base of double as a factor), but I'd need
to go through the maths to check.
To answer your question, the C# language spec states that decimal is a
128-bit value, so its precision is limited. In particular, with
decimal, the larger the value you try to store the fewer digits you
have after the decimal place. This is not the case with double.

Yes it is - if you store a very large number in a double, you'll get
very little precision in absolute terms. When you get to *really* large
numbers, you don't even get *integer* precision.
Double, however, has less precision overall than decimal. A double is
only a 64-bit value, so it runs out of digits of precision much more
quickly. However, you can represent huge values and still have the same
number of digits of precision as with values near 1.

Same number of digits of precision, but not the same number of digits
*after the decimal place*. Big difference. (Decimal still has the same
number of digits of precision with large numbers as with small numbers
too.)

The same is true for decimal though - it always has 28/29 digits of
precision, however large the number is. Double will always have 15/16
digits of precision (IIRC - around that, anyway).

This shouldn't be surprising, as the size of the mantissa stays the
same throughout the range - 52 bits for double, 96 bits for decimal.
(Normalisation gives double an extra implicit bit of precision for most
double values, but that's a bit of a side issue.)

The big difference between the two types is the range of exponents
which are available - decimal keeps the decimal point within the
integer represented by the mantissa, or *just* to one end of it. Double
allows it to be miles away (in either direction), letting you represent
much bigger and much smaller numbers - but with that smaller mantissa.
(There's no particular reason why decimal couldn't represent exponents
with 7 full bits, rather than just most of 5 bits, but it's probably
not appropriate for most uses of decimal.)
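
A small sketch of that trade-off, just to make it concrete (the particular
values are arbitrary):

using System;

class RangeVersusPrecision
{
    static void Main()
    {
        // Around 1e20 a double no longer has even integer precision...
        double big = 1e20;
        Console.WriteLine(big + 1 == big);          // True: the +1 is lost

        // ...while a 21-digit decimal is still exact (well under 28/29 digits)...
        decimal bigDec = 100000000000000000000m;
        Console.WriteLine(bigDec + 1 == bigDec);    // False: the +1 is kept

        // ...but decimal's range is tiny compared with double's.
        Console.WriteLine(double.MaxValue);         // roughly 1.8e308
        Console.WriteLine(decimal.MaxValue);        // roughly 7.9e28
    }
}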
 
Yes it is - if you store a very large number in a double, you'll get
very little precision in absolute terms. When you get to *really* large
numbers, you don't even get *integer* precision.

Yes, of course you're right. Monday sluggishness in the grey cells. The
problem is more in the range of values than in their precision: double has
a greater range but less precision; decimal has greater precision but a
smaller range, as you pointed out.
 
The bottom line is, I think, that one should use the "M" suffix. The
compiler even suggests it when you try to say

decimal d = 1.1;

Although there is no difference in the amount of code generated, casting to
decimal may have unforeseen consequences and cause subtle bugs in your code.
In addition to the truncation illustrated in my previous post, consider the
following:

decimal x = (decimal)2.54e-100;
decimal y = 2.54e-100M;

The first line results in no errors and no warnings. Check out the code
it generates:

IL_0074: ldc.i4.0
IL_0075: newobj instance void
[mscorlib]System.Decimal::.ctor(int32)
IL_007a: stloc.s x

as you can see, the compiler cheerfully loads zero into the decimal
variable "x" without telling you anything's wrong.

The second line, on the other hand, won't compile, so you know
something's wrong.
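
And, if I read the conversion rules correctly, the same silent underflow to
zero happens at run time, not just with compile-time constants. A quick
sketch (the variable is only there to keep the compiler from folding the
conversion, as it did in the IL above):

using System;

class UnderflowAtRunTime
{
    static void Main()
    {
        double tiny = 2.54e-100;
        decimal x = (decimal)tiny;
        Console.WriteLine(x);   // prints 0: the value underflows to zero with no exception
    }
}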
 