I don't think you can talk about decimal or double being more or less
accurate than the other. They're very different representations. My
point was not that decimal is less accurate than double, or vice versa,
but that they represent values differently, so conversions between the
two may lose information.
To answer your question, the C# language spec states that decimal is a
128-bit value, so its precision is limited: it holds roughly 28-29
significant digits. In particular, with decimal, the larger the value
you try to store, the fewer digits you have after the decimal place.
This is not the case with double.
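To see what that means in practice, here's a minimal sketch of my own:
dividing by 3 forces decimal to spend all of its significant digits, so
the larger the integer part, the fewer digits remain for the fraction.

decimal nearOne = 1m / 3m;              // 0.3333333333333333333333333333 (28 digits after the point)
decimal larger  = 1000000000000m / 3m;  // 333333333333.333... (far fewer digits after the point)
Console.WriteLine(nearOne);
Console.WriteLine(larger);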
Double, however, has less precision overall than decimal. A double is
only a 64-bit value (roughly 15-17 significant digits), so it runs out
of digits of precision much more quickly. On the other hand, you can
represent huge values and still have the same number of significant
digits as with values near 1.
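A similar sketch for double (the exact digit count you see depends on
the runtime's default formatting, but the point stands either way): the
number of significant digits stays roughly the same whether the value
is near 1 or enormous; only where those digits fall relative to the
decimal point changes.

double nearOne = 1.0 / 3.0;   // prints something like 0.3333333333333333
double huge    = 1e20 / 3.0;  // prints something like 3.333333333333333E+19
Console.WriteLine(nearOne);
Console.WriteLine(huge);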
For example, I concocted a little sample program that demonstrates some
double-to-decimal loss. I had to use a value with lots of decimal
places; my statements about 1.1 in the previous post were, of course,
not about a value with that much precision. Decimal can represent 1.1
exactly, and while double only stores the nearest binary approximation
of 1.1, converting that back to decimal still gives 1.1, so no loss
shows up between the two.
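A quick sketch of that round trip (converting a double to decimal keeps
only about 15 significant digits, which is plenty to recover 1.1):

double d = 1.1;          // double actually stores the nearest binary approximation of 1.1
decimal de = (decimal)d; // the conversion rounds to ~15 significant digits
Console.WriteLine(de);   // prints 1.1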
Instead, I used (what I remember as) PI. My apologies to mathematicians
if my memory has faded over the years. (Yes, I was too lazy to look it
up.)
public static void Main(string[] argc)
{
    decimal pi = 3.141592653589793238462643383279M;
    double d = 3.141592653589793238462643383279;
    decimal de = (decimal)d;
    Console.WriteLine(String.Format("The decimal value is {0}, PI is {1}", de, pi));
}
The output from this is:
The decimal value is 3.14159265358979, PI is 3.1415926535897932384626433833
As you can see, the double converted to decimal lost a bunch of
precision off the end of the value. So I ran another test:
public static void Main(string[] argc)
{
    decimal pi = 3.141592653589793238462643383279M;
    decimal de = (decimal)3.141592653589793238462643383279;
    Console.WriteLine(String.Format("The decimal value is {0}, PI is {1}", de, pi));
}
The results were, predictably, identical. On the line that starts
"decimal de =", the compiler first treats the 3.14159... literal as a
double, since it has no "M" suffix. The cast then converts that double
value to decimal and stores it in de. By the time the cast runs, though,
a lot of precision has already been lost in turning the literal into a
double. Again, this won't matter except for values that need a lot of
precision or for very large values.
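If those extra digits do matter, the fix is simply to keep the literal
in decimal form from the start by adding the M suffix, so no double ever
enters the picture. A minimal sketch using the same value as above:

decimal viaDouble = (decimal)3.141592653589793238462643383279;  // parsed as a double first: 3.14159265358979
decimal direct    = 3.141592653589793238462643383279M;          // parsed directly as decimal: 3.1415926535897932384626433833
Console.WriteLine(viaDouble);
Console.WriteLine(direct);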