Hello! I was working on some code the other day, and I came across an odd
discrepancy between the decimal and double types.
If I attempt to divide a decimal by zero, the framework throws a
DivideByZeroException.
If I attempt to divide a double by zero, the framework returns infinity.
From a mathematical standpoint, it would seem to me that the decimal handles
this division correctly, while the double's handling is flawed.
Is there a practical reason why these two types handle this so differently?
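Here's a minimal C# sketch of what I mean (the class and variable names are
just illustrative; I'm using variables because dividing decimal *constants*
by zero won't even compile):

using System;

class DivideByZeroDemo
{
    static void Main()
    {
        // double follows IEEE 754: dividing a nonzero value by zero
        // yields a signed infinity rather than throwing.
        double d = 1.0;
        double dZero = 0.0;
        Console.WriteLine(d / dZero);                             // Infinity
        Console.WriteLine(double.IsPositiveInfinity(d / dZero));  // True

        // decimal has no representation for infinity (or NaN), so the
        // runtime throws DivideByZeroException instead.
        decimal m = 1.0m;
        decimal mZero = 0.0m;
        try
        {
            Console.WriteLine(m / mZero);
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine(ex.Message);  // "Attempted to divide by zero."
        }
    }
}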
Thanks in advance!
Mike