float vs double


guy

Can someone point me to a page that gives a decision tree for deciding when
to use float or double in a C# application, including the performance impact
of using one over the other? And when should decimal be used?

Thanks
 
guy said:
Can someone point me to a page that gives a decision tree for deciding when
to use float or double in a C# application, including the performance impact
of using one over the other.

I believe there isn't much performance impact at all (in terms of speed)
in using double instead of float. However, there is a memory impact: if
you're storing an awful lot of floating-point numbers and float gives you
enough accuracy (6-7 significant figures), then use float, but by default
I'd use double.
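For example, just as a quick sketch (the exact digits printed may vary
slightly by runtime):

using System;

class FloatVsDouble
{
    static void Main()
    {
        float f = 1.23456789f;   // float keeps only ~6-7 significant figures
        double d = 1.23456789;   // double keeps ~15-16 significant figures

        Console.WriteLine(f.ToString("R"));   // 1.2345679 - digits lost
        Console.WriteLine(d.ToString("R"));   // 1.23456789 - preserved
    }
}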
And when should decimal be used?

When the sample data is accurate in decimal form (e.g. with currency) and
needs to stay that way. There's a significant performance difference between
float/double and decimal.
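As a quick sketch of why binary floating point doesn't suit currency
(repeatedly adding a cent):

using System;

class PennyTest
{
    static void Main()
    {
        double dSum = 0.0;
        decimal mSum = 0m;

        // Add one cent a hundred times; the total should be exactly 1
        for (int i = 0; i < 100; i++)
        {
            dSum += 0.01;     // 0.01 has no exact binary representation
            mSum += 0.01m;    // decimal stores 0.01 exactly
        }

        Console.WriteLine(dSum == 1.0);   // False - the sum has drifted
        Console.WriteLine(mSum == 1m);    // True
    }
}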
 
Jon,

I award you twice:
1. fastest response time
2. most comprehensive answers

Many thanks and this brings me to another question:

Yes - this application is 100% financial (futures market data) so rarely
needs accuracy beyond about 4 decimal places.

You said: "There's a significant performance difference between float/double
and decimal."

You didn't say whether the performance is improved or degraded by using
decimal instead of float/double.

Thanks
Guy
 
guy said:
I award you twice:
1. fastest response time
2. most comprehensive answers

LOL :)
Many thanks and this brings me to another question:

Yes - this application is 100% financial (futures market data) so rarely
needs accuracy beyond about 4 decimal places.

Right. Bear in mind, however, that as soon as you've performed a few
operations on the data, the accuracy will have decreased. For financial
applications, I'd suggest either using Decimal or using scaled integers
with the appropriate business rounding rules etc.
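Here's a minimal sketch of the scaled-integer approach, assuming prices
are held to 4 decimal places as you describe (the fee rate and rounding
rule are purely illustrative):

using System;

class ScaledIntegers
{
    // Prices stored as integer ten-thousandths, i.e. 4 decimal places
    const long Scale = 10000;

    static void Main()
    {
        long price = 987654;                // represents 98.7654
        long qty = 250;
        long total = price * qty;           // exact integer arithmetic

        // Hypothetical 0.15% fee, with an explicit round-half-up rule
        long fee = RoundHalfUp(total * 15, 10000);

        Console.WriteLine((decimal)total / Scale);   // 24691.35
        Console.WriteLine((decimal)fee / Scale);     // 37.037
    }

    // Integer division with round-half-up, so the business rounding
    // rule is spelled out in code rather than left to the hardware
    static long RoundHalfUp(long numerator, long divisor)
    {
        return (numerator + divisor / 2) / divisor;
    }
}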
You said: "There's a significant performance difference between float/double
and decimal."

You didn't say whether the performance is improved or degraded by using
decimal instead of float/double.

Sorry - degraded. Float/double arithmetic is performed in hardware, but
decimal arithmetic effectively has to be done in software.
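If you want to see the difference on your own machine, here's a rough
timing sketch (not a rigorous benchmark - the numbers will vary):

using System;
using System.Diagnostics;

class TimingSketch
{
    static void Main()
    {
        const int N = 10000000;

        Stopwatch sw = Stopwatch.StartNew();
        double d = 0;
        for (int i = 0; i < N; i++) d += 1.000001;
        sw.Stop();
        Console.WriteLine("double:  " + sw.ElapsedMilliseconds + "ms");

        sw = Stopwatch.StartNew();
        decimal m = 0;
        for (int i = 0; i < N; i++) m += 1.000001m;
        sw.Stop();
        Console.WriteLine("decimal: " + sw.ElapsedMilliseconds + "ms");
        // Expect decimal to come out roughly an order of magnitude
        // slower, since it has no hardware support
    }
}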
 
guy said:
I've also found this link, which compares the performance of
float/double/decimal:
http://www.geocities.com/herong_yang/cs_b/performance.html

Hmm... the fact that he's doing stuff with variables passed by
reference is a bad first sign. That changes the performance
characteristics, and isn't typical of real use.

The accuracy "problem" with decimal is (I believe - I haven't done a
detailed analysis of the program) that it doesn't cope with very small
numbers as well as double. (Have a look into how decimals and doubles
are stored for more information about this.)
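To make that concrete, here's a small sketch of the range difference
(double uses a base-2 exponent; decimal is a 96-bit integer scaled by a
power of ten between 0 and 28):

using System;

class SmallNumbers
{
    static void Main()
    {
        // double copes with magnitudes down to about 5E-324
        double d = 1e-300;
        Console.WriteLine(d * 0.1);     // 1E-301 - still representable

        // decimal can't go below 1E-28, so this underflows to zero
        decimal m = 1e-28m;             // smallest positive decimal
        Console.WriteLine(m * 0.1m);    // prints zero
    }
}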
 