why float

Martijn Mulder

Why is there a type float in C#?

Working with integers is best, of course. It is like carrying pebbles around
in your pocket. But if you need values with a fractional part, in graphics
transforms or in scientific programs, you take the highest precision,
double.

I can understand that languages like C and C++ have two different floating
point types, float and double, because memory size was once an issue. But
these days it is not, and built-in co-processors do computations on them
quickly enough.

If you can define a language from scratch, why would you make 'float'
(single precision) a fundamental type?
 
Lucian Wischik

Martijn Mulder said:
If you can define a language from scratch, why would you make 'float'
(single precision) a fundamental type?

Most of GDI+ uses floats, it seems, not doubles. And 3D graphics likewise.
My guess is that it's because float is 32-bit, so each coordinate takes half
the memory and half the memory bandwidth. That makes it tremendously
compelling for graphics.
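
To give a flavour of it, here is a small sketch (it assumes a reference to
System.Drawing; the coordinates and the angle are made up):

using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class GdiPlusUsesFloats
{
    static void Main()
    {
        // GDI+ coordinate types are built on float, not double.
        PointF p = new PointF(10.5f, 20.25f);               // single-precision point
        RectangleF r = new RectangleF(0f, 0f, 640f, 480f);  // single-precision rectangle

        Matrix m = new Matrix();   // 3x2 affine transform with float elements
        m.Rotate(30f);             // the angle parameter is a float
        PointF[] pts = { p };
        m.TransformPoints(pts);    // transforms a PointF[] in place

        Console.WriteLine("{0} inside {1}: {2}", pts[0], r, r.Contains(pts[0]));
    }
}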
 
Bruce Wood

Waaaay back in my university days, in my Numerical Analysis course, I
remember one example in which using doubles everywhere (the obvious
solution) actually increased error and gave a less precise answer than
judicious application of floats in the right places.

Don't ask me to remember the precise example. That was a long time ago.

So, yes, even when memory is not an issue floats have their uses...
mostly for engineering applications, I would imagine.
 
Lebesgue

And how about performance? Aren't arithmetic operations with floats much
faster than with doubles?
 
Jon Skeet [C# MVP]

Lebesgue said:
And how about performance? Aren't arithmetic operations with floats much
faster than with doubles?

I did some tests with this a while back, and I think I found that they
were the same, or in some cases floats were slower. Modern CPUs are geared
up for 64-bit (and, natively, 80-bit) floating point operations, so
operations on other sizes involve converting at the start and the end, I
believe. Having said that, I'm far from an expert on this - it's worth
running some tests yourself.
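
Something along these lines is what I mean by a test - a rough sketch only;
the timings depend heavily on the machine, the JIT and the exact loop:

using System;
using System.Diagnostics;

class FloatVsDoubleTiming
{
    const int N = 100000000;   // one hundred million iterations

    static void Main()
    {
        // Run each loop twice and ignore the first pass, which includes JIT time.
        TimeFloat(); TimeDouble();
        Console.WriteLine("float:  {0} ms", TimeFloat());
        Console.WriteLine("double: {0} ms", TimeDouble());
    }

    static long TimeFloat()
    {
        float acc = 0f, x = 1.0000001f;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            acc = acc * x + 0.5f;   // multiply-add in single precision
        sw.Stop();
        Console.WriteLine(acc);     // use the result so the loop isn't optimised away
        return sw.ElapsedMilliseconds;
    }

    static long TimeDouble()
    {
        double acc = 0.0, x = 1.0000001;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            acc = acc * x + 0.5;    // the same loop in double precision
        sw.Stop();
        Console.WriteLine(acc);
        return sw.ElapsedMilliseconds;
    }
}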
 
Alun Harford

Jon Skeet said:
I did some tests with this a while back, and I think I found that they
were the same, or in some cases floats were slower. Modern CPUs are geared
up for 64-bit (and, natively, 80-bit) floating point operations, so
operations on other sizes involve converting at the start and the end, I
believe. Having said that, I'm far from an expert on this - it's worth
running some tests yourself.

32-bit floats mean pulling half the amount of data from memory, which is
very nice - particularly if those values aren't already stored in the cache.
If the matrix that you're working with doesn't quite fit in the cache with
doubles, you'll *really* notice the difference!
Another major advantage (although less so these days) is that SSE only
supports 32-bit floats - and if you want to do floating point remotely
quickly on a modern chip, that's the way to do it (although SSE2 supports
64-bit too, it's only available on very modern chips: P4s and AMD K8s).
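
As a rough sketch of that effect (the array size is picked arbitrarily so
the double array is well outside any cache; the numbers will vary a lot
between machines):

using System;
using System.Diagnostics;

class BandwidthSketch
{
    static void Main()
    {
        const int n = 25000000;               // 25 million elements:
        float[] floats = new float[n];        // about 100 MB
        double[] doubles = new double[n];     // about 200 MB
        for (int i = 0; i < n; i++) { floats[i] = 1f; doubles[i] = 1.0; }

        Stopwatch sw = Stopwatch.StartNew();
        float fsum = 0f;
        for (int i = 0; i < n; i++) fsum += floats[i];   // streams ~100 MB
        Console.WriteLine("float  sum {0} in {1} ms", fsum, sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        double dsum = 0.0;
        for (int i = 0; i < n; i++) dsum += doubles[i];  // streams ~200 MB
        Console.WriteLine("double sum {0} in {1} ms", dsum, sw.ElapsedMilliseconds);

        // Incidentally, the float sum gets stuck at 16777216 (2^24), because
        // adding 1 to that value no longer changes a float; the double sum
        // comes out as 25000000 exactly.
    }
}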

Alun Harford
 
Barry Kelly

Lebesgue said:
And how about performance? Aren't arithmetic operations with floats much
faster than with doubles?

You'd have to test using C or some other unmanaged language and change the
control word on the FPU. Using floats in the CLR will end up doing
expensive truncation operations for loads and stores, IIRC.

-- Barry
 
Martijn Mulder

Bruce Wood said:
Waaaay back in my university days, in my Numerical Analysis course, I
remember one example in which using doubles everywhere (the obvious
solution) actually increased error and gave a less precise answer than
judicious application of floats in the right places.
<snip>

That is unlikely. A double *is* a float with more precision, so it will
always be as precise or more precise than a float is. Answers along the
lines of "32-bit is favored" are platform-dependent and not really
convincing. Why is there a fundamental type 'float' and a fundamental type
'double' that both do the same thing? When to use the one, and when to use
the other?
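
To put some numbers on that (a small sketch; the printed digits are
approximate and depend on the runtime's formatting):

using System;

class FloatVsDoublePrecision
{
    static void Main()
    {
        float f = 0.1f;     // roughly 7 significant decimal digits
        double d = 0.1;     // roughly 15-16 significant decimal digits

        Console.WriteLine(f.ToString("G9"));    // something like 0.100000001
        Console.WriteLine(d.ToString("G17"));   // something like 0.10000000000000001

        // Widening a float to a double is exact: the double carries the
        // float's rounding error along unchanged; it never loses anything.
        Console.WriteLine((double)f);           // 0.100000001490116...

        // 16777217 (2^24 + 1) fits exactly in a double but not in a float.
        Console.WriteLine((float)16777217 == 16777216f);    // True
        Console.WriteLine((double)16777217 == 16777216.0);  // False
    }
}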
 
Nick Hounsome

Martijn Mulder said:
<snip>

That is unlikely. A double *is* a float with more precision, so it will
always be as precise or more precise than a float is. Answers along the
lines of "32-bit is favored" are platform-dependent and not really
convincing. Why is there a fundamental type 'float' and a fundamental type
'double' that both do the same thing? When to use the one, and when to use
the other?

What he is saying rings a bell with me too.

Unfortunately I failed my numerical analysis block so I can't add an
authoritative explanation either :(

It possibly has to do with spurious precision in the doubles - in numerical
analysis the inputs are usually measurements approximating the true values
and do not have as many significant digits as a double, so when you start
using those extra digits you can introduce more error than if you stuck
with float ... or not. I think it depends on the algorithm as well.

It's not really an argument for float though, because you can achieve the
same or better results by using double and reworking the algorithm and/or
rounding here and there.
 
Bruce Wood

Nick said:
What he is saying rings a bell with me too.

Unfortunately I failed my numerical analysis block so I can't add an
authoritative explanation either :(

It possibly has to do with spurious precision in the doubles - in numerical
analysis the inputs are usually measurements approximating the true values
and do not have as many significant digits as a double, so when you start
using those extra digits you can introduce more error than if you stuck
with float ... or not. I think it depends on the algorithm as well.

Yes, that's more or less what I remember, too.

Nick said:
It's not really an argument for float though, because you can achieve the
same or better results by using double and reworking the algorithm and/or
rounding here and there.

Agreed. It was just that certain algorithms required judicious use of
single-precision in order to stop rounding errors from amplifying
themselves. I remember it only because it was counter-intuitive.
 
Göran Andersson

Jon said:
I did some tests with this a while back, and I think I found that they
were the same, or in some cases floats were slower. Modern CPUs are geared
up for 64-bit (and, natively, 80-bit) floating point operations, so
operations on other sizes involve converting at the start and the end, I
believe. Having said that, I'm far from an expert on this - it's worth
running some tests yourself.

The FPU in modern processors only handles two data types, int (32/64?)
and double, so when you put a float value on the FPU stack it is
converted to a double. With that in mind it would be quite surprising if
there were any huge difference in performance between floats and
doubles.

A single calculation would be faster using double, as no conversion is
needed. If a lot of calculations are done on a large number of floats
or doubles, using floats might be slightly faster when memory bandwidth
comes into play.
 
Lucian Wischik

Göran Andersson said:
The FPU in modern processors only handles two data types, int (32/64?)
and double

Is that just for the FPUs on-board the CPU, or also for the FPUs
on-board the graphics accelerator?
 
Barry Kelly

Göran Andersson said:
The FPU in modern processors only handles two data types, int (32/64?)

The FPU on x86 (i.e. the x87) handles three types: 32-bit, 64-bit and
80-bit. All calculations on the FPU stack are done in 80-bit precision by
hardware "default", although the MS C++ RTL, and hence the CLR, sets the
mantissa precision down to that of doubles, i.e. 64-bit IEEE floats.

Göran Andersson said:
and double, so when you put a float value on the FPU stack it is
converted to a double. With that in mind it would be quite surprising if
there were any huge difference in performance between floats and doubles.

Exactly. You need to change the FPU control word to change its default
precision.

More information here:

http://blogs.msdn.com/davidnotario/archive/2005/08/08/449092.aspx

-- Barry
 
Bruce Wood

Nick Hounsome said:
It's not really an argument for float though, because you can achieve the
same or better results by using double and reworking the algorithm and/or
rounding here and there.

Well, it's not an argument for why we _need_ floats, but the OP's
question was "why even bother with them?" It is, however, a
demonstration that they can be useful.
 
Rene

What about databases? You don't want to use a double in a database if you
only need a float, so as not to waste the extra space.

Many programs retrieve their data from a database, and in my opinion I
would much rather retrieve that float value from the database and store it
in a float in my program. It is more self-documenting, and I will be able
to catch errors like overflow right away rather than at the point where I
have to cast my double back to a float to put it into the database.

Just a thought
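
A sketch of what I mean (the connection string, table and column are
invented; SQL Server's 4-byte 'real' type comes back as a .NET float and
its 8-byte 'float' type as a double):

using System;
using System.Data.SqlClient;

class ReadSensorReadings
{
    static void Main()
    {
        // Hypothetical database: Readings.Value is declared as 'real' (4 bytes).
        using (SqlConnection conn = new SqlConnection(
            "Server=.;Database=Sensors;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand("SELECT Value FROM Readings", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // GetFloat matches the column type exactly - no silent
                    // widening to double on the way in, and no cast back to
                    // float on the way out.
                    float value = reader.GetFloat(0);
                    Console.WriteLine(value);
                }
            }
        }
    }
}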
 
Nick Hounsome

Rene said:
What about databases? You don't want to use a double in a database if you
only need a float, so as not to waste the extra space.

For example?

The space is extremely unlikely to be a problem in my experience, and all
the examples that I can think of that would use huge numbers of floats
would also want the precision of double.

I don't recall ever coming across a situation in which I needed to store a
float in a database.

Rene said:
Many programs retrieve their data from a database, and in my opinion I
would much rather retrieve that float value from the database and store it
in a float in my program. It is more self-documenting, and I will be able
to catch errors like overflow right away rather than at the point where I
have to cast my double back to a float to put it into the database.

But you will never get overflow.
Firstly, really large doubles will just become infinity when converted to
float.
Secondly, the range of float is so large that it is rarely a problem.
Thirdly, loss of precision doesn't normally give an error.
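
For example (a tiny sketch):

using System;

class OverflowBecomesInfinity
{
    static void Main()
    {
        double big = 1e300;       // far beyond float's range (about 3.4e38)
        float f = (float)big;     // no exception: the cast just yields infinity
        Console.WriteLine(float.IsPositiveInfinity(f));   // True

        Console.WriteLine(1e300 * 1e300);   // doubles overflow silently to infinity too
    }
}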
 
Lucian Wischik

Göran Andersson said:
The FPU in modern processors only handles two data types, int (32/64?)
and double, so when you put a float value on the FPU stack it is
converted to a double. With that in mind it would be quite surprising if
there were any huge difference in performance between floats and
doubles.

I've just been reading the specs for the Cell BE processor (to be used
in the PS3). I'm sure this wasn't a target platform that MS had in mind
when they made C#... but its floating point units operate on either four
32-bit floats or two 64-bit floats at a time.
 
