Beamguy
I am writing a numerical program and am pondering which precision to use for floating
point numbers. A number of years back there was a class of computers on which it was
faster to do floating point arithmetic in double precision, since the hardware floating
point unit worked in double precision. I am not sure about PCs. Are hardware floating
point units on PCs double or single precision these days, and based on that, which is
faster to use?
Thanks.