Martijn Mulder
Why is there a type float in C#?
Working with integers is best, of course. It is like carrying pebbles around
in your pocket. But if you need fractional values, in graphics transforms or
in scientific programs, you take the highest precision: double.
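To make the precision gap concrete, here is a minimal C# sketch (the class name is just illustrative): the same literal loses digits as a float that it keeps as a double.

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            // Same 20-digit literal assigned to both types; IEEE 754
            // rounds each to the nearest representable value.
            float  f = 3.14159265358979323846f;
            double d = 3.14159265358979323846;

            Console.WriteLine(f.ToString("G9"));  // 3.14159274         (~7 significant digits)
            Console.WriteLine(d.ToString("G17")); // 3.1415926535897931 (~15-16 significant digits)
        }
    }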
I can understand why languages like C and C++ have two different floating-point
types, float and double, because memory size was once an issue. But these days
it is not, and built-in coprocessors do the computations quickly enough.
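For reference, the sizes in question, as a small sketch using sizeof, which C# permits on the built-in numeric types in safe code:

    using System;

    class SizeDemo
    {
        static void Main()
        {
            // A float is half the width of a double.
            Console.WriteLine(sizeof(float));  // 4 bytes
            Console.WriteLine(sizeof(double)); // 8 bytes
        }
    }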
If you could design a language from scratch, why would you make 'float'
(single precision) a fundamental type?