Why does Math.Sqrt not take a decimal?

Ethan Strauss

Hi,
Why does Math.Sqrt() only accept a double as a parameter? I would think
it would be just as happy with a decimal (or int, or float, or ....). I can
easily convert back and forth, but I am interested in what is going on behind
the scenes and if there is some aspect of decimals that keeps them from being
used in this calculation.
Thanks!
Ethan

Ethan Strauss Ph.D.
Bioinformatics Scientist
Promega Corporation
2800 Woods Hollow Rd.
Madison, WI 53711
608-274-4330
800-356-9526
(e-mail address removed)
 
Jon Skeet [C# MVP]

Ethan Strauss said:
Why does Math.Sqrt() only accept a double as a parameter? I would think
it would be just as happy with a decimal (or int, or float, or ....). I can
easily convert back and forth, but I am interested in what is going on behind
the scenes and if there is some aspect of decimals that keeps them from being
used in this calculation.

Well, decimals are typically used when you're dealing with numbers
which are naturally and *exactly* represented as decimals - currency
being the most common example.

The kind of number you're likely to take the square root of is the kind
of number you should probably be using double for instead.
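
To make the usual workaround concrete, here is a minimal sketch (the value
and names are purely illustrative) of converting a decimal to double for
Math.Sqrt and back:

using System;

class SqrtOfDecimalDemo
{
    static void Main()
    {
        // Math.Sqrt is only declared for double:
        //   public static double Sqrt(double d)
        decimal measurement = 2.25m;   // illustrative value

        // The usual workaround: cast to double, take the root, cast back.
        // The round trip through double can lose precision, so the result
        // should not be treated as an exact decimal.
        double root = Math.Sqrt((double)measurement);
        decimal backAsDecimal = (decimal)root;

        Console.WriteLine(root);           // 1.5
        Console.WriteLine(backAsDecimal);  // 1.5
    }
}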
 
Arne Vajhøj

Ethan said:
Why does Math.Sqrt() only accept a double as a parameter? I would think
it would be just as happy with a decimal (or int, or float, or ....). I can
easily convert back and forth, but I am interested in what is going on behind
the scenes and if there is some aspect of decimals that keeps them from being
used in this calculation.

Usually that type of function returns a value of the same
type as its argument.

Sqrt of a decimal will not return a true decimal (decimal is
supposed to be exact).

Sqrt of an int will definitely not return an int.

double Sqrt(decimal) and double Sqrt(int) will not follow
the usual practice for such functions.

decimal Sqrt(decimal) and int Sqrt(int) will give
a misleading impression of the result.

Besides: when do you need to take the square root of 87 dollars ??

:)

Arne
 
Rene

All of the variables used by the Math class are primitive types.

The CLR does not consider a Decimal to be a primitive type (the CLR does not
contain special IL instructions to handle decimal types). I would imagine
that is one of the reasons… maybe?

You can check out the definition of Decimal and you will see that it
implements/overrides just about all the math operations, including but
not limited to +, -, *, /, Round, Ceiling, Floor, etc. Many of these are
also included on the Math class, but the Decimal type provides its own
implementation.
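
A small illustration of that (the values are made up): decimal's own
operators next to the Math overloads that accept decimal:

using System;

class DecimalOverloadsDemo
{
    static void Main()
    {
        decimal price = 19.955m;
        decimal total = price * 3;   // decimal defines its own * operator: 59.865

        // Math exposes decimal overloads for a handful of methods...
        Console.WriteLine(Math.Round(price, 2));   // 19.96 (banker's rounding)
        Console.WriteLine(Math.Floor(total));      // 59
        Console.WriteLine(Math.Ceiling(total));    // 60

        // ...and Decimal provides implementations of its own.
        Console.WriteLine(decimal.Round(price, 2)); // 19.96
    }
}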
 
Cor Ligthert[MVP]

Ethan,

Why do you think they ever created a double (or value representations like
that with other names)?

(In the beginning, all data was stored only as binary bytes, and a while
later as so-called binary decimals.)

In my view, it was to do things like Math.Sqrt().

Cor
 
Jon Skeet [C# MVP]

Usually that type of function returns a value of the same
type as its argument.

Sqrt of a decimal will not return a true decimal (decimal is
supposed to be exact).

Decimal operations are exact/accurate in certain well-defined
circumstances. There are plenty of operations on decimal which won't
give exact results: 1m / 3, for example. Likewise even addition - both
1e25 and 1e-25 are exactly representable as decimals, but their sum
isn't.
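
A quick check of those two examples, assuming the standard 28-29
significant digits of decimal:

using System;

class DecimalExactnessDemo
{
    static void Main()
    {
        // 1/3 cannot be represented exactly, so the division rounds:
        Console.WriteLine(1m / 3);         // 0.3333333333333333333333333333
        Console.WriteLine((1m / 3) * 3);   // 0.9999999999999999999999999999

        // Both operands are exactly representable, but their sum would need
        // about 51 significant digits, so the small addend is lost:
        decimal big   = 1e25m;
        decimal small = 1e-25m;
        Console.WriteLine(big + small == big);   // True
    }
}
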
Sqrt of an int will definitely not return an int.

Indeed, but then you wouldn't really want to.
double Sqrt(decimal) and double Sqrt(int) will not follow
the usual practice for such functions.

decimal Sqrt(decimal) and
double Sqrt(int)
would be fine in my view, but the first is useless for typical
encouraged uses of decimal, and the second is already available
through an implicit conversion from int to double.
decimal Sqrt(decimal) and int Sqrt(int) will give
a misleading impression of the result.

Does decimal division give you that impression as well? How about
integer division?
Besides: when do you need to take the square root of 87 dollars ??

And *that* is the real reason, IMO. The encouraged uses of decimal are
for the kinds of quantity one just doesn't take square roots of (like
money, as per your example).

Jon
 
Jon Skeet [C# MVP]

All of the variables used by the Math class are primitive types.

Round, Floor, Ceiling, Max, Min, Truncate, Sign and Abs all have
overloads which take decimals.
The CLR does not consider a Decimal to be a primitive type (the CLR does not
contain special IL instructions to handle decimal types). I would imagine
that is one of the reasons… maybe?

Well, doing a square root properly (as opposed to converting to
double, taking the square root and then converting back, which would
be a horrible way to go) would certainly be rather slower than when
using double due to the lack of hardware support. More importantly
though, it just wouldn't be useful for the intended uses of decimal.

Jon
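
For a sense of what "doing it properly" might look like, here is a hedged
sketch - not a framework API; the DecimalMath name and the iteration cap are
made up - of a square root computed entirely in decimal with Newton's
iteration. It is noticeably slower than Math.Sqrt on double, and the last
digit can still be off by one unit:

using System;

static class DecimalMath
{
    // Hypothetical helper: Newton's iteration carried out in decimal.
    public static decimal Sqrt(decimal value)
    {
        if (value < 0) throw new ArgumentOutOfRangeException(nameof(value));
        if (value == 0) return 0;

        // Seed from double - close enough to converge in a few steps.
        decimal guess = (decimal)Math.Sqrt((double)value);

        for (int i = 0; i < 10; i++)
        {
            decimal next = (guess + value / guess) / 2;
            if (next == guess) break;   // converged to decimal precision
            guess = next;
        }
        return guess;
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(DecimalMath.Sqrt(6.25m)); // 2.5
        Console.WriteLine(DecimalMath.Sqrt(2m));    // ~1.414213562373095048801688724...
    }
}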
 
Rene

Round, Floor, Ceiling, Max, Min, Truncate, Sign and Abs all have
overloads which take decimals.

Yes, but if you look at the implementation of these overloads, they are
nothing more than a wrapper around the Decimal class. For example,
Math.Round will internally call Decimal.Round.
Well, doing a square root properly (as opposed to converting to
double, taking the square root and then converting back, which would
be a horrible way to go) would certainly be rather slower than when
using double due to the lack of hardware support. More importantly
though, it just wouldn't be useful for the intended uses of decimal.

Exactly, my point was that since there are no special IL instructions for
Decimals, getting the square root of a decimal using decimal context wouldn’t
be very efficient.
 
Jon Skeet [C# MVP]

Rene said:
Yes, but if you look at the implementation of these overloads, they are
nothing more than a wrapper around the Decimal class. For example,
Math.Round will internally call Decimal.Round.

Sure, but that doesn't change my point at all. You claimed that all the
methods in Math took primitive types (a parameter is a variable, after
all), and the methods above are counterexamples.

The implementation should be irrelevant to the discussion - it could
just as easily have been the other way round, with Decimal.Round
calling Math.Round.
Exactly, my point was that since there are no special IL instructions for
Decimals, getting the square root of a decimal using decimal context wouldn’t
be very efficient.

And *my* point was that efficiency isn't the main issue here. Just
because it wouldn't be efficient to take the square root doesn't make
it undesirable per se. It's the uses of decimal which make it
undesirable.
 
Arne Vajhøj

Jon said:
Decimal operations are exact/accurate in certain well-defined
circumstances. There are plenty of operations on decimal which won't
give exact results: 1m / 3, for example.

True, but division would be missed if it was not there.

I would not have a problem if decimal division threw an exception
when the result was not exact, but it would probably be too
inefficient to test for that.
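
A sketch of what such a check might look like in user code; the helper is
hypothetical, the multiply-back test is roughly the overhead in question,
and overflow on the multiply-back is ignored here:

using System;

static class ExactDecimal
{
    // Hypothetical helper: divide, then verify by multiplying back.
    public static decimal DivideExact(decimal dividend, decimal divisor)
    {
        decimal quotient = dividend / divisor;
        if (quotient * divisor != dividend)
            throw new InvalidOperationException("Result of division is not exact.");
        return quotient;
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(ExactDecimal.DivideExact(1m, 4));   // 0.25
        Console.WriteLine(ExactDecimal.DivideExact(1m, 3));   // throws InvalidOperationException
    }
}
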
Likewise even addition - both 1e25 and 1e-25 are exactly representable
as decimals, but their sum isn't.

Which is not good either.

But I see your point.

Decimal is not exact in other contexts either.

I love the concept of decimal, but I hate the implementation chosen.
Indeed, but then you wouldn't really want to.


decimal Sqrt(decimal) and
double Sqrt(int)
would be fine in my view, but the first is useless for typical
encouraged uses of decimal, and the second is already available
through an implicit conversion from int to double.


Does decimal division give you that impression as well?

It can.
How about
integer division?

No, integer division works just as expected. I prefer the Pascal style,
with one operator for floating point division and another for integer
division, to emphasize that they are two different operators.

Arne
 
Jon Skeet [C# MVP]

True, but division would be missed if it was not there.

And that's *exactly* my point. Division is a natural thing to want to
do on a decimal - there are times when you need to divide amounts that
are best represented as decimals, even if you might lose some
information.

Taking the square root of a decimal is *not* a natural thing to want
to do with the type of information represented as decimals, hence its
absence.
I would not have a problem if decimal division threw an exception
when the result was not exact, but it would probably be too
inefficient to test for that.

I'd have a massive problem with that, to be honest. I suspect that
there are many, many times when you don't mind decimal losing some
information, because you're going to round anyway. However, other
operations *do* need to be precise, and the input will typically
ensure that's the case anyway.
Which is not good either.

But I see your point.

Decimal is not exact in other contexts either.

I love the concept of decimal, but I hate the implementation chosen.

I'd like to see BigDecimal in the framework at some point, but decimal
has its advantages too (bounded space being the most obvious one).

Why? Surely it's a matter of common sense that decimal can't
accurately represent all rational numbers exactly.
No, integer division works just as expected.

So what's the difference? It's information loss either way, and should
be expected. Division is an inherently lossy operation in computing
unless you actually keep both operands. I don't see why losing
information in decimal is a problem, but losing information with
integers isn't.
I prefer the Pascal style,
with one operator for floating point division and another for integer
division, to emphasize that they are two different operators.

Occasionally that would be useful, but mostly I prefer the
consistency.

Jon
 
Arne Vajhøj

Jon said:
I'd have a massive problem with that, to be honest. I suspect that
there are many, many times when you don't mind decimal losing some
information, because you're going to round anyway. However, other
operations *do* need to be precise, and the input will typically
ensure that's the case anyway.

I guess I prefer either purely approximate or purely precise.
I'd like to see BigDecimal in the framework at some point, but decimal
has its advantages too (bounded space being the most obvious one).

There is almost always a pro and a con.
Why? Surely it's a matter of common sense that decimal can't
accurately represent all rational numbers exactly.

True, but it can give results that do not follow
accounting practices.
So what's the difference? It's information loss either way, and should
be expected. Division is an inherently lossy operation in computing
unless you actually keep both operands. I don't see why losing
information in decimal is a problem, but losing information with
integers isn't.

Integer division is not losing information.

One just needs to remember that integer division is not
a "normal" division.

Decimal division tries to do a normal division.
Occasionally that would be useful, but mostly I prefer the
consistency.

I look at it differently - I find it inconsistent to use the
same operator for two fundamentally different operations.

Arne
 
Jon Skeet [C# MVP]

Arne Vajhøj said:
I guess I prefer either purely approximate or purely precise.

You're not going to get that with any type.

Of course, decimal, float and double are in many ways just as
"precise" as int. All the numeric types represent exact values, and all
are lossy in some situations. One difference with float/double is that
you can use literals which aren't exact values in that type, and the
literal is converted with a lossy approximation.
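
A small illustration of that last point about literals:

using System;

class LiteralApproximationDemo
{
    static void Main()
    {
        // The double literal 0.1 is silently rounded to the nearest
        // representable binary fraction:
        Console.WriteLine((0.1).ToString("G17"));   // 0.10000000000000001
        Console.WriteLine(0.1 + 0.2 == 0.3);        // False

        // The decimal literal 0.1m denotes exactly one tenth:
        Console.WriteLine(0.1m + 0.2m == 0.3m);     // True
    }
}
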
True, but it can give results that do not follow
accounting practices.

And again, that's likely to be pretty much unavoidable in extreme
cases. I'm reasonably satisfied with decimal - you just need to
understand it and its limitations.
Integer division is not losing information.

Yes it is. Suppose you have two numbers of a type, x and y.

If division does not lose information, then given y and x/y, you can
retrieve x. That doesn't hold for integers (or any other type).
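
A concrete version of that, with small made-up values:

using System;

class IntegerDivisionLossDemo
{
    static void Main()
    {
        int x = 7, y = 2;

        int quotient = x / y;              // 3 - the fractional part is discarded
        Console.WriteLine(quotient * y);   // 6, not 7: x cannot be recovered from y and x/y

        // The discarded piece is exactly what the remainder operator keeps:
        Console.WriteLine(quotient * y + x % y);   // 7
    }
}
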
One just needs to remember that integer division is not
a "normal" division.

So you're redefining division in order to avoid saying that it's lossy
in the case of integers?
Decimal division tries to do a normal division.

I prefer to have one definition for division and information loss, and
make it consistent. The exact way in which information is lost varies,
but the fact of information being lost is the same both ways.
I look at it differently - I find it inconsistent to use the
same operator for two fundamentally different operations.

Whereas I see them as fundamentally the same operation, with different
models of data loss.
 
Arne Vajhøj

Jon said:
Yes it is. Suppose you have two numbers of a type, x and y.

If division does not lose information, then given y and x/y, you can
retrieve x. That doesn't hold for integers (or any other type).


So you're redefining division in order to avoid saying that it's lossy
in the case of integers?

I have not invented the distinction between integer and floating point
division. It was invented before I was born.

And I believe that it is based on math:

Computer integers are a non-lossy implementation of math
integers (except for the limited range) while computer
floating point is a lossy implementation (not just due to
limited range) of math reals.

Integer division is a fundamental operation not just a
lossy division.

Arne
 
Jon Skeet [C# MVP]

Arne Vajhøj said:
I have not invented the distinction between integer and floating point
division. It was invented before I was born.

In that case I think you're redefining lossy :)

More seriously, I'm sure we both understand each other and agree on the
technicalities - but we treat information loss slightly differently.

I view it like the difference between a hash and encryption:

(key, plaintext) => (key, encrypted) under encryption
(key, encrypted) => (key, plaintext) under decryption - no information
loss


(key, plaintext) => (key, hash) under hashing
(key, hash) => ? - there's no inverse of hashing


Now, although there *is* an intuitive inverse of division
(multiplication) it's not a "full" inverse (not trying to claim that as
a technical term!) in that given (x, x / y) you can't get back to y (or
given (x, y / x) you can't get back to y). I think of that as
information loss.
And I believe that it is based on math:

Computer integers are a non-lossy implementation of math
integers (except for the limited range) while computer
floating point is a lossy implementation (not just due to
limited range) of math reals.

Both are subsets of a broader set. I'll certainly agree that the nature
of the subset is easier to understand for integers - but I'd also say
that with floating point (which includes decimal, btw) there's a well-
defined range of values which can be exactly represented. A given bit
pattern represents an exact real value - it's just that, unlike with
the integers, the fact that x < y and both are exactly representable
doesn't mean that all the members of the "larger" set between x and y
are exactly representable.
Integer division is a fundamental operation not just a
lossy division.

It's a fundamental operation which loses information.

Of course, now that you mention the range issue, addition is also lossy
in one way: if I start with x, add 1 a number of times (y) you can't
tell me afterwards the size of y - only the size of y mod 2^32 (or
whatever, depending on the type) :)

I think we may have drifted a little from the original topic by now,
mind you...
 
Arne Vajhøj

Jon said:
In that case I think you're redefining lossy :)

More seriously, I'm sure we both understand each other and agree on the
technicalities - but we treat information loss slightly differently.

And integer math.
Now, although there *is* an intuitive inverse of division
(multiplication) it's not a "full" inverse (not trying to claim that as
a technical term!) in that given (x, x / y) you can't get back to y (or
given (x, y / x) you can't get back to y). I think of that as
information loss.

I see it as integer division and integer multiplication not
being inverses of each other.

Modulus/remainder exist due to that.

But again this is not a difference in understanding of how
it works just a difference in the English terms we use to
label it.
It's a fundamental operation which loses information.

Of course, now that you mention the range issue, addition is also lossy
in one way: if I start with x, add 1 a number of times (y) you can't
tell me afterwards the size of y - only the size of y mod 2^32 (or
whatever, depending on the type) :)

Depends on the checked switch.

:)
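
For illustration, the checked switch in action (overflow checking is off
by default unless the project enables it):

using System;

class CheckedSwitchDemo
{
    static void Main()
    {
        int max = int.MaxValue;

        // In an unchecked context the addition wraps modulo 2^32:
        Console.WriteLine(unchecked(max + 1));   // -2147483648

        // In a checked context the same addition throws:
        try
        {
            Console.WriteLine(checked(max + 1));
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException");
        }
    }
}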

But yes, this is a problem for all fixed-size data types trying
to represent an infinite math set.

BTW, I believe that C# will be the last major language to use
fixed-size integers. I expect the languages invented in the
next decade to hide that type of implementation detail from
the programmers.
I think we may have drifted a little from the original topic by now,
mind you...

A lot. But it is not the first time that has happened on Usenet.

Arne
 
Jon Skeet [C# MVP]

Arne Vajhøj said:
And integer math.

Fair enough :)
I see it as integer division and integer multiplication not
being inverses of each other.

Modulus/remainder exist due to that.

Right - so integer division has no inverse operation, which makes it a
lossy operation in my view. But it's fine for us to disagree on that.
But again this is not a difference in understanding of how
it works just a difference in the English terms we use to
label it.

Yup, fair enough.
Depends on the checked switch.

:)

Nice, had forgotten that.
But yes, this is a problem for all fixed-size data types trying
to represent an infinite math set.

BTW, I believe that C# will be the last major language to use
fixed-size integers. I expect the languages invented in the
next decade to hide that type of implementation detail from
the programmers.

I wouldn't go that far. I think C# *may* be the last major language not
to have an "arbitrary length" integer type (such as BigInteger in Java)
with built-in language support. I think we'll be using fixed size
integers for many things for a long time though.
A lot. But it is not the first time that has happened on Usenet.

I'm shocked! ;)
 
