It has been discussed many times over the years in the
math & science groups that the proper interpretation of -10^2 is -100.
Almost all real math programs that I know of return -10^2 as -100.
In the first "real math program"ming language, the IBM Mathematical
FORmula TRANslating System (FORTRAN), exponentiation takes precedence
over unary minus, so -10**2 means -(10**2).
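Most modern languages follow the same convention; Python, for one
(an illustration of mine, standing in for those "real math
programs"):

    print(-10**2)    # -100; ** binds tighter than unary minus
    print((-10)**2)  # 100; parentheses force the other reading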
I think one of the main problems is that Excel's interpreter can only
read left to right. It is not sophisticated enough to read right to left.
Precedence is not determined by the order in which an expression is
"read" (parsed). Operators associate either left or right, and any
parser can handle either. For example, Excel has no problem
interpreting 1 + 2 * 3 as 1 + (2*3). Likewise, Excel could just as
easily interpret -10^2 as -(10^2) if its designers had wanted it to.
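To make that concrete, here is a minimal precedence-climbing
evaluator -- my own sketch, not Excel's actual code, with invented
names like UNARY_MINUS_PREC. A single table entry decides whether
-10^2 comes out as -(10^2) or (-10)^2; the parser reads left to
right in both cases:

    import re

    UNARY_MINUS_PREC = 2     # math/FORTRAN convention: looser than ^
    # UNARY_MINUS_PREC = 4   # Excel convention: tighter than ^

    BINARY_PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}

    def tokenize(s):
        return re.findall(r'\d+|[-+*/^()]', s)

    def parse(tokens, min_prec=0):
        tok = tokens.pop(0)
        if tok == '-':                    # unary minus
            value = -parse(tokens, UNARY_MINUS_PREC)
        elif tok == '(':
            value = parse(tokens)
            tokens.pop(0)                 # discard the ')'
        else:
            value = int(tok)
        while tokens and tokens[0] in BINARY_PREC:
            prec = BINARY_PREC[tokens[0]]
            if prec < min_prec:
                break
            op = tokens.pop(0)
            # '^' is right-associative, so its right side re-enters
            # at the same level; the others re-enter one level up.
            rhs = parse(tokens, prec if op == '^' else prec + 1)
            if   op == '+': value += rhs
            elif op == '-': value -= rhs
            elif op == '*': value *= rhs
            elif op == '/': value //= rhs   # integer division, for brevity
            else:           value **= rhs
        return value

    print(parse(tokenize('-10^2')))   # -100 (100 with the Excel setting)
    print(parse(tokenize('1+2*3')))   # 7 either way
    print(parse(tokenize('2^3^4')))   # 2417851639229258349412352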
It is simply the choice of the MS Excel designers that unary minus has
higher precedence than exponentiation. Actually, it was probably the
designers of VisiCalc who made that decision; the designers of Lotus
1-2-3 followed suit for compatibility reasons; and the Excel designers
followed Lotus for the same reason.
Excel's Help system doesn't help much when it says
- Negation (as in -1)
Negation is a term used with logical values, i.e. NOT(TRUE) = FALSE.
The term is also used in binary to flip a bit between 1 and 0.
Don't confuse a tech writer's use of terminology with the canonical
use. In the good ol' days, computer tech writers were experts in the
disciplines that they wrote about. Nowadays, the tech writer is
usually an English major who often has no in-depth knowledge of --
sometimes not even much experience with -- the subject. I have
first-hand knowledge of that fact.
I do not believe that "negation" or "complement" is used exclusively
in one discipline or the other. I don't know of any engineer who
describes the signal x-bar (that is, x with a line over it) as "the
negation of x"; it is the complement (or inverse) of x. On the
flip side, mathematicians universally refer to -x as "negative x". So
I can understand why someone might refer to the "-" symbol in that
context as "negation".
On the other hand, I would never refer to the binary "-" and "+" as
"subtraction" and "addition". Those are the operations they perform;
the symbols themselves are "minus" and "plus". Similarly, I call the
unary "-" simply "unary minus".
For example, suppose you were following a math book and entered
=2^3^4.
We know that Excel can only read left to right, and so it returns the
wrong answer of 4096.
[....]
However, most real math programs will read this right to left, and
return the mathematically correct answer:
2^3^4 = 2^(3^4) = 2^81 = 2,417,851,639,229,258,349,412,352
In a math text, you would find 2 with a superscript 3 that in turn
carries a superscript 4; the nesting of the superscripts makes the
grouping unambiguous. If you transcribed that as =2^3^4 instead of
=2^(3^4), the error would be yours, not Excel's.
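Both readings are easy to verify in, for example, Python, where ^ is
spelled ** and is right-associative like the math-text convention
(the snippet is mine, for illustration):

    print(2**3**4)     # 2417851639229258349412352 = 2**(3**4) = 2**81
    print((2**3)**4)   # 4096, the left-to-right reading Excel uses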
It is the same with any translation between two languages -- say
English and Chinese. If the translated text does not match the
intended meaning of the original text, the error is in the
translation. No one would say that one or the other language is
"wrong".
What you do not seem to grasp is the difference between ambiguous and
unambiguous representations. Computer-language representations of
mathematical formulas -- at least the kind we are discussing here --
are inherently ambiguous without an agreed-upon set of precedence and
associativity rules. We rely on those rules and on special syntax
(e.g. parentheses) to resolve the ambiguities. No one set of rules is
right or wrong.
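That is also the practical moral: when a formula must mean the same
thing everywhere, spell out the grouping yourself. For example,
Excel's =-(10^2) and =(-10)^2 match the Python expressions below
exactly, because parentheses override any default precedence (the
Python rendering is mine, for illustration):

    print(-(10**2))   # -100; Excel's =-(10^2) returns the same
    print((-10)**2)   # 100; Excel's =(-10)^2 returns the same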