Excel's basic arithmetic, trig functions, etc. appear to be handled by the
CPU's floating-point hardware, as you would expect; anything else would have
meant MS re-inventing the wheel.
As such, the math is no better or worse than any other software package
(excepting arbitrary precision packages like Maple and Mathematica). The
internal representation of numbers follows the IEEE 754 standard for double
precision binary storage; again no better or worse than any other standard
program.
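To see what IEEE 754 double precision means in practice, here is a small
Python sketch (Python doubles follow the same standard, so the behavior
matches what Excel stores internally, even though this isn't Excel code):

```python
import sys

# 0.1 and 0.2 have no exact binary representation, so this classic
# result appears in Excel, Python, C, and every other IEEE 754
# double-precision package alike.
a = 0.1 + 0.2
print(a == 0.3)   # False
print(repr(a))    # 0.30000000000000004

# A double reliably carries about 15 significant decimal digits.
print(sys.float_info.dig)  # 15
```

This is not a flaw in any one program; it is the shared behavior of the
standard binary storage format.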
Where calculations are not available in CPU hardware (probability,
statistics, engineering functions [like the Bessel and erf functions], etc.),
the picture changes. MS made a number of poor algorithm choices.
Prior to 2003, probability functions were generally reliable to about 6
decimal places (not 6 significant figures), provided they returned a value at
all. For simple statistical tests, that is usually adequate, and better than
you would find in most printed tables. The real criticism was that the
working range was so limited that you often couldn't get results for real
world problems, although it was fine for textbook examples. In Excel 2003,
MS "improved" their algorithms for discrete probability distributions, with
the result that they now have a much larger range where they return values,
but the value that they return may be totally wrong! For probability
calculations, download Ian Smith's library of VBA functions from
http://members.aol.com/iandjmsmith/examples.xls
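To illustrate the kind of working-range problem described above (a Python
sketch of the general numerical issue, not Excel's actual code): computing an
upper-tail normal probability as 1 - CDF underflows to zero long before the
probability itself is zero, while a complementary function keeps it.

```python
import math

def upper_tail_naive(x):
    # P(X > x) computed as 1 - CDF(x): once the tail probability falls
    # below machine epsilon (~1.1e-16), the CDF rounds to exactly 1.0
    # and the subtraction returns 0.
    cdf = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return 1.0 - cdf

def upper_tail_stable(x):
    # The same quantity via the complementary error function, which
    # stays accurate far into the tail.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(upper_tail_naive(10))   # 0.0 -- the information is already gone
print(upper_tail_stable(10))  # ~7.6e-24
```

A function built the first way has a working range of roughly x < 8; the
second way works to the limits of double precision. That difference in
algorithm choice, not the hardware, is what limits real-world use.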
Excel's engineering functions have modest accuracy and severe limitations on
their working range. Again, they may be adequate for (some) textbook
examples, but not for real-world applications.
Statistical calculations prior to 2003 were mathematically correct but
numerically unstable, meaning that they could fail for numerically difficult
problems. Excel 2003 uses algorithms that are quite good, with the exception
of a bug in LINEST such that coefficient estimates that are exactly zero are
not to be trusted (fixed in 2007). Oddly, the polynomial trendline for XY
(Scatter) charts has always used an excellent algorithm, comparable in
accuracy to mainstream statistics packages (and in some surprising cases much
better). However, Excel is not (and likely never will be) a statistics
package; if you need to do serious statistical work (or even a lot of simple
statistical work) and you don't have SAS, JMP, or another mainstream
statistics package, then you may be better served to download and learn R:
www.r-project.org
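"Mathematically correct but numerically unstable" is worth a concrete
sketch. The classic example (in Python here; the one-pass formula is
reportedly what pre-2003 Excel's variance functions used, though that is an
assumption on my part) is the one-pass "calculator" variance formula, which
subtracts two huge nearly-equal numbers and loses everything to
cancellation:

```python
def var_naive(xs):
    # One-pass "calculator" formula: (sum of x^2 - (sum of x)^2 / n) / (n-1).
    # Algebraically correct, but for data with a large mean relative to
    # its spread, the two terms agree in nearly all their digits and the
    # subtraction destroys the result (it can even go negative).
    n = len(xs)
    s = sum(xs)
    ss = sum(x * x for x in xs)
    return (ss - s * s / n) / (n - 1)

def var_twopass(xs):
    # Stable two-pass formula: subtract the mean first, then square.
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [1e9 + d for d in (4.0, 7.0, 13.0, 16.0)]
print(var_naive(data))    # nowhere near the true value
print(var_twopass(data))  # 30.0 (exact)
```

Both formulas are "correct" on paper; only one survives finite precision.
That is the distinction between a mathematical error and a numerical one.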
The biggest question with Excel, like any software package, is how you know
that you have actually asked it to do what you intended (validation).
Several papers have been published about the rate of user errors in
spreadsheets. It is not clear to me that the problem is inherently greater
for spreadsheets; it is just that they are so easy to use that they have many
users who have never considered the issue.
Jerry
Clarkinson said:
Hi,
We had this guy in the office today making out that it was easy to find
faults in Computer software.
He claimed it was easy to show and prove that Excel calculates erroneously.
He didn't specify how or under what circumstances, and didn't have time to
do a demonstration.
Anyone else heard of this?
I checked "Excel makes mistakes" on Google and found a few articles dated
2003 dealing with a statistical covariance problem.
Apart from that, nothing much. Is there something I've been missing?
TIA,
EJC