(e-mail address removed) wrote...
> Yes, I do expect you to believe that.
??!
> SQL Server on commodity hardware is always a better price-performance
> proposition than being tied into one mainframe vendor.
MySQL or PostgreSQL running under Linux or [Free/Net]BSD on commodity
hardware is an even better price-performance proposition, which is why
that combination dominates on the web. You're also ignoring (or
ignorant of) sunk costs. Companies that already have mainframes (owned
or leased) and a stock of tested mainframe software would find the
price-performance of sticking with existing procedures much better
than change for change's sake.
> I'm sorry that IBM wants you to change the OS every other year; I
> think that is kind of comical.
Change the OS? The mainframes I have worked with and still work with
have all run MVS (I also had a brief stint using CMS under VM). Are you
confusing MVS version updates with changing OSs? And Microsoft doesn't
expect the same thing to happen each time they release a new Windows
version? Or are you referring to mainframe versions of Linux, which are
almost always run under VM (which has been around for over two decades)
along with MVS and even DOS/360 for some real legacy stuff? If so,
Linux on mainframes is a replacement, and under VM runs alongside
preexisting OSs.
So that adds proof of your ignorance of mainframes to your demonstrated
ignorance of supercomputers and clusters. Well done!
> And, no... I analyzed billions of records with subsecond response
> times using this technology called OLAP. Record count becomes
> irrelevant once you start storing aggregations in multi-dimensional
> structures.
...
No it doesn't. You may have many CPUs, but each of those CPUs can move
data no faster than its clock allows. If you have 4 CPUs running in
parallel, each with a 2GHz clock, the fastest you could run through
1 billion records of 100 bytes each would still be over three seconds.
That assumes no OS overhead, no parallel processing synchronization
overhead, and throughput of a full 32-bit word per clock cycle. No way!
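To put numbers on that, here's a back-of-the-envelope sketch in Python,
using only the idealized figures above:

# Lower bound on a full scan under the idealized assumptions above:
# 4 CPUs at 2 GHz, one 32-bit (4-byte) word moved per clock cycle,
# zero OS, synchronization, or memory-hierarchy overhead.
cpus = 4
clock_hz = 2e9                 # 2 GHz per CPU
bytes_per_cycle = 4            # one 32-bit word per cycle (idealized)
records = 1_000_000_000
record_bytes = 100

total_bytes = records * record_bytes            # 100 GB
throughput = cpus * clock_hz * bytes_per_cycle  # 32 GB/s, idealized
print(total_bytes / throughput, "seconds, best case")   # ~3.1 s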
What would almost certainly be going on is caching of intermediate
results and indexing to skip unneeded records. Definite benefits to
that, but such benefits could be achieved by other means as well. But
the speeds you're reporting are *not* the result of querying the
underlying database.
That's the way Essbase and Applix TM/1 work, and (FWLIW) the way VP
Planner's 'Multidimensional Database' worked back in the late 1980s.
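Subsecond answers over "billions of records" are consistent with
reading precomputed aggregates, not with scanning the records. Here's a
minimal sketch of the idea in Python; the dimension and measure names
are made up for illustration and say nothing about how Essbase or TM/1
actually implement it:

from collections import defaultdict
from itertools import combinations

# Minimal sketch of OLAP-style pre-aggregation: measures are rolled up
# once, at load time, for every combination of dimensions; a query then
# reads a precomputed cell instead of scanning the underlying records.
facts = [
    # (region, product, year, sales) -- invented example data
    ("EMEA", "widgets", 2003, 120.0),
    ("EMEA", "gadgets", 2003,  75.0),
    ("APAC", "widgets", 2004, 210.0),
]

DIMS = ("region", "product", "year")
cube = defaultdict(float)

for region, product, year, sales in facts:
    values = {"region": region, "product": product, "year": year}
    # Roll the measure up into every subset of the dimensions,
    # including the empty subset (the grand total).
    for r in range(len(DIMS) + 1):
        for subset in combinations(DIMS, r):
            key = tuple((d, values[d]) for d in subset)
            cube[key] += sales

# A "query" is now a dictionary lookup, independent of the record count.
print(cube[(("region", "EMEA"),)])                 # 195.0
print(cube[(("region", "APAC"), ("year", 2004))])  # 210.0
print(cube[()])                                    # 405.0 (grand total)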
> I just know that most of these accounting and finance problems that
> you think are _SO_COMPLEX_ are easily solvable using industry-standard
> databases.
Really? How do you use a database to hedge a commercial bond portfolio
against liquidity and call risk using interest rate derivatives? How do
you use databases to calculate risk-based capital for banks or
insurance companies requiring estimates of confidence intervals of
statistical distributions? How do you use a database to decide between
alternative aircraft designs using different types of jet engines with
different thrust and fuel consumption characteristics? How do you use a
database to construct the optimal course schedule for a university for
a given term, where optimality is measured as the greatest number
of students able to take the greatest number of their first choice
courses given class size and/or classroom seating constraints?
There really are real world problems that require more than counting
and totalling. No question databases would often be the ideal storage
subsystems for such applications, but most of the real work would need
to be done by procedural or functional code. Not necessarily
spreadsheets, but neither via SQL nor OLAP.
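To pick one example from the list above, the risk-based capital
question comes down to estimating a high percentile of a loss
distribution, which is simulation loops and arithmetic, not a SQL
aggregate. A toy Python sketch (the loss model and every parameter in
it are invented purely for illustration):

import random

# Toy Monte Carlo estimate of a high percentile of an aggregate loss
# distribution -- the kind of confidence-interval work a risk-based
# capital calculation needs.
random.seed(42)

def simulate_annual_loss():
    # Hypothetical book: binomial claim count, lognormal severities.
    n_claims = sum(1 for _ in range(200) if random.random() < 0.05)
    return sum(random.lognormvariate(8.0, 1.2) for _ in range(n_claims))

losses = sorted(simulate_annual_loss() for _ in range(10_000))
var_99 = losses[int(0.99 * len(losses))]   # empirical 99th percentile
print(f"Estimated 99th-percentile annual loss: {var_99:,.0f}")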
> And everyone would be better off to stop using Excel and to start
> using Access.
For database-like tasks, agreed as long as they have Access. In case
you've missed this point in previous messages, not every version of
Office comes with Access, but all come with Excel. One can only use the
tools one has.
For decidedly non-database tasks, it'd be pure foolishness to use
Access. Which is why you'd use it.
> I know that the RDBMS solutions are a better way to analyze records
> and build reports than using Microsoft Excel.
Granted. However, to repeat yet again, not everything done in Excel
involves generating reports from records. It may be theoretically
possible to bludgeon any application's data into normalized tables and
create data structures from SQL queries, but it makes no more sense to
do so for most non-report spreadsheets than it would to use an RDBMS to
implement a bitmap drawing package or compressed file archiver.
> You don't need to try to slam my experiences... You're the one that
> is obsolete, kid.
So far you've demonstrated that you don't know
1. what an amortization table is,
2. what calculation-intensive software (what's run on supercomputers
and clusters) is,
3. how to structure currency conversion rate tables,
4. anything about financial services businesses,
5. what MDX really provides (as opposed to what you believe it
provides) compared to Excel,
6. any statistics beyond what an average is,
7. any DBMS other than SQL Server under Windows,
8. any real time systems,
9. details of the history of the dot.com bubble and bust,
10. what people outside the IT departments really do beyond generating
regular, repetitive reports.
Maybe you have business experience beyond writing database reporting
apps that any other moderately competent DB programmer could have
written, but you haven't demonstrated any subject knowledge, not even
any detailed SQL knowledge (your pathetic attempt at a rough
description of how you'd put together an amortization table in a DBMS
was particularly indicative of the shallowness and narrowness of your
capabilities).
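Since the amortization table keeps coming up: the reason it makes a
poor fit for a purely set-based description is that each period's
interest depends on the running balance left by the previous period, so
it is naturally an iterative, row-by-row calculation. A minimal Python
sketch with made-up loan terms:

# Each row depends on the balance carried over from the previous row.
principal = 200_000.00
annual_rate = 0.06
n_payments = 360                    # 30 years, monthly

r = annual_rate / 12
payment = principal * r / (1 - (1 + r) ** -n_payments)

balance = principal
for period in range(1, n_payments + 1):
    interest = balance * r
    principal_part = payment - interest
    balance -= principal_part
    if period <= 3 or period == n_payments:
        print(f"{period:3d}  payment={payment:10.2f}  "
              f"interest={interest:9.2f}  principal={principal_part:9.2f}  "
              f"balance={balance:12.2f}")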