WHY

  • Thread starter: Guest
RE:

If you mean reports, then I don't disagree that databases could be more
efficient in some instances. However, there's a question of how many
calculations (not the simple database sort of accumulations - SUM,
COUNT, AVERAGE, etc. - but rather more complex ones like the a-th
percentile of a statistical distribution with parameters based on
sample data) are needed to produce the results of interest.
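For concreteness, the kind of calculation meant here can be sketched with Python's standard library: estimate distribution parameters from sample data, then take a percentile of the fitted distribution. The sample values and the choice of a normal distribution are purely illustrative.

```python
from statistics import NormalDist, mean, stdev

# Illustrative sample data (not from the discussion)
sample = [10.2, 11.5, 9.8, 12.1, 10.9, 11.3, 10.0, 11.8]

# Estimate parameters from the sample, then query the fitted distribution
dist = NormalDist(mu=mean(sample), sigma=stdev(sample))

# The a-th percentile for, e.g., a = 0.95
a = 0.95
p95 = dist.inv_cdf(a)
```

This is the sort of two-step calculation (fit, then evaluate) that plain SQL aggregates don't provide out of the box.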


This type of person should be locked up in jail for being egocentric.

SQL Server has the ability to make user defined functions.

No, really?!

Most useful software possesses the attributes of modularization,
programmable extension and the ad hoc ability to run other software.

That said, while one could write a UDF for SQL RDBMSs to calculate, say,
estimators and standard errors for linear models, it'd be unpleasant to do so.
That sort of thing is built into modern spreadsheets but not databases.

Now one could use a database as the storage backend for a stats package, but
it'd be the stats package that does the real work, and only after it loads
the data into its own data structures. I agree that databases are very good
storage and retrieval subsystems, but I remain unconvinced that the forms
and reporting tools provided by most databases are vastly and uniformly
superior to the alternative provided by spreadsheets (when properly
designed). But calculations using UDFs lead to UI/front-end vs
processing/back-end interface issues.

Consider something as simple as the currency conversion rate table discussed
before. If there were no conversion fees reflected in the conversion rates,
then one would expect, e.g., the conversion rate from US dollars to UK
pounds to be the reciprocal of UK pounds to US dollars. So if the foreign
exchange markets were perfectly efficient, the matrix of conversion rates
should be characterized by the entries in its lower triangle being the
reciprocals of the corresponding entries in its upper triangle. In Excel
terms,

=AND(ABS(TRANSPOSE(ConvRates)-1/ConvRates)<1E-12)

where ConvRates is the N-by-N matrix of conversion rates for N currencies,
with unity along the main diagonal.
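For readers outside Excel, the same reciprocity test can be sketched in Python over an ordinary list-of-lists matrix (the three-currency rates below are hypothetical):

```python
# Hypothetical conversion-rate matrix for (USD, GBP, EUR);
# rates[i][j] converts currency i into currency j, unity on the diagonal.
rates = [
    [1.0,     0.80,    0.90],
    [1.25,    1.0,     1.125],
    [1/0.90,  1/1.125, 1.0],
]

def rates_consistent(m, tol=1e-12):
    """True if every entry is the reciprocal of its transposed entry,
    the same test as the AND/ABS/TRANSPOSE array formula above."""
    n = len(m)
    return all(abs(m[i][j] - 1 / m[j][i]) < tol
               for i in range(n) for j in range(n))
```

Here `rates_consistent(rates)` returns True; perturbing any single off-diagonal entry makes it False.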

This can be done with an inner join of the table with itself, swapping the
country fields in the second reference to the table. Something like

SELECT (Count(*) = 0) AS RatesConsistent
FROM CurrTbl AS CT1 INNER JOIN CurrTbl AS CT2
ON (CT1.ToCountry = CT2.FromCountry) AND (CT1.FromCountry = CT2.ToCountry)
WHERE Abs(CT1.ConvRate - 1/CT2.ConvRate) >= 1E-12;
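As a hedged, runnable illustration (using an in-memory SQLite table rather than SQL Server or Access, with made-up rates), one can count the pairs whose rates fail the reciprocal test; zero violations means the table is consistent:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CurrTbl (FromCountry TEXT, ToCountry TEXT, ConvRate REAL)")
# Hypothetical, mutually reciprocal rates
pairs = [("USD", "GBP", 0.80), ("GBP", "USD", 1.25),
         ("USD", "EUR", 0.90), ("EUR", "USD", 1/0.90)]
con.executemany("INSERT INTO CurrTbl VALUES (?, ?, ?)", pairs)

# Self-join swapping the country columns, then count pairs whose rates
# are NOT reciprocals (within tolerance). Note a row whose reverse pair
# is missing simply fails to join and so goes untested.
violations = con.execute("""
    SELECT COUNT(*)
    FROM CurrTbl AS CT1
    JOIN CurrTbl AS CT2
      ON CT1.ToCountry = CT2.FromCountry
     AND CT1.FromCountry = CT2.ToCountry
    WHERE ABS(CT1.ConvRate - 1.0 / CT2.ConvRate) >= 1e-12
""").fetchone()[0]
```

Counting violations (rather than matches) is what makes the test meaningful: a single consistent pair shouldn't vouch for the whole table.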

Does the latter really look clearer to you?
Anything that you can do in Excel; I can do in either MDX or SQL. And I'll
do it once; and make a couple of DTS packages; and I am done-- I don't need
to come back and type stuff into a spreadsheet in order to make a new
report..

Further to the example above, I could parametrize the table name in Excel,
so the formula would become

=AND(ABS(TRANSPOSE(INDIRECT(CTN))-1/INDIRECT(CTN))<1E-12)

How do you do that in SQL without resorting to some metalanguage or creating
temporary tables using common, reserved names?
I tell you this-- if spreadsheets were all-powerful, you know those
supercomputers that they have?? They would be running Excel.

Who wastes money running databases on supercomputers or networked clusters?
No one in their right mind. They run hand-crafted FORTRAN or C code if they
want to get anywhere past single-digit gigaflops. Do you believe such
programs rely on realtime database feeds? No way. They use cached,
multiplexed data pipelines. And they don't waste runtime writing results to
databases but to many synchronized output streams. Possibly databases
populate the inputs and store the outputs eventually, but this is just
another example of databases being the backend storage subsystem. A useful
supporting role to be sure, but hardly center stage.
If Excel was really the best solution-- people would have Clusters of
spreadsheets. Excel 2005 Cluster Edition...

No more than they'd have clusters of DBMSs. Neither is the right tool for
the task. And no one uses supercomputers or clusters for generating reports.

Time for your next straw man.
explicit and reproducible.
They're much less so in other systems, including DBMSs.

They're reproducible because you CUT AND PASTE THE FORMULAS INTO 1,000
DIFFERENT CELLS.

Yup. No one said audit trails are storage efficient or free from unintended
screw-ups. Rather, it's easier to locate such screw-ups.
I CAN DO THE SAME THING ON THE DATABASE SIDE--

No you can't. The source tables, the definitions of the views or the queries
could be modified. If a user calls the same named stored procedure in March
and April, that user has no guarantee other than the word of their database
admin that nothing has changed other than the addition of data from the
month of April.

Historically this has been addressed in mainframe reporting by including
checksums, tape volume IDs, record counts and other stats derived from
inputs along with full JCL and key procedure listings in printouts. To
repeat, no one said audit trails were storage efficient.
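That mainframe-era practice is easy to reproduce today; a minimal sketch that derives a record count and checksum from a report's input, so a later run can demonstrate the input was unchanged (the data lines and filename are illustrative):

```python
import hashlib

def audit_stamp(lines):
    """Record count plus SHA-256 digest of an input, for inclusion in a
    report's audit trail alongside the report itself."""
    h = hashlib.sha256()
    count = 0
    for line in lines:
        h.update(line.encode("utf-8"))
        count += 1
    return count, h.hexdigest()

# Illustrative input; in practice, the extract feeding the report.
march = ["2005-03-01,ACME,100.00", "2005-03-02,ACME,250.00"]
count, digest = audit_stamp(march)
```

Re-running `audit_stamp` over the archived input must reproduce the stamped (count, digest) exactly; any change to the input changes the digest.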
BUT I CHOOSE TO KEEP MY BUSINESS LOGIC IN ONE PLACE-- SO THAT IF I NEED TO
CHANGE A REPORT: I CHANGE IT IN ONE PLACE.
....

It's the need to prove there have been no changes in that business logic
between report runs that's the nasty problem. It's actually pretty easy to
show the formulas in two different workbooks are identical or substantially
similar. As long as printouts (which could be text files) of business logic
are included in the master copies of reports, there's a true audit trail.

As for change in one place, that is a definite advantage of databases.
However, using standard templates as the basis for spreadsheet reports also
provides centralized change management.

In spreadsheets the key is to separate storage of user inputs from
calculations. [IMO, this is much easier in Lotus 123 and Quattro Pro than
Excel.] Then the only thing that would need to be stored would be the user
inputs. All the 'business logic' (dare I call it formulas and macros?) would
remain in a separate, centrally stored and maintained workbook.
IT IS PHYSICALLY AND PRACTICALLY IMPOSSIBLE TO GET DATA OUT OF
SPREADSHEETS.

If you don't know how to do something, then it'll seem impossible.

Getting information out of spreadsheets does require knowing the workbook's
filename and the worksheet ranges in it where that information is located.
If no proper documentation of filenames or worksheet/range addresses exists,
it can take considerable effort to locate the information. But if we're
talking about standard reports, it's easy as long as a consistent layout has
been maintained (or better still, common defined names have been used to
identify the information of interest).
I SHOULD KNOW. I HAVE WRITTEN DOZENS OF APPLICATIONS THAT CATALOG
SPREADSHEETS-- BECAUSE WINDOWS INDEXING SERVICE SUX-- I BUILD A DATABASE
WHERE I DO A FULLTEXT INDEX ON HUNDREDS, IF NOT THOUSANDS OF SPREADSHEETS.

Windows indexing is an OS feature, no? Microsoft is still to blame, but not
the Excel or Office developers.

You'd be better off using Perl to create such indices. Of course that
assumes you have the capability of learning anything other than SQL. An
associative array whose entries are themselves references to associative
arrays is a very powerful data structure for generalized text search and
retrieval.
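The Perl idiom described here (a hash of hashes) carries over directly to a dict of dicts in Python; a minimal inverted-index sketch, with hypothetical workbook names and contents:

```python
from collections import defaultdict

# index[word][filename] -> occurrence count: a dict whose values are
# themselves dicts, the Python analogue of Perl's hash-of-hashes.
index = defaultdict(lambda: defaultdict(int))

def add_document(name, text):
    # Naive whitespace tokenization; a real cataloguer would first
    # extract cell text from each workbook.
    for word in text.lower().split():
        index[word][name] += 1

# Hypothetical workbook names and contents
add_document("budget.xls", "revenue forecast revenue actuals")
add_document("sales.xls", "revenue by region")
```

After indexing, `index["revenue"]` maps each workbook containing the word to its occurrence count.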
AS IT IS; IT IS A TOTAL WASTE OF TIME THAT 80% OF YOU RECREATE THE SAME
REPORT BY HAND EVERY MONTH.

Maybe you and/or people in the company for which you work do so, but that
doesn't mean everyone does.
MDX IS GOING TO EAT YOU ALIVE, KIDS

I'm shaking in my shoes, soiling my pants, won't sleep ever again.
 
NoNoBadDog! said:
I would suggest that the mods block this uneducated, snot nosed idiot from
the board. . . .
....

This is an unmoderated newsgroup. Ain't free speech great?!

Also, one must endeavor to be patient with children.
 
NoNoBadDog! said:
If the mods don't ban you,

This is a (mostly) unmoderated newsgroup. MS does have some filtering,
obviously, on its own servers, but this appears to be for spam purposes.
I can only hope you suffer a stroke that makes it impossible for you
to use a keyboard.

Frankly, to me, that statement is far more offensive than anything
Aaron's written. It warrants putting you on my watch list, one step from
my own killfile.

Aaron is a silly young zealot with a zealot's perspective, that's all.
Why should any proposed "moderators" block him? Your killfile is a good
solution, but I wouldn't want anyone else deciding to a greater extent
which content is acceptable for me. Would I risk being blocked when I
tell a poster that s/he'd be better off using a database application? (I
doubt I could recommend Access, but I work with more robust RDBMSs
frequently.)

What have Aaron's tantrums done to hurt you or anyone else? They've
provided Harlan with some finger exercise, and those that read the
thread a chance to learn or rethink the issues he raises.

Aaron becomes his cause's own worst enemy by ranting rather than giving
rational arguments and examples, and he's blind to real-world situations
outside his domain, but he has at least some valid points.
In the years that I have browsed these groups, I have seen some pretty sorry
excuses for a carbon based life form, but you are by far the most ignorant,
self righteous and immature I have ever seen.

Surely this is hyperbole? There've been a number of other trolls who
would compete for that honor in my book...
Grow up, realize that others may and most likely do have opinions counter to
your own, and then go away.

I, for one, hope he stays, if he can get his self-righteousness under
control and listen to others' perspectives (and comes to understand that
not everybody uses XL just to create the same report every month). I
would love to see his SQL amortization application...
 
Hi J.E.
[...]
I, for one, hope he stays, if he can get his self-righteousness under
control and listen to others' perspectives (and comes to understand
that not everybody uses XL just to create the same report every
month). I would love to see his SQL amortization application...

me too <vbg>
In this case I like Harlan's posting: Lots of interesting stuff to
read/re-think.

Frank
 
Also, one must endeavor to be patient with children.
Harlan, I didn't know this was one of your traits <G>
 
hi sir
so can we get Microsoft to make it so that when you launch Access; it will
automagically start a new database; and then it makes you save it when you
hit save?

make it more similar to Excel and Word?

Just a lil bit of sleight-of-hand would make this possible; I could probably
edit one of the MDW wizards and get this done in about 10 minutes

and then our precious beancounters could start typing right when they open
Access?

it would save them time-- it should at least be an option.

I create a new MDB or ADP probably 10 times per day anyways.. so it would
probably save me some time also

-Aaron
 
the only thing that is 'built into a spreadsheet' is the ability to flush
hours down the toilet.

If you're too narrow-minded to consider that doing _SOME_ of the work on the
database side makes sense.. then uh.. go ahead and write spreadsheets for
the rest of your life.

The thing that will please you nasty beancounters the most-- if you want to
push data into a spreadsheet; Access supports this functionality. Once
you get data into a database, it is EASY to pull it into Excel.

Access is a 2 way street; SQL is a 2 way street.

Importing data out of a spreadsheet is BROKEN, DOESN'T WORK AND IT NEVER
HAS. It is impossible to have adequate field validation in Excel... by
definition; it is impossible. In Access, there are input masks and all
sorts of tools that make it easy for a 3rd grader to have accurate,
consistent data.

You spend all of your time typing numbers from Excel; and you have this
house of cards-- formulas on top of formulas.. there ISN'T a decent way to
check for broken formulas.. (unless you like printing out the formulas and
LOOKING for them GAG)

Microsoft just doesn't test it or something.

And about perl, shove it up your a$$; VBA is the _ONLY_ language in the
world. If you want to index data; store it in SQL-- if it is too slow; then
give it an OLAP interface. It is simple simple simple stuff if you knew
anything about databases.

Do you really think that this:
"=AND(ABS(TRANSPOSE(ConvRates)-1/ConvRates)<1E-12)" is intuitive?

I think that the SQL Statement is perfectly intuitive.. but after all of
your bitching you guys still don't understand that I could duplicate the
storage of your matrix in a db and it would be this:

select USD from currencyconvertor where source = 'GBP'

I can use the exact same format you use; and it is twice as easy to get to
this data.

get out of your cubicle and get into the real world kid

All I know is that companies-- AS A WHOLE-- need to hire about 2x as many
database people as they currently do.. and then need to fire about half of
their beancounters.

Being able to automate beancounters is what the 'pc revolution' is all
about.

You guys are targets; in an ideal world-- we would have automated your job
already.. but the sad thing is that there is this DISEASE called EXCEL and
people think that it is ACCEPTABLE to print the same report by hand every
month.

Grow up and welcome to the future



 
people don't have to be comfortable using databases; I make a form in 5
minutes with a button.

you press the button, the report gets printed.

it's all about saving money by allowing companies to run the same report
over and over again without having to call in a 'Spreadsheet Expert'
whenever something goes wrong
 
yes, programmers are a waste of time.

90% of VB projects involve pulling information out of a DB; or pushing it
into a DB.

Most programming languages out there are obsolete.. C++ or C# or Perl-- give
me a break.

www.sqlmag.com has a new report that says that like 80% of the top database
people in the country use VB or VBA or VB.net.

it's like a dead issue. VB won the war.

And Access Data Projects are the best platform for VB-SQL development.

It is 3x easier to write something in ADP/MDB than it is in VB; or any other
language.
 
I've been working a lot longer than that LoL

I just think that it is funny that all of these people get sidetracked..
developing 'solutions' in Excel.

It's just absolutely comical to me..

re-creating the same spreadsheet, week in and week out..

templates don't help-- what happens when they change??

do you have to go back and 'untemplate' everything you do and push it into a
new template?

LoL
 
Aaron
Importing data out of a spreadsheet is BROKEN, DOESN'T WORK AND IT NEVER
HAS.

Surely you import in and export out?

Anyhow, how is it broken? The number of times I link Excel tables into
Access each week and run append, select, delete queries is immeasurable.
After that I run the data into Excel using ODBC to form an intuitive pivot
table, chart, etc.

When will you realise that interoperability and 'right tool for the job' is
key, not a blind hatred for one or the other?

--
HTH
Nick Hodge
Microsoft MVP - Excel
Southampton, England
(e-mail address removed)


the only thing that is 'built into a spreadsheet' is the ability to flush
hours down the toilet.

If you're too narrow-minded to consider that doign _SOME_ of the work on
the
database side makes sense.. then uh.. go ahead and write spreadsheets for
the rest of your life.

The thing that will please you nasty beancounters the most-- if you want
to
push dfata into a spreadsheet; access supports this functionality. Once
you get data into a database, it is EASY to pull it into Excel.

Access is a 2 way street; SQL is a 2 way street.

Importing data out of a spreadsheet is BROKEN, DOESN'T WORK AND IT NEVER
HAS. It is impossibly to have adequate field validation in Excel... by
definition; it is impossible. In Access, there are input masks and all
sorts of tools that make it easy for a 3rd grader to have accurate,
consistent data.

You spend all of your time typign numbers from Excel; and you have this
house of cards-- formulas on top of formulas.. there ISN"T a decent way to
check for broken formulas.. (unless you like priting out the formulas and
LOOKING for them GAG)

Microsoft just doesn't test it or something.

And about perl, shove it up your a$$; VBA is the _ONLY_ language in the
world. If you want to index data; store it in SQL-- if it is too slow;
then
give it an OLAP interface. It is simple simple simple stuff if you knew
anything about databases.

Do you really think that this:
"=AND(ABS(TRANSPOSE(ConvRates)-1/ConvRates)<1E-12)" is intuitive?

I think that the SQL Statement is perfectly intuitive.. but after all of
your bitching you guys still dont understand that I could duplicate the
storage of your matrix in a db and it would be this:

select USD from currencyconvertor where source 'GBP'

I can use the exact same format you use; and it is twice as easy to get to
this data.

get out of your cubicle and get into the real world kid

All I know is that companies-- AS A WHOLE-- need to hire about 2x as many
database people as they currently do.. and then need to fire about half of
their beancounters.

Being able to automate beancounters is what the 'pc revolution' is all
about.

You guys are targets; in an ideal world-- we would have automated your job
already.. but the sad thing is that there is this DISEASE called EXCEL and
people think that it is ACCEPTABLE to print the same report by hand every
month.

Grow up and welcome to the future



Harlan Grove said:
RE:

If you mean reports, then I don't disagree that databases could be more
efficient in some instances. However, there's a question of how many
calculations (not the simple database sort of accumulations - SUM,
COUNT, AVERAGE, etc. - but rather more complex ones like the a_th
percentile of a statistical distribution with parameters based on
sample data) are needed to produce the results of interest.


This type of person should be locked up in jail for being egocentric.

SQL Server has the ability to make user defined functions.

No, really?!

Most useful software possesses the attributes of modularization,
programmable extension and the ad hoc ability to run other software.

That said, while one could write a udf for SQL RDBMSs to calculate, say,
estimators and standard errors for linear models, it'd unpleasant to do so.
That sort of thing is built into modern spreadsheets but not databases.

Now one could use a database as the storage backend for a stats package, but
it'd be the stats package that does the real work, and only after it
loads
the data into its own data structures. I agree that databases are very good
storage and retrieval subsystems, but I remain unconvinced that the forms
and reporting tools provided by most databases are vastly and uniformly
superior to the alternative provided by spreadsheets (when properly
designed). But calculations using udfs leads to UI/front-end vs
processing/back-end interface issues.

Consider something as simple as the currency conversion rate table discussed
before. If there were no conversion fees reflected in the conversion rates,
then one would expect, e.g., the conversion rate from US dollars to UK
pounds to be the reciprocal of UK pounds to US dollars. So if the foreigh
exchange markets were perfectly efficient, the matrix of conversion rates
should be characterized by the entries in its lower triangular matrix being
the reciprocals of the entries in its upper triangular matrix. In Excel
terms,

=AND(ABS(TRANSPOSE(ConvRates)-1/ConvRates)<1E-12)

where ConvRates is the N-by-N matrix of conversion rates for N
currencies,
which are unity along the main diagonal.

This can be done with an inner join of the table with itself swapping the
country fields in the second reference into the table. Something like

SELECT (Count(*) > 0)
FROM (SELECT [CT1].[ConvRate] = 1/[CT2].[ConvRate]
FROM CurrTbl As CT1 INNER JOIN CurrTbl AS CT2
ON (CT1.ToCountry = CT2.FromCountry) AND (CT1.FromCountry = CT2.ToCountry)
WHERE ((Abs([CT1].[ConvRate] - 1/[CT2].[ConvRate]) < 1E-12)));

Does the latter really look clearer to you?
Anythign that you can do in Excel; i can do in either MDX or SQL. And I'll
do it once; and make a couple of DTS packages; and I am done-- I don't need
to come back and type stuff into a spreadsheet in order to make a new
report..

Further to the example above, I could parametrize the table name in
Excel,
so the formula would become

=AND(ABS(TRANSPOSE(INDIRECT(CTN))-1/INDIRECT(CTN))<1E-12)

How do you do that in SQL without resorting to some metalanguage or creating
temporary tables using common, reserved names?
I tell you this-- if spreadsheets were all-powerful. you know those
supercomputers that they have?? They would be runnnig Excel.

Who wastes money running databases on supercomputers or networked clusters?
No one in their right mind. They run hand-crafted FORTRAN or C code if they
want to get anywhere past single-digit gigaflops. Do you believe such
programs rely on realtime database feeds? No way. They use cached,
multiplexed data pipelines. And they don't waste runtime writing results to
databases but to many synchronized output streams. Possibly databases
populate the inputs and store the outputs eventually, but this is just
another example of databases being the backend storage subsystem. A
useful
supporting role to be sure, but hardly center stage.
If Excel was really the best solution-- people would have Clusters of
spreadsheets. Excel 2005 Cluster Edition...

No more than they'd have clusters of DBMSs. Neither are the right tool
for
the task. And no one uses supercomputers of clusters for generating reports.

Time for your next straw man.
explicit and reproducible.
They're much less so in other systems, including DBMSs.

They're reproducible because you CUT AND PASTE THE FORMULAS INTO 1,000
DIFFERENT CELLS.

Yup. No one said audit trails are storage efficient or free from unintended
screw-ups. Rather, it's easier to locate such screw-ups.
I CAN DO THE SAME THING ON THE DATABASE SIDE--

No you can't. The source tables, the definitions of the views or the queries
could be modified. If a user calls the same named stored procedure in March
and April, that user has no guarantee other than the word of their database
admin that nothing has changed other than the addition of data from the
month of April.

Historically this has been addressed in mainframe reporting by including
checksums, tape volume IDs, record counts and other stats derived from
inputs along with full JCL and key procedure listings in printouts. To
repeat, no one said audit trails were storage efficient.
BUT I CHOOSE TO KEEP MY BUSINESS LOGIC IN ONE PLACE-- SO THAT IF I NEED TO
CHANGE A REPORT: I CHANGE IT IN ONE PLACE.
...

It's the need to prove there have been no changes in that business logic
between report runs that's the nasty problem. It's actually pretty easy
to
show the formulas in two different workbooks are identical or substantially
similar. As long as printouts (which could be text files) of business logic
are included in the master copies of reports, there's a true audit trail.

As for change in one place, that is a definite advantage of databases.
However, using standard templates as the basis for spreadsheet reports also
provides centralized change management.

In spreadsheets the key is to separate storage of user inputs from
calculations. [IMO, this is much easier in Lotus 123 and Quattro Pro than
Excel.] Then the only thing that would need to be stored would be the
user
inputs. All the 'business logic' (dare I call it formulas and macros?) would
remain in a separate, centrally stored and maintained workbook.
IT IS PHYSCIALLY AND PRACTICALLY IMPOSSIBLE TO GET DATA OUT OF
SPREADSHEETS.

If you don't know how to do something, then it'll seem impossible.

Getting information out of spreadsheets does require knowing the workbook's
filename and the worksheet ranges in it where that information is located.
If no proper documentation of filenames or worksheet/range addresses exists,
it can take considerable effort to locate the information. But if we're
talking about standard reports, it's easy as long as a consistent layout has
been maintained (or better still, common defined names have been used to
identify the information of interest).
I SHOULD KNOW. I HAVE WRITTEN DOZENS OF APPLICATIONS THAT CATALOG
SPREADSHEETS-- BECAUSE WINDOWS INDEXING SERVICE SUX-- I BUILD A DATABASE
WHERE I DO A FULLTEXT INDEX ON HUNDREDS, IF NOT THOUSANDS OF
SPREADSHEETS.

Windows indexing is an OS feature, no? Microsoft is still to blame, but not
the Excel or Office developers.

You'd be better off using Perl to create such indices. Of course that
assumes you have the capability of learning anything other than SQL. An
associative array each entry of which is in turn a reference to an
associative array is a very powerful data structure for generalized text
search and retrieval.
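The nested associative array described here maps directly onto nested Python dictionaries; a minimal sketch of such a full-text index (the file names and contents are invented):

```python
from collections import defaultdict

def build_index(documents: dict) -> dict:
    """Map each word to {document name: occurrence count} -- an
    associative array whose entries are themselves associative arrays."""
    index = defaultdict(lambda: defaultdict(int))
    for name, text in documents.items():
        for word in text.lower().split():
            index[word][name] += 1
    return index

# Hypothetical text dumps of spreadsheets to catalog
docs = {
    "sales_q1.txt": "revenue revenue cost margin",
    "sales_q2.txt": "revenue cost cost",
}
idx = build_index(docs)
print(dict(idx["revenue"]))  # {'sales_q1.txt': 2, 'sales_q2.txt': 1}
```

Lookup is then a single dictionary access per search term, which is what makes the structure attractive for ad hoc retrieval.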
AS IT IS; IT IS A TOTAL WASTE OF TIME THAT 80% OF YOU RECREATE THE SAME
REPORT BY HAND EVERY MONTH.

Maybe you and/or people in the company for which you work do so, but that
doesn't mean everyone does.
MDX IS GOING TO EAT YOU ALIVE, KIDS

I'm shaking in my shoes, soiling my pants, won't sleep ever again.
 
Hi
this just shows that you don't know financial institutions, at least. You
only seem to know a pure Microsoft environment. You should know that a lot
of huge business applications run on either a Mainframe or a Unix system. So
have fun with your VB / VBA / VB.NET knowledge on these systems.

VB / VB.Net is a really nice tool for building frontends. Agreed. But not
at all for real transaction processing. You really should learn something
about professional development. And if you don't know such environments you
really haven't dealt with huge data volumes.

Even Cobol and Fortran aren't dead (and probably won't be for the next 5-10
years).
 
Nick Hodge said:
When will you realise that interoperability and 'right tool for the job' is
key, not a blind hatred for one or the other?

He won't, Nick. He's a zealot, and like most zealots, has a poor
understanding of anything outside his narrow perspective.

I just wish he could understand how he comes across. His purported aim,
conversion, is obviously second to his desire to troll. Unfortunately,
he's not very good at the latter - he simply appears ridiculous.
 
Frank Kabel wrote...
....
Even Cobol and Fortran aren't dead (and probably won't be for the next 5-10
years).

COBOL may be one thing, but FORTRAN will die only when netlib, LINPACK,
EISPACK, etc. are translated into other languages. But if FORTRAN code
can be compiled into reentrant libraries (which it can on mainframe,
Unix-like, Windows and presumably Mac (JE - confirmation?) platforms),
why bother? FORTRAN will be around for a long time to come.
 
MacOS is built on Darwin, which is FreeBSD 5, so anything that compiles
in standard Unix environments will compile on the Mac.

Whether there's a native MacOS front-end is another story, but it's not
all that important - Apple's X Window system, X11, runs fine along with
MacOS.
 
vb and SQL server can do anything that those other languages can do.

I work at Safeco, of course I know about those legacy (obsolete) databases.

They're eventually all going to be replaced with SQL Server 2005.

I do _some_ VB transaction processing with MTS (going back to NT4) or COM+

It handles and scales just as well as other platforms; especially with .net
framework.

and take your attitude about 'professional development' and shove it up your
a$$. Vb is a VERY mature language..

you should learn how to use a 'professional reporting tool' like crystal
reports or _ACCESS_
 
hahahahahahahahahha

give me a break; you can't link to Excel reliably.. what if someone enters a
number in a date column LoL

I do have a blind hatred for Excel.. but I'm also pretty decent with it.

I just know that there are MILLIONS of people out there that create the same
report week in and week out.. and I am here to tell you guys to grow up and
start using a real platform.

Excel is dead.
Excel is dead.
Excel is dead.
Excel is dead.
Excel is dead.
Excel is dead.
Excel is dead.



Nick Hodge said:
Aaron
Importing data out of a spreadsheet is BROKEN, DOESN'T WORK AND IT NEVER
HAS.

Surely you import in and export out?

Anyhow, how is it broken? The number of times I link Excel tables into
Access each week and run append, select, delete queries is immeasurable.
After that I run the data into Excel using ODBC to form an intuitive pivot
table, chart, etc.

When will you realise that interoperability and 'right tool for the job' is
key, not a blind hatred for one or the other?

--
HTH
Nick Hodge
Microsoft MVP - Excel
Southampton, England
(e-mail address removed)


the only thing that is 'built into a spreadsheet' is the ability to flush
hours down the toilet.

If you're too narrow-minded to consider that doing _SOME_ of the work on the
database side makes sense.. then uh.. go ahead and write spreadsheets for
the rest of your life.

The thing that will please you nasty beancounters the most-- if you want to
push data into a spreadsheet; access supports this functionality. Once
you get data into a database, it is EASY to pull it into Excel.

Access is a 2 way street; SQL is a 2 way street.
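The two-way street being described can be sketched with Python's standard-library sqlite3 standing in for Access/SQL Server; the table and values are invented for illustration:

```python
import sqlite3

# In-memory database standing in for the report's backend store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# Push: rows that might have come from a spreadsheet export
rows = [("North", 120.0), ("South", 75.5), ("North", 30.0)]
conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# Pull: an aggregate ready to drop back into a report
totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # [('North', 150.0), ('South', 75.5)]
```

The same push/pull pattern applies whatever the actual backend is; only the connection string and driver change.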

Importing data out of a spreadsheet is BROKEN, DOESN'T WORK AND IT NEVER
HAS. It is impossible to have adequate field validation in Excel... by
definition; it is impossible. In Access, there are input masks and all
sorts of tools that make it easy for a 3rd grader to have accurate,
consistent data.

You spend all of your time typing numbers from Excel; and you have this
house of cards-- formulas on top of formulas.. there ISN'T a decent way to
check for broken formulas.. (unless you like printing out the formulas and
LOOKING for them GAG)

Microsoft just doesn't test it or something.

And about perl, shove it up your a$$; VBA is the _ONLY_ language in the
world. If you want to index data; store it in SQL-- if it is too slow; then
give it an OLAP interface. It is simple simple simple stuff if you knew
anything about databases.

Do you really think that this:
"=AND(ABS(TRANSPOSE(ConvRates)-1/ConvRates)<1E-12)" is intuitive?

I think that the SQL Statement is perfectly intuitive.. but after all of
your bitching you guys still don't understand that I could duplicate the
storage of your matrix in a db and it would be this:

select USD from currencyconvertor where source = 'GBP'

I can use the exact same format you use; and it is twice as easy to get to
this data.

get out of your cubicle and get into the real world kid

All I know is that companies-- AS A WHOLE-- need to hire about 2x as many
database people as they currently do.. and then need to fire about half of
their beancounters.

Being able to automate beancounters is what the 'pc revolution' is all
about.

You guys are targets; in an ideal world-- we would have automated your job
already.. but the sad thing is that there is this DISEASE called EXCEL and
people think that it is ACCEPTABLE to print the same report by hand every
month.

Grow up and welcome to the future



Harlan Grove said:
RE:

If you mean reports, then I don't disagree that databases could be more
efficient in some instances. However, there's a question of how many
calculations (not the simple database sort of accumulations - SUM,
COUNT, AVERAGE, etc. - but rather more complex ones like the a_th
percentile of a statistical distribution with parameters based on
sample data) are needed to produce the results of interest.


This type of person should be locked up in jail for being egocentric.

SQL Server has the ability to make user defined functions.

No, really?!

Most useful software possesses the attributes of modularization,
programmable extension and the ad hoc ability to run other software.

That said, while one could write a udf for SQL RDBMSs to calculate, say,
estimators and standard errors for linear models, it'd be unpleasant to do so.
That sort of thing is built into modern spreadsheets but not databases.

Now one could use a database as the storage backend for a stats package, but
it'd be the stats package that does the real work, and only after it loads
the data into its own data structures. I agree that databases are very good
storage and retrieval subsystems, but I remain unconvinced that the forms
and reporting tools provided by most databases are vastly and uniformly
superior to the alternative provided by spreadsheets (when properly
designed). But calculations using udfs leads to UI/front-end vs
processing/back-end interface issues.

Consider something as simple as the currency conversion rate table discussed
before. If there were no conversion fees reflected in the conversion rates,
then one would expect, e.g., the conversion rate from US dollars to UK
pounds to be the reciprocal of UK pounds to US dollars. So if the foreign
exchange markets were perfectly efficient, the matrix of conversion rates
should be characterized by the entries in its lower triangle being the
reciprocals of the entries in its upper triangle. In Excel
terms,

=AND(ABS(TRANSPOSE(ConvRates)-1/ConvRates)<1E-12)

where ConvRates is the N-by-N matrix of conversion rates for N currencies,
which are unity along the main diagonal.

This can be done with an inner join of the table with itself swapping the
country fields in the second reference into the table. Something like

SELECT (Count(*) = 0) AS AllReciprocal
FROM CurrTbl AS CT1 INNER JOIN CurrTbl AS CT2
ON (CT1.ToCountry = CT2.FromCountry) AND (CT1.FromCountry = CT2.ToCountry)
WHERE Abs(CT1.ConvRate - 1/CT2.ConvRate) >= 1E-12;

Does the latter really look clearer to you?
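For comparison, the same reciprocal-symmetry test expressed outside both tools, as a plain Python sketch over a small invented 3-currency rate matrix:

```python
def rates_are_reciprocal(rates, tol=1e-12):
    """True if rates[j][i] is the reciprocal of rates[i][j] for every
    currency pair -- the property the Excel array formula tests."""
    n = len(rates)
    return all(
        abs(rates[j][i] - 1.0 / rates[i][j]) < tol
        for i in range(n) for j in range(n)
    )

# Hypothetical matrix for (USD, GBP, EUR), unity along the main diagonal
rates = [
    [1.0,    0.55,  0.8],
    [1/0.55, 1.0,   1.6],
    [1/0.8,  1/1.6, 1.0],
]
print(rates_are_reciprocal(rates))  # True
```

Either the spreadsheet or the SQL version is doing exactly this double loop, just with different notation.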

Anything that you can do in Excel; I can do in either MDX or SQL. And I'll
do it once; and make a couple of DTS packages; and I am done-- I don't need
to come back and type stuff into a spreadsheet in order to make a new
report..

Further to the example above, I could parametrize the table name in Excel,
so the formula would become

=AND(ABS(TRANSPOSE(INDIRECT(CTN))-1/INDIRECT(CTN))<1E-12)

How do you do that in SQL without resorting to some metalanguage or creating
temporary tables using common, reserved names?

I tell you this-- if spreadsheets were all-powerful, you know those
supercomputers that they have?? They would be running Excel.

Who wastes money running databases on supercomputers or networked clusters?
No one in their right mind. They run hand-crafted FORTRAN or C code if they
want to get anywhere past single-digit gigaflops. Do you believe such
programs rely on realtime database feeds? No way. They use cached,
multiplexed data pipelines. And they don't waste runtime writing results to
databases but to many synchronized output streams. Possibly databases
populate the inputs and store the outputs eventually, but this is just
another example of databases being the backend storage subsystem. A useful
supporting role to be sure, but hardly center stage.

If Excel was really the best solution-- people would have Clusters of
spreadsheets. Excel 2005 Cluster Edition...

No more than they'd have clusters of DBMSs. Neither are the right tool for
the task. And no one uses supercomputers or clusters for generating reports.

Time for your next straw man.

explicit and reproducible.
They're much less so in other systems, including DBMSs.

They're reproducible because you CUT AND PASTE THE FORMULAS INTO 1,000
DIFFERENT CELLS.

Yup. No one said audit trails are storage efficient or free from unintended
screw-ups. Rather, it's easier to locate such screw-ups.

I CAN DO THE SAME THING ON THE DATABASE SIDE--

No you can't. The source tables, the definitions of the views or the queries
could be modified. If a user calls the same named stored procedure in March
and April, that user has no guarantee other than the word of their database
admin that nothing has changed other than the addition of data from the
month of April.

Historically this has been addressed in mainframe reporting by including
checksums, tape volume IDs, record counts and other stats derived from
inputs along with full JCL and key procedure listings in printouts. To
repeat, no one said audit trails were storage efficient.

BUT I CHOOSE TO KEEP MY BUSINESS LOGIC IN ONE PLACE-- SO THAT IF I NEED TO
CHANGE A REPORT: I CHANGE IT IN ONE PLACE.
...
...

It's the need to prove there have been no changes in that business logic
between report runs that's the nasty problem. It's actually pretty easy to
show the formulas in two different workbooks are identical or substantially
similar. As long as printouts (which could be text files) of business logic
are included in the master copies of reports, there's a true audit trail.

As for change in one place, that is a definite advantage of databases.
However, using standard templates as the basis for spreadsheet reports also
provides centralized change management.

In spreadsheets the key is to separate storage of user inputs from
calculations. [IMO, this is much easier in Lotus 123 and Quattro Pro than
Excel.] Then the only thing that would need to be stored would be the user
inputs. All the 'business logic' (dare I call it formulas and macros?) would
remain in a separate, centrally stored and maintained workbook.

IT IS PHYSICALLY AND PRACTICALLY IMPOSSIBLE TO GET DATA OUT OF
SPREADSHEETS.

If you don't know how to do something, then it'll seem impossible.

Getting information out of spreadsheets does require knowing the workbook's
filename and the worksheet ranges in it where that information is located.
If no proper documentation of filenames or worksheet/range addresses exists,
it can take considerable effort to locate the information. But if we're
talking about standard reports, it's easy as long as a consistent layout has
been maintained (or better still, common defined names have been used to
identify the information of interest).

I SHOULD KNOW. I HAVE WRITTEN DOZENS OF APPLICATIONS THAT CATALOG
SPREADSHEETS-- BECAUSE WINDOWS INDEXING SERVICE SUX-- I BUILD A DATABASE
WHERE I DO A FULLTEXT INDEX ON HUNDREDS, IF NOT THOUSANDS OF SPREADSHEETS.

Windows indexing is an OS feature, no? Microsoft is still to blame, but not
the Excel or Office developers.

You'd be better off using Perl to create such indices. Of course that
assumes you have the capability of learning anything other than SQL. An
associative array each entry of which is in turn a reference to an
associative array is a very powerful data structure for generalized text
search and retrieval.

AS IT IS; IT IS A TOTAL WASTE OF TIME THAT 80% OF YOU RECREATE THE SAME
REPORT BY HAND EVERY MONTH.

Maybe you and/or people in the company for which you work do so, but that
doesn't mean everyone does.

MDX IS GOING TO EAT YOU ALIVE, KIDS

I'm shaking in my shoes, soiling my pants, won't sleep ever again.
 
COBOL may be one thing, but FORTRAN will die only when netlib,
LINPACK, EISPACK, etc. are translated into other languages. But if
FORTRAN code can be compiled into reentrant libraries (which it can
on mainframe, Unix-like, Windows and presumably Mac (JE -
confirmation?) platforms), why bother? FORTRAN will be around for a
long time to come.

Hi Harlan
though quite OT: for similar reasons I'd assume Cobol will be around as long
as Mainframes are used (for this my guess is at least 5-10 years :-)). Also,
why bother migrating existing and working applications to something else (no
business case for that).

Frank
 
Show me ridiculous when I see a dozen people making $60k on this single
floor in this single company.. where all they do is type stuff in Excel.

That is what I call ridiculous.

There is a better way; it came out back in the 70s-- Excel shouldn't be used
for reporting out of a database.. It just isn't the best tool for the job.

It isn't POWERFUL enough to do anything with SQL Server..

The funny thing is that I'm going to work for M$ next month writing
spreadsheets.. LoL

I don't know what I'll do when I hit the 64k limit.. hahahahhahahaha

what a joke

64k records, i poop 64k records at a time
 
vb and SQL server can do anything that those other languages can do.
I work at Safeco, of course I know about those legacy (obsolete)
databases.

They're legacy. Agreed. Obsolete: probably not. You probably have not worked
with them except using them to access data.
They're eventually all going to be replaced with SQL Server 2005.
lol. You don't believe this, do you? Personally I think SQL Server is really
a good database. Use it quite often in projects. But we're talking about
really huge databases working on server clusters/mainframes. And for this
SQL Server is definitely not suited. It just does not run on the right
platforms for this kind of stuff. If you don't see this you don't know a lot
about real database / transaction systems. And anyway you wouldn't be able
to calculate a business case for replacing these legacy systems.

I do _some_ VB transaction processing with MTS (going back to NT4) or COM+

It handles and scales just as well as other platforms; especially
with .net framework.

It does not scale on *.nix and Mainframe platforms. And these are still
platforms of choice for real-time and/or batch processing.

and take your attitude about 'professional development' and shove it
up your a$$. Vb is a VERY mature language..
I did not disagree with that. VB is mature. I use it myself, and for frontend
development on a Windows platform it's probably one of the best choices you
could make. I use the right tool for the right task and do not restrict myself
to only a single one (as you seem to do). But again: there's a world outside
Windows. Please tell me how you use VB on those systems (though it would be
nice to have it on these platforms...). And if you now argue that Windows will
replace everything else: well, this won't happen in the near future, and here
as well, choose the right platform for the right task.


you should learn how to use a 'professional reporting tool' like
crystal reports or _ACCESS_

:-) I already do (again for the right project/task), plus Business Objects,
Access, etc. Maybe just read all the previous answers from me, Harlan and
others more carefully. We seem to use these tools already, but we also use
other tools (if they're more productive). If you can't see this difference
then you seem to be very restrictive in the choice of your tools. And that
is never a good sign.

Frank
 