ANN: Query A Dataset Beta Download

Adrian Moore

For anyone who might have visited the web site before, or anyone needing a
full SQL query engine against an ADO.NET DataSet, you might be interested in
the assembly I've been working on at http://www.queryadataset.com. It lets
you perform complex SQL SELECT statements, including UNION, INNER and OUTER
JOINs, GROUP BY, HAVING, ORDER BY, sub-queries, aggregates, etc., against
the tables in a DataSet.

A beta version of the assembly plus a sample application is now available
for download.

The web-site allows you to upload your own XML data fragment, DataSet or
resultset and issue queries using the QueryADataSet assembly. If the query
returns the results you expect, then this assembly might be what you are
looking for.

Thanks
Adrian Moore
http://www.queryadataset.com
 
Sylvain Lafontaine

Looks very interesting; however, you give no indication of a possible
release date or the price, so it's hard to decide about committing the
time/energy to give it a full study (or at least, anything more than
just running the demo).

A question: will it be possible to insert comments into the SQL string?
 
Chad Z. Hower aka Kudzu

Adrian Moore said:
For anyone who might have visited the web site before or anyone needing
a full SQL query engine against a ADO.NET DataSet, you might be

It does look interesting - but...
"QueryADataSet can be used in back-end web servers for querying cached
DataSets for presentation."

ADO.NET datasets are *horribly* inefficient. I use them only for transport
and databinding, since with small datasets the inefficiency is negligible.
But to use them as you suggest, it would be MUCH more efficient to create
storage or temporary tables in a local cache DB and fetch from there, unless
you have minimal data, in which case you would not need to query it...

I think your idea is an interesting one, but because of the inefficiencies of
the ADO.NET dataset it's not useful on the server. It could be useful in
clients, but then again, for anything of queryable size it would be better to
use a local DB. MSDE can be an installation issue, but there are DBs like
embedded Firebird which require no installation, just a DLL and an ADO.NET
assembly for the .NET provider.
 
Cor Ligthert

Kudzu,

When I read a text calling something "horribly" inefficient without any
explanation of why, I get the idea that it is a troll.

If you really are Kudzu, then I would at least expect you to write why this
is your opinion, or at least give us a link where it is explained.

:)

Cor
 
Adrian Moore

Chad,

I'm afraid I have to disagree. It has been my experience while developing
the QueryADataSet component that ADO.NET datasets are extremely efficient.
Perhaps you had a bad experience with the .NET 1.0 version or with a
particular feature.

To confirm accurate results, most of the unit tests for the QueryADataSet
component compare its results with those returned from MSDE, including
query time. In all but a few cases, the performance is better than MSDE's.
The demo and the web-site both include the time taken to parse the query and
return the result set.

Ad.
 
Adrian Moore

Sylvain,

Thanks for the advice. I'll update the site when I get a chance.

To answer your question, comments can be embedded in the SQL statement;
they are correctly ignored by the parser. For example,

select * -- comment
from -- comment
[seller_info] -- comment

returns the same result, with or without the comments.
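For readers curious how such comments get ignored, here is a minimal Python sketch of the idea. This is my own illustration, not Adrian's actual parser (which is a .NET assembly and certainly handles more cases, e.g. "--" appearing inside string literals, which this naive version does not):

```python
# Naive sketch: drop everything after "--" on each line of a SQL string.
# Does NOT handle "--" inside string literals.
def strip_line_comments(sql):
    return "\n".join(
        line.split("--", 1)[0].rstrip()
        for line in sql.splitlines()
    )

query = """select * -- comment
from -- comment
[seller_info] -- comment"""

print(strip_line_comments(query))
```

After stripping, the parser sees the same statement with or without the comments, which is why the result set is identical.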

Ad.
 
Adrian Moore

Chad,

One other thing I think that's worth mentioning is that embedded Firebird
doesn't support loading and querying an XML document.

Ad.
 
Sahil Malik [MVP]

Chad,

If you don't mind, I'd like to discuss your perception of Datasets being
inefficient. Could you point out specific instances?

- Sahil Malik [MVP]
Check out my ADO.NET 2.0 book - http://tinyurl.com/9bync
 
William \(Bill\) Vaughn

This will be interesting. From what I've been seeing, DataSets are a lot
more efficient than they're given credit for--especially when used correctly.
Even DataSets filled with far too many rows (>10,000) perform far better
(especially in 2.0) than in earlier versions.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com/blog/billva
www.betav.com
www.sqlreportingservices.net
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Frans Bouma [C# MVP]

Sahil said:
I agree Bill !!! Datasets have a lot of undeserved bad impression
about them - which in my mind is expressed by the phrase "Little
knowledge is a dangerous thing".

Though it depends on what you do with the dataset. A normal dataset
with 1 datatable and no constraints, yes, that's efficient. The point is
what it's doing after 3 tables with a lot of data are added and the
constraint set has to be checked each time you add a row. That's a
whole different thing.

A dataset by itself is very basic. It contains a list of tables
(a hashtable, probably), then each datatable has a collection of columns
and a collection of rows. The datarow itself is an array of objects, so
very fast to fill with data, and the row collection itself is an
arraylist, so very fast to add a row. The indexer on the datarow uses
the table's columns collection to look up a column if names are
specified, but for the rest it's a very basic setup which doesn't
have a lot of bottlenecks. The main issues start when multiple tables
with a lot of data are added and you're adding data which has to be
verified through constraints; that's a whole different thing.
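The layout Frans describes can be sketched in a few lines. This is a toy Python illustration of the idea only, not the real ADO.NET implementation; the class and member names are just for the sketch:

```python
class DataRow:
    """A row is just a plain array of values plus a back-reference."""
    def __init__(self, table, values):
        self.table = table
        self.values = values

    def __getitem__(self, column_name):
        # Name-based access goes through the table's column collection
        # to find the value's index, as Frans describes for the indexer.
        return self.values[self.table.columns.index(column_name)]

class DataTable:
    def __init__(self, columns):
        self.columns = list(columns)   # collection of column names
        self.rows = []                 # ArrayList-like: appending is cheap

    def add_row(self, values):
        row = DataRow(self, list(values))
        self.rows.append(row)          # fast add, no constraint checking
        return row

t = DataTable(["id", "name"])
t.add_row([1, "anna"])
print(t.rows[0]["name"])   # anna
```

With no constraints in the picture, filling such a structure is little more than list appends, which is why the basic setup has few bottlenecks.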

Frans



 
Cor Ligthert

Frans,

A dataset is, for instance, very inefficient if you have to delete/remove a
lot of datarows using .NET 1.x. This should be better in 2.0.

In 1.x it is sometimes even better to build the dataset completely anew than
to remove the rows. So there are inefficiencies.

My point is that because of some inefficiencies you cannot, in my opinion,
say that it is completely inefficient, as Kudzu is doing now.

Cor
 
Adrian Moore

Frans

However, a DataSet or XML document caching data frequently used by the
client or web-server is very efficient to query. This is where the
QueryADataSet assembly becomes very useful.

Ad.
 
 
Sahil Malik [MVP]

Of course it depends on what you use it for. I do maintain that for read-only
data, datasets are overkill. I wish datasets were written in a plug-in
model architecture that allowed you to make them as heavy or as light as you
wanted them to be.

The dataset contains datatables in an arraylist in 1.0 - which means adding
is fast, and removing is a pig. 2.0 implements it as a red-black tree -
ditto for DataRows.
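The ArrayList point is a general data-structure property: removing from an array-backed list shifts every trailing element left one slot, so the cost grows with how much data sits after the removal point. A small Python sketch (plain lists standing in for ArrayList):

```python
def remove_at(arr, index):
    """Remove arr[index] in place; return how many elements had to shift."""
    shifts = len(arr) - index - 1
    for i in range(index, len(arr) - 1):
        arr[i] = arr[i + 1]   # every trailing element moves left
    arr.pop()                 # drop the now-duplicated last slot
    return shifts

rows = list(range(10))
print(remove_at(rows, 0))               # 9: every remaining element shifts
print(remove_at(rows, len(rows) - 1))   # 0: removing the tail is free
```

A balanced structure like the red-black tree mentioned above brings removal down to O(log n), at the cost of more overhead per node.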

Now if you have 3 datatables and you are worried about constraints, you
can always use the BeginLoadData method to prevent constraint checking on
every row added - otherwise - hey, you asked for the constraints :), so
don't crib about something you asked for, especially when datasets do give
you a workaround (just like databases .. wink wink ;) )

The two operations for which I completely agree that datasets are pigs are
GetChanges and Merge. The decrease in performance is exponential with the #
of FK constraints and tables inside the dataset. For 2-3 tables it's not
such a big deal, but when you have 5, 10, 40 datatables (oh yes, I've seen 40
tables in a "catch-all" strongly typed dataset in an enterprise app - yes, I
puked too) - that is when you start getting a serious performance drop.

- Sahil Malik [MVP]
http://codebetter.com/blogs/sahil.malik/
 
Sylvain Lafontaine

Here's a comment: the ending date for the beta should be after the date of
the first official release (in September or October), not before, with some
margin of operation.

By setting an ending date that finishes the beta too soon, you'll drive away
beta testers, as they don't have the time to download, install, recompile,
re-test and redeploy an application every two weeks.
 
Frans Bouma [C# MVP]

Sahil said:
Of course it depends on what you use it for. I do maintain that for
Readonly data, datasets are an overkill. I wish datasets were written
in a plugin model architecture that allowed you to make them as heavy
or as light as you wanted them to be.

I don't think they're overkill per se. In fact, read-only lists
especially fit nicely in the table-oriented approach the dataset
implies. What should be done in that case, of course, is no constraints
whatsoever, so data addition through a fetch is fast.

Plugins are a nice idea, though with a general-purpose container it
leads to problems, I think: the generic object has no meaning; the
semantic meaning it has depends on the data it holds. So plugging in
a plugin with behavior can help, but which one to plug in, and what if
you load different data into the dataset?
The dataset contains datatables as an arraylist in 1.0 - which means
adding is fast, and removing is a pig. 2.0 implements it as a red
black tree - ditto for DataRows.

what's a red black tree?
Now if you had 3 datatables and you were worried about constraints,
you could always use the BeginLoadData method to prevent constraint
checking on every row added - otherwise - Hey you asked for the
constraints :), so don't crib about something you asked for,
especially if datasets do give you a workaround ( just like databases
.. wink wink ;) )

heh, but why disable constraints if you have them...
The two operations in which I completely agree - datasets are pigs
are - GetChanges and Merge. The decrease in performance is
exponential with the # of FK-constraints and tables inside the
dataset. For 2,3 tables it's not such a big deal, but when you have
5, 10, 40 datatables (Oh yes I've seen 40 tables in a "catch all
strongly typed dataset" in an enterprise app - yes I puked too) -
that is when you start getting serious performance drop.

true. Give people a large hall and they will be able to fill it to the
roof with crap. :)

FB

 
Sahil Malik [MVP]

Okay, I gotta rush somewhere, so I'll reply quickly -

Why disable constraints when you have them? This is a conventional database
trick. If you disable constraints before a bulk data load, you save some
SERIOUS time in the entire load process. That is why.
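The saving comes from doing the constraint work once over the whole batch instead of on every insert. A hedged Python sketch of the idea (the table and column names here are made up for illustration; in ADO.NET the equivalent move is bracketing the load with BeginLoadData/EndLoadData):

```python
valid_customer_ids = {1, 2, 3}   # stand-in for the referenced key set

def load_checked(rows):
    """Validate the FK on every insert, as live constraint checking does."""
    table, checks = [], 0
    for r in rows:
        checks += 1                              # per-row constraint work
        if r["customer_id"] not in valid_customer_ids:
            raise ValueError("FK violation")
        table.append(r)
    return table, checks

def load_deferred(rows):
    """Load everything first, then validate once at the end."""
    table = [dict(r) for r in rows]              # fast appends, no checks
    if any(r["customer_id"] not in valid_customer_ids for r in table):
        raise ValueError("FK violation")
    return table, 1                              # one validation pass

orders = [{"customer_id": 1}, {"customer_id": 2}] * 3
print(load_checked(orders)[1])    # 6 per-row checks
print(load_deferred(orders)[1])   # 1 deferred pass
```

The same principle drives the database version of the trick: disable constraints and indexes up front, bulk-load, then re-validate once at the end.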

- Sahil Malik [MVP]
Upcoming ADO.NET 2.0 book - http://tinyurl.com/9bync
 
