Difference between Typed dataset and untyped dataset

tulasikumar

hi all,
can anyone tell me the difference between a typed dataset and an untyped
dataset?
In which situations should a typed dataset be used, and in which an untyped
dataset?
Which one performs better in real-world scenarios?
Thanks in advance

--
Best Regards,
TulasiKumar
Sr.Analyst Programmer
Web Synergies (India) Private Limited
501, Bandaru Bhavan,
Plot No-3,Yousufguda Main Road,
Hyderabad-500073
Ph: +91-40-66611904
Mobile: +91-9849536526
Website: www.websynergies.in
 
Earl

This is a fastball down the middle, but the "differences" part is in the
MSDN. As for the real-world scenario: it takes longer to set up a typed dataset,
but you gain the time back in the rest of your development. Performance
is disputed by just about everyone. Of the four ADO.NET books that I use for
reference, David Sceppa's book says that performance is 4x faster with typed
than with untyped datasets (if you do not use string-based lookups). Sahil
Malik's book says the "answer is unclear". Bill Hamilton's book says that
"If strongly typed functionality is not required, performance is better with
an untyped DataSet rather than with a typed DataSet" (no tests given to back
up this claim). Bill Vaughn's older book says "Strongly typed DataSet object
references are considerably faster than the "late-binding" techniques shown
in most examples." Personally, even if it were a wash, I'd be more inclined
to use strongly typed datasets simply for the convenience of the
accessors and the data validation.
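To make the string-lookup distinction concrete, here is a minimal sketch. The untyped half is plain ADO.NET; the typed half is shown in comments, assuming a hypothetical Visual Studio-generated `OrdersDataSet` class (the table and column names here are invented for illustration):

```csharp
using System;
using System.Data;

class Demo
{
    static void Main()
    {
        // Untyped: columns are addressed by string name and come back as object.
        var table = new DataTable("Orders");
        table.Columns.Add("OrderID", typeof(int));
        table.Columns.Add("Freight", typeof(decimal));
        table.Rows.Add(1, 32.38m);

        // String-based lookup: the name is only checked at runtime,
        // and the value needs an explicit cast from object.
        decimal freight = (decimal)table.Rows[0]["Freight"];

        // Typed (hypothetical generated class), the same access would read:
        //   OrdersDataSet ds = new OrdersDataSet();
        //   decimal freight = ds.Orders[0].Freight;   // no cast, no magic string
        Console.WriteLine(freight);
    }
}
```

The string lookup is what Sceppa's 4x figure is about: `Rows[0]["Freight"]` resolves the column name on every access, while the generated property caches the `DataColumn` reference.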
 
Cor Ligthert [MVP]

Tulasikumar,

Looking at what Earl wrote, the authors seem to me to follow each
other blindly, as often happens in books.

For speed, just debug the two versions and you will see the differences.
The typed dataset has to perform many more steps to get its properties and to
produce the wanted behaviour than the untyped one does.

A big advantage of a typed dataset, once you are familiar with it (and use it
that way), is that when a column is renamed you are warned directly in
your program, which is not the case with an untyped one. Another
advantage is that you can use IntelliSense to find the names of tables and
columns.

However, this is only partially true: as soon as you start to use expressions
or things like that, the advantage is gone again.
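Cor's point about expressions can be seen in plain ADO.NET: filter and computed-column expressions are ordinary strings even when the table belongs to a typed dataset, so a renamed column inside an expression only fails at runtime. A sketch, with a hand-built table standing in for a generated one:

```csharp
using System;
using System.Data;

class Demo
{
    static void Main()
    {
        var table = new DataTable("Products");
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Price", typeof(decimal));
        table.Rows.Add("Tea", 12m);
        table.Rows.Add("Coffee", 8m);

        // Select takes a string expression; the compiler cannot check "Price"
        // even on a strongly typed dataset. Rename the column and this
        // compiles fine but throws at runtime.
        DataRow[] expensive = table.Select("Price > 10");
        Console.WriteLine(expensive.Length); // one matching row (Tea)
    }
}
```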

Just my idea,

Cor
 
Cor Ligthert [MVP]

To step through it in the debugger, you have to remove the non-debug directive
from the generated strongly typed dataset code.

Cor
 
Miha Markic [MVP C#]

You should always go with a strongly typed one unless you don't know the
structure at design time.
Strong typing will shield you from typing errors when the structure changes.
Other than that, a strongly typed dataset is just a layer above DataSet.
 
William \(Bill\) Vaughn

Having been quoted out of context (many times), I must respond here.
The Strongly Typed Dataset (STD) is a Visual Studio-generated class that represents the data structure of a specific database table at design time. When it comes time to access this code, you will find that it can be easier to code access to the specific columns--once you learn how to do so. That's because each column in the table is exposed as a named property. You'll also find that accessing these columns in code is faster, as they are strongly typed instead of being stored as objects that need conversion or require late binding (code to address the column is executed at runtime). This all sounds great. However, you can achieve most of the coding and speed benefits from column enumerations, which are far easier to code and recode as the schema changes.
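Bill's "column enumeration" alternative can be sketched like this: an ordinary enum of column ordinals gives readable, refactorable access without any generated classes. The enum values are an assumption you must keep in sync with the column order of your query or table definition:

```csharp
using System;
using System.Data;

class Demo
{
    // Ordinals must match the column order of the SELECT / table definition.
    enum OrderCol { OrderID = 0, Freight = 1 }

    static void Main()
    {
        var table = new DataTable("Orders");
        table.Columns.Add("OrderID", typeof(int));
        table.Columns.Add("Freight", typeof(decimal));
        table.Rows.Add(1, 32.38m);

        // Ordinal access skips the runtime name lookup entirely;
        // the enum keeps the code readable and easy to update in one place
        // when the schema changes.
        decimal freight = (decimal)table.Rows[0][(int)OrderCol.Freight];
        Console.WriteLine(freight);
    }
}
```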

In addition, as I crystallize in my new book, the TableAdapter or STD Dataset is problematic. It assumes:

a. The schema does not change during development. It is naive to assume that the data schema is fixed at the point where you're ready to freeze the structure into STDs. Changing it post-construction is not easy. In some cases it requires complete tear-down and reconstruction of the TableAdapter, and if this is not done correctly, you'll crash the Form.
b. The DBA exposes base tables. In my experience the majority of serious shops hide base tables and expose SPs and Views instead.
c. The table CRUD routines can be properly generated by the impotent CommandBuilder. We've discussed the weakness of this class for some time. Frankly, this approach only works in the simplest of cases.

I include an entire chapter on the TableAdapter and many more of these issues.

Unless you add validation code, these STDs are not really much "safer", as they don't support business rules. That is, while a column can be strongly typed as an integer, that does not prevent invalid integers from being entered into the database or prevent server-side rules and triggers from rejecting the data.
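One way to add the validation Bill says STDs lack is a ColumnChanging handler on the table, which works for typed and untyped datasets alike. The business rule here is an invented example:

```csharp
using System;
using System.Data;

class Demo
{
    static void Main()
    {
        var table = new DataTable("Orders");
        table.Columns.Add("Quantity", typeof(int));

        // A rule the int type alone cannot express: quantity must be positive.
        // ColumnChanging fires before the proposed value is committed.
        table.ColumnChanging += (sender, e) =>
        {
            if (e.Column.ColumnName == "Quantity" && (int)e.ProposedValue <= 0)
                throw new ArgumentException("Quantity must be positive.");
        };

        table.Rows.Add(5);                       // accepted
        try { table.Rows.Add(-1); }              // rejected by the handler
        catch (ArgumentException ex) { Console.WriteLine(ex.Message); }
    }
}
```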

Does MS have plans to address these issues? Perhaps, but it seems they are totally focused on the shiny new tools--the DLinq technology that's consuming most of their resources. Can they fix these fundamental issues? Yes, but only to a limited extent.

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
-----------------------------------------------------------------------------------------------------------------------
 
Cor Ligthert [MVP]

Bill,

I am glad you wrote this. Since you were among the authors quoted, I hesitated to reply; now I am glad I waited, because it gave you the opportunity to post this reaction yourself.

There is not a word in your reaction that is, in my view, wrong.

Cor

"William (Bill) Vaughn" <[email protected]> schreef in bericht: <snip>
 
