Magnus
I'm using the new data binding features of Visual Studio 2005. I followed the
steps to create a bound data source and selected all 40 tables from the
database. The wizard generated the necessary code for the dataset, which can
be seen in Solution Explorer under "db_exampledataset.xsd".
My worry is the size of the dataset definition. Specifically,
"db_exampledataset.Designer.cs" is huge, about 51,000 lines of code, and the
dataset class alone is over 27,000 lines. Surely this must have an impact on
performance every time a dataset class is instantiated?
I wonder if there is some sort of strategy around this: are these generated
classes just a joke, or is there something I am doing wrong? I thought about
creating several smaller datasets to fit the needs of different areas of the
program, but that seems like a bad idea, because if you need to change the
table adapter associated with one datatable, you have to make sure you also
make any necessary changes in every other dataset where it's being used; it
can get messy and cumbersome. But that is the only viable solution I have to
this problem right now.
It would be great if you could create lighter datasets that just reference
objects in a master dataset. That way you would only have to make changes in
one place, and the dataset objects would be smaller.
I also wonder whether there is a way, or what a good way is, to measure the
overall performance impact of instantiating and using the dataset object.
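For reference, here is the kind of micro-benchmark I had in mind: a minimal sketch using System.Diagnostics.Stopwatch (available in .NET 2.0). It times construction of a plain System.Data.DataSet as a baseline; swapping in the generated typed dataset class in place of DataSet would show the relative cost. The class and method names here are just illustrative, not from any existing code.

```csharp
using System;
using System.Data;
using System.Diagnostics;

class DataSetTiming
{
    // Times repeated construction and returns the mean cost in milliseconds.
    static double AverageConstructionMs(int iterations)
    {
        // Warm-up instantiation so JIT compilation and static
        // initializers do not skew the measurement.
        DataSet warmup = new DataSet();

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Swap the plain DataSet for the generated typed dataset
            // class to compare the two construction costs.
            DataSet ds = new DataSet();
        }
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds / iterations;
    }

    static void Main()
    {
        Console.WriteLine("Average construction time: {0:F4} ms",
            AverageConstructionMs(1000));
    }
}
```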
- Magnus