Same question - Why use a DataTable in ASP .NET?

jehugaleahsa

Hello:

What is the point of using a DataTable in ASP .NET? We are unsure how
you can use them without 1) rebuilding them every postback, or 2)
taking up precious memory. We are not sure how to store a DataTable in
any other way outside of our servers. In doing so, we leave ourselves
open to large memory requirements. Furthermore, most web pages do not
really support multiple changes per transaction. In other words, when
the user submits a form, they are ready to move forward. All the
changes are submitted for one entity and that is what gets sent to the
database.

The ultimate would be to emulate Windows Forms as much as possible.
When a field is changed on the client, the change will modify a client-
side copy of the DataTable. When the user saves, that DataTable is
then sent back to the web server to be committed to the database.
However, as far as I know, this isn't the intention and so isn't easy
to implement.

I am looking for someone to give me a reason to use a DataTable in an
ASP .NET application. I know they make it easy to do sorting and
filtering. I have no idea how this works. I am not sure whether it
hits the database again. There is such a large community of developers
out there who use DataTables almost exclusively, so I hope someone can
enlighten us.

Thanks,
Travis
 
Ignacio Machin ( .NET/ C# MVP )

Hello:

What is the point of using a DataTable in ASP .NET?

You need to keep the data somewhere; a DataTable is a good option.
We are unsure how
you can use them without 1) rebuilding them every postback,

Keep it in Session state or, if it's not big, in the ViewState.
taking up precious memory. We are not sure how to store a DataTable in
any other way outside of our servers. In doing so, we leave ourselves
open to large memory requirements. Furthermore, most web pages do not
really support multiple changes per transaction. In other words, when
the user submits a form, they are ready to move forward. All the
changes are submitted for one entity and that is what gets sent to the
database.

Then do not keep it at all.
The ultimate would be to emulate Windows Forms as much as possible.
When a field is changed on the client, the change will modify a client-
side copy of the DataTable

There is no such thing as a client-side DataTable :(
Web programming is different from Windows programming; keep that in mind
when making the transition.
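Machin's "keep it in Session" suggestion can be sketched concretely. This is a hedged illustration, not real page code: a plain Dictionary stands in for ASP.NET's per-user Session object so the sketch compiles and runs on its own, and `BuildProducts` is a made-up stand-in for the database query.

```csharp
using System;
using System.Collections.Generic;
using System.Data;

static class SessionSketch
{
    // Stand-in for HttpContext.Session, which is essentially a per-user
    // key/value store (an assumption made so this runs standalone).
    static readonly Dictionary<string, object> Session = new Dictionary<string, object>();
    static int buildCount = 0;

    // Made-up stand-in for the database query that populates the table.
    static DataTable BuildProducts()
    {
        buildCount++;
        var t = new DataTable("Products");
        t.Columns.Add("Id", typeof(int));
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add(1, "Widget");
        t.Rows.Add(2, "Gadget");
        return t;
    }

    // The suggested pattern: rebuild only when the stored copy is
    // missing; otherwise reuse it across postbacks.
    static DataTable GetProducts()
    {
        if (!(Session.TryGetValue("Products", out object o) && o is DataTable))
        {
            Session["Products"] = BuildProducts(); // hit the database once
        }
        return (DataTable)Session["Products"];
    }

    static void Main()
    {
        DataTable first = GetProducts();   // first request: builds
        DataTable second = GetProducts();  // simulated postback: reuses
        Console.WriteLine(buildCount);                      // 1
        Console.WriteLine(ReferenceEquals(first, second));  // True
    }
}
```

In a real page the Dictionary lines become `Session["Products"]` directly; for small tables the same pattern works against ViewState, at the cost of page size.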
 
Jordan S.

Responses inline:
What is the point of using a DataTable in ASP .NET?

They are considered as "safer" than DataReaders in many cases. So where you
really need read-only access to the data, some developers prefer DataTables
over DataReaders because if something goes wrong with a DataReader, then it
is possible that the DataReader's connection to the database would not be
closed - thereby leaking connections to the database. DataTables can hang
around in memory (server-side, of course) with no connection to the
database. So if something goes awry while reading the table, no connections
to worry about. But this is just one reason to use DataTables in ASP.NET.
We are unsure how
you can use them without 1) rebuilding them every postback,

You can store them in Session state - if the DataTable content is relevant
to one particular session of the ASP.NET application. Or you can store the
DataTable in the system Cache if the data table content is not related to
any one particular session. An alternative to storing data in the Cache is
to store it in Application state, but that is discouraged for a number of
reasons that I won't enumerate here.

Remember, "the system" (ASP.NET itself) will purge data from the Cache if
system resources are becoming scarce, so your logic can't assume that the
DataTable is still in the Cache.
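That caveat has a concrete coding consequence: never read the Cache without a null check. A minimal sketch follows, with a Dictionary again standing in for the real application-wide `Cache` object and an artificial `Clear()` simulating an eviction; both are assumptions made so the example runs standalone.

```csharp
using System;
using System.Collections.Generic;
using System.Data;

static class CacheSketch
{
    // Stand-in for ASP.NET's application-wide Cache; the real one
    // evicts entries on its own schedule when memory gets scarce.
    static readonly Dictionary<string, object> Cache = new Dictionary<string, object>();

    // Made-up loader standing in for the real database query.
    static DataTable LoadProducts()
    {
        var t = new DataTable("Products");
        t.Columns.Add("Id", typeof(int));
        t.Rows.Add(1);
        return t;
    }

    // Every read tolerates a miss: null-check, rebuild, re-insert.
    static DataTable GetProducts()
    {
        DataTable cached = Cache.TryGetValue("Products", out object o) ? o as DataTable : null;
        if (cached == null)
        {
            cached = LoadProducts();
            Cache["Products"] = cached;
        }
        return cached;
    }

    static void Main()
    {
        DataTable a = GetProducts();
        Cache.Clear();                       // simulate ASP.NET purging the entry
        DataTable b = GetProducts();         // still works: it quietly rebuilt
        Console.WriteLine(b.Rows.Count);             // 1
        Console.WriteLine(ReferenceEquals(a, b));    // False: a fresh copy
    }
}
```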

or 2) taking up precious memory.
"Precious memory" ... you sound as if you are visiting 2008 (where memory is
pennies per MB) from the distant past (where it was hundreds of dollars per
MB).

But I understand your point - you don't want to waste memory needlessly
regardless. This becomes an architecture question that is easily solved.
Just because you may have made it possible in your desktop apps for your
users to edit a list of 50 million rows doesn't mean that was the best
alternative, even though "it worked". Regardless of web vs. desktop app, you
shouldn't be giving your users enormous amounts of data they can edit at
once. So if you properly design your app (same principles apply desktop vs
web apps), then you'll give your users only what they actually need, and
then no more.

If your concerns are for concurrency issues, then you still need to hit the
database anyway (at least in a truly disconnected architecture).

Finally, remember that if you store data in the session or cache, the system
will eventually purge it for you so you won't be leaking memory by using
Session or Cache.

We are not sure how to store a DataTable in
any other way outside of our servers.

You really don't need to, and arguably should not. Again, just because
that's how "it worked" in a desktop app doesn't mean that is how it can or
should work in a Web app. But let's say you don't like what I just wrote: you
can get the same result by storing the data in session state. That is the
"equivalent" of storing it on the client in your desktop app.

In doing so, we leave ourselves
open to large memory requirements.

Disagreed.... at least not in a properly designed application in which
you're giving the users only what they need and no more.
Furthermore, most web pages do not
really support multiple changes per transaction.

Right - so this *decreases* your memory requirements per session. If you
have just a few values per page and don't want to take up *any* server-side
resources, then you can store the values in the ViewState.
In other words, when
the user submits a form, they are ready to move forward. All the
changes are submitted for one entity and that is what gets sent to the
database.

Okay - so you just described exactly why you don't have "huge memory
requirements" on the server. Cache the DataTable, do your round tripping to
the client, save your transactions, and then you can either clear the cache
programmatically or you can let ASP.NET do it when it detects that resources
are at risk of running low.
The ultimate would be to emulate Windows Forms as much as possible.

This can be done. But it requires you to think more carefully about what
Windows Forms is actually doing behind the scenes and where you'd emulate
that server-side (or in ViewState).
When a field is changed on the client, the change will modify a client-
side copy of the DataTable.

In your thinking, then, just replace "client-side copy" with
"session-state-stored copy".
When the user saves, that DataTable is
then sent back to the web server to be committed to the database.

In ASP.NET, the page goes to the server, your logic then updates the
DataTable... which is passed to the DataAdapter's Update method...
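That flow is easy to miss in prose, so here is a hedged sketch of the DataTable side of it. The actual adapter call appears only in a comment, because it needs a live database; everything else runs standalone and shows the change tracking the adapter relies on.

```csharp
using System;
using System.Data;

class UpdateFlowSketch
{
    static void Main()
    {
        var t = new DataTable("Customers");
        t.Columns.Add("Id", typeof(int));
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add(1, "Alice");
        t.AcceptChanges(); // pretend this state came from the database

        // Postback: the page's logic applies the user's edit to the table.
        t.Rows[0]["Name"] = "Alicia";

        // The table tracks its own changes; this is what an adapter consumes.
        DataTable changes = t.GetChanges(DataRowState.Modified);
        Console.WriteLine(changes.Rows.Count);   // 1

        // With a real connection you would now hand the table to an adapter,
        // e.g. a SqlDataAdapter (typically paired with a SqlCommandBuilder to
        // generate the UPDATE statements), whose Update(t) call writes the
        // changes and then calls AcceptChanges for you.
        t.AcceptChanges();
        Console.WriteLine(t.GetChanges() == null);  // True: nothing pending
    }
}
```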
However, as far as I know, this isn't the intention and so isn't easy
to implement.

Not necessarily true.
I am looking for someone to give me a reason to use a DataTable in an
ASP .NET application.

This is a guess, and I may be wrong, but it seems like you are expecting a
DataTable to be a direct analog of the old-school DataSet. It isn't, and
DataTables take up much less memory than you apparently think.
I know they make it easy to do sorting and
filtering.
No, they don't - not by themselves. You need a DataView for that.
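For the record, the DataView part is purely in-memory work. A small self-contained sketch (the table and values are invented for illustration):

```csharp
using System;
using System.Data;

class DataViewSketch
{
    static void Main()
    {
        var t = new DataTable("Orders");
        t.Columns.Add("Id", typeof(int));
        t.Columns.Add("Total", typeof(decimal));
        t.Rows.Add(1, 50m);
        t.Rows.Add(2, 250m);
        t.Rows.Add(3, 120m);

        // Sorting and filtering happen entirely in memory -
        // no second trip to the database.
        var view = new DataView(t)
        {
            RowFilter = "Total > 100",
            Sort = "Total DESC"
        };

        Console.WriteLine(view.Count);     // 2 rows pass the filter
        Console.WriteLine(view[0]["Id"]);  // 2: the 250 order sorts first
    }
}
```

Every DataTable also exposes a `DefaultView` property, which is what data-bound grid controls typically sort and filter through.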
I have no idea how this works. I am not sure whether it
hits the database again.

No - ADO.NET DataTables (and DataSets) are inherently disconnected from
their data source. They in fact have no idea of where their data came from
and cannot possibly care.
There is such a large community of developers
out there who use DataTables almost exclusively, so I hope someone can
elighten us.

For more on the specific use of DataTables in ASP.NET web applications, you
can post to:
microsoft.public.dotnet.framework.aspnet
Thanks,
Travis

Finally, consider that your question about "DataTables" really could be
asked of *any* variable... "how do you keep a variable's value between
postbacks" in any Web application is the underlying question. The answer is
the same for a DataTable as it is for any other variable. Your decisions
will center around the best place to store it. Your alternatives are
basically (1) ViewState; (2) Session state; (3) system Cache; and (4)
application state.

-HTH
 
jehugaleahsa

Responses inline:


They are considered as "safer" than DataReaders in many cases. So where you
really need read-only access to the data, some developers prefer DataTables
over DataReaders because if something goes wrong with a DataReader, then it
is possible that the DataReader's connection to the database would not be
closed - thereby leaking connections to the database. DataTables can hang
around in memory (server-side, of course) with no connection to the
database. So if something goes awry while reading the table, no connections
to worry about. But this is just one reason to use DataTables in ASP.NET.


You can store them in Session state - if the DataTable content is relevant
to one particular session of the ASP.NET application. Or you can store the
DataTable in the system Cache if the data table content is not related to
any one particular session. An alternative to storing data in the Cache is
to store it in Application state, but that is discouraged for a number of
reasons that I won't enumerate here.

Imagine you are pulling out about 20 DataTables per user in an
application. Even if I pull out one record per user, that is 20
records overall. If I have 30 users, then I have 20*30 records. More
realistically, I will have a lot more records active per session. If
our applications have any major loads, it will be a lot more than 30
users. When a good deal of memory is consumed, the web server starts
paging a lot more. We might even need to get a database server to
handle our session data. For small web applications that don't get
many hits, it is fine to use DataTables willy nilly. When you write
software that is meant to work for enterprise applications that may
have millions of hits a day, they are a huge risk.

Remember, "the system" (ASP.NET itself) will purge data from the Cache if
system resources are becoming scarce, so your logic can't assume that the
DataTable is still in the Cache.


"Precious memory" ... you sound as if you are visiting 2008 (where memory is
pennies per MB) from the distant past (where it was hundreds of dollars per
MB).

Look at my reasoning above. I don't want to write software assuming
that it will only be used in an environment with 100 users. There is
a big difference between being able to afford memory and having a system
capable of supporting it. There is little point in 10TB of hard drive
space if the operating system can only manage 4GB page files. No
operating system I know of can support more than a few GB of RAM. In
other words, a database would have to be the back-end and that can be
costly in terms of licensing and support.

All I am saying is that in large-scale web applications, the better
you can manage to reduce memory requirements, the shorter you can keep
memory alive on the server, the better you will be. We have a rule
where I work - "Don't store anything you can retrieve otherwise." In
other words, if you have a primary key, store it in the session, not
the record itself. If you have a parent record foreign key, store it,
not the child records. And so forth...
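The "store the key, not the record" rule is easy to show concretely. A minimal sketch; the Dictionary again stands in for the per-user Session store, and `LoadCustomerName` is a made-up placeholder for a one-row database query:

```csharp
using System;
using System.Collections.Generic;

static class KeyNotRecordSketch
{
    // Stand-in for the per-user Session store.
    static readonly Dictionary<string, object> Session = new Dictionary<string, object>();

    // Made-up one-row lookup by primary key.
    static string LoadCustomerName(int id) => id == 42 ? "Alice" : "unknown";

    static void Main()
    {
        // Keep only the small integer key per user...
        Session["CustomerId"] = 42;

        // ...and on a later postback, re-fetch just what the page needs.
        int id = (int)Session["CustomerId"];
        Console.WriteLine(LoadCustomerName(id));  // Alice
    }
}
```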
But I understand your point - you don't want to waste memory needlessly
regardless. This becomes an architecture question that is easily solved.
Just because you may have made it possible in your desktop apps for your
users to edit a list of 50 million rows doesn't mean that was the best
alternative, even though "it worked". Regardless of web vs. desktop app, you
shouldn't be giving your users enormous amounts of data they can edit at
once. So if you properly design your app (same principles apply desktop vs
web apps), then you'll give your users only what they actually need, and
then no more.

I agree with what you are saying. There will always be drill downs. In
90% of our cases, the user only cares about a subset of the data. It
just turns out that sometimes a subset is still rather bulky. Even
with paging you run into issues. We have found that the paging feature
is hard to use correctly. It is usually easier for us to just re-hit
the database each time. This means more records are retrieved than
needed, which is bad but hard to avoid.
If your concerns are for concurrency issues, then you still need to hit the
database anyway (at least in a truly disconnected architecture).

Finally, remember that if you store data in the session or cache, the system
will eventually purge it for you so you won't be leaking memory by using
Session or Cache.


You really don't need to, and arguably should not. Again, just because
that's how "it worked" in a desktop doesn't mean that is how it can or
should in a Web app. But let's say you don't like what I just wrote, you can
get the same result by storing the data in session state. That's what would
be the "equivalent" of storing it in your desktop app, on the client.

The difference with client applications is that they are client
applications. They don't have the memory issues that a centralized web
application has. It is okay to store something on the client because
it is their memory and it is cheap.
Disagreed.... at least not in a properly designed application in which
you're giving the users only what they need and no more.

Think BIG. We have found that storing an integer for one or more
records tends to cause us no issues. However, when we start storing
more than that we see huge performance hits during high load hours.
Hence our saying above. In other words, doing a little more processing
and hitting the database more often usually improves our performance
(depending on the situation). For read-only data we pretend it never
existed.
Right - so this *decreases* your memory requirements per session. If you
have just a few values per page and don't want to take up *any* server-side
resources, then you can store the values in the ViewState.

The ViewState? A bulky ViewState can result in enormous pages that can
take literally minutes to transfer over a 56Kbps connection. Even if you go
out to a lot of forum sites today, you can see page delays even on cable.
Web page designers take for granted that their pages take a lot of memory.
Fancy CSS and gradients can actually annoy users with low bandwidth.
Commerce sites can lose a lot of business if the customer has to wait
more than a few seconds! Imagine making a high load web server generate
web pages that each carry an extra 1KB of ViewState. 1KB is 8Kb, so that
is roughly an extra seventh of a second on a 56Kbps link, for every page,
for every user. If you have an enormous number of users with slow bandwidth,
that 1KB sits on your network waiting to get sent. If you kill the
connection, you lose business. If you don't kill it, it kills your
bandwidth and your customers can't get in or out anyway. The trick to
high load web pages is to limit what gets passed around as much as
possible. We find that it is okay to make pages pretty if the images
are cached on the client and are optimized as much as possible. We are
okay with CSS, 99% of the time. We are okay with ViewState for storing
editable control data.
Okay - so you just described exactly why you don't have "huge memory
requirements" on the server. Cache the DataTable, do your round tripping to
the client, save your transactions, and then you can either clear the cache
programmatically or you can let ASP.NET do it when it detects that resources
are at risk of running low.


This can be done. But it requires you to think more carefully about what
Windows Forms is actually doing behind the scenes and where you'd emulate
that server-side (or in ViewState).


In your thinking, then, just replace "client-side copy" with
"session-state-stored copy".

This is not the same thing. Client memory, server memory. Big
difference.
In ASP.NET, the page goes to the server, your logic then updates the
DataTable... which is passed to the DataAdapter's Update method...


Not necessarily true.


This is a guess, and I may be wrong, but it seems like you are expecting a
DataTable to be a direct analog of the old-school DataSet. It isn't, and
DataTables take up much less memory than you apparently think.

Never worked with the old-school DataSet.
No, they don't - not by themselves. You need a DataView for that.
Obviously.


No - ADO.NET DataTables (and DataSets) are inherently disconnected from
their data source. They in fact have no idea of where their data came from
and cannot possibly care.

Sorry, I didn't mean database, I meant web server.
For more on the specific use of DataTables in ASP.NET web applications, you
can post to:
microsoft.public.dotnet.framework.aspnet


Finally, consider that your question about "DataTables" really could be
asked of *any* variable... "how do you keep a variable's value between
postbacks" in any Web application is the underlying question. The answer is
the same for a DataTable as it is for any other variable. Your decisions
will center around the best place to store it. Your alternatives are
basically (1) ViewState; (2) Session state; (3) system Cache; and (4)
application state.

-HTH

You forgot another option (5) not at all. I don't think these are the
same question. There are definite differences between storing an
integer and storing a 500B DataTable. I guess I'm pulling from my
experience making high load web applications. I have also made low
load applications. Unfortunately, once you start thinking "high load"
it is hard to ever go back.
 
Cor Ligthert[MVP]

And what is your alternative? A DataTable is just a set of references to
the objects held in its DataRows' items.

Do you have a cheaper way to make the data persistent?

Cor

<[email protected]> wrote in message:
 
Arne Vajhøj

Imagine you are pulling out about 20 DataTables per user in an
application. Even if I pull out one record per user, that is 20
records overall. If I have 30 users, then I have 20*30 records. More
realistically, I will have a lot more records active per session. If
our applications have any major loads, it will be a lot more than 30
users. When a good deal of memory is consumed, the web server starts
paging a lot more. We might even need to get a database server to
handle our session data. For small web applications that don't get
many hits, it is fine to use DataTables willy nilly. When you write
software that is meant to work for enterprise applications that may
have millions of hits a day, they are a huge risk.
You forgot another option (5) not at all. I don't think these are the
same question. There are definite differences between storing an
integer and storing a 500B DataTable. I guess I'm pulling from my
experience making high load web applications. I have also made low
load applications. Unfortunately, once you start thinking "high load"
it is hard to ever go back.

There is a small company that stores a bit of data and has a few
hits per day that allegedly stores all data in memory.

Its name is Google.
Look at my reasoning above. I don't want to write software assuming
that it will only be used in an environment with a 100 users. There is
a big difference being able to afford memory and having a system
capable of supporting it. There is little point in 10TB of hard drive
space if the operating system can only manage 4GB page files. No
operating system I know of can support more than a few GB of RAM. In
other words, a database would have to be the back-end and that can be
costly in terms of licencing and support.

"a few GB" ??

A p595 can take 4TB.

OK - that will not run ASP.NET, but you can get a SuperDome with 1 TB
that will run Windows.
All I am saying is that in large-scale web applications, the better
you can manage to reduce memory requirements, the shorter you can keep
memory alive on the server, the better you will be. We have a rule
where I work - "Don't store anything you can retrieve otherwise." In
other words, if you have a primary key, store it in the session, not
the record itself. If you have a parent record foreign key, store it,
not the child records. And so forth...

Others find that adding more memory and caching provides much more
bang for the buck.

Obviously the usage pattern has some impact on the optimal decision.

Arne
 
