Out of memory exception when manually building a table


Guest

Hi,
I am manipulating a set of tables in memory, without any backend database.
I am building a join table from three other tables, and I have no problems until I delete rows.
At that point, when I attempt to rebuild the join table, around 500K of RAM is allocated for every row added and nothing is ever garbage collected. However, if I call "AcceptChanges" on my DataSet, the problem goes away?!?
Again, I have no problems when inserting or updating rows; the join table is built without excess RAM being used.
It's only when I delete a row that RAM is suddenly allocated until I run out.

Has anyone experienced this problem?
If I accept changes the problem goes away, but I would prefer not to call AcceptChanges, since the user has not actually accepted the changes yet in our application.
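
To make it concrete, here is a minimal sketch of the pattern (hypothetical table and column names, not my real schema):

using System;
using System.Data;

class Repro
{
    static void Main()
    {
        // Hypothetical stand-in for one of my in-memory tables.
        DataTable history = new DataTable("History");
        history.Columns.Add("ID", typeof(int));
        history.Columns.Add("MasterID", typeof(int));
        for (int i = 0; i < 1000; i++)
            history.Rows.Add(new object[] { i, i % 10 });
        history.AcceptChanges(); // start from a clean state

        // Step 1: logically delete some rows, without AcceptChanges.
        foreach (DataRow row in history.Select("MasterID = 3"))
            row.Delete(); // rows stay in the table, marked Deleted

        // Step 2: rebuild the join table with repeated Select calls.
        // This is where memory climbs for me; calling
        // history.AcceptChanges() between the steps makes it stop.
        for (int id = 0; id < 1000; id++)
        {
            DataRow[] match = history.Select(string.Format("ID = {0}", id));
            // ... values from match[0] would be copied into the join table ...
        }
    }
}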

Thanks,
Eric.
 

William Ryan eMVP

Hi Eric:

I haven't seen this problem per se, but remember that calling Delete doesn't
get rid of the row. The following assertion would pass:

int i = myDataTable.Rows.Count;

// Mark every row deleted (assuming the rows aren't in the Added state);
// nothing is physically removed yet.
for (int x = 0; x < i; x++)
{
    myDataTable.Rows[x].Delete();
}
Debug.Assert(i == myDataTable.Rows.Count); // Would pass

Now, call AcceptChanges....

myDataTable.AcceptChanges();

Debug.Assert(i == myDataTable.Rows.Count); // Would fail

So, if the DataTable isn't mapped back to a back-end DB, calling Delete
doesn't really have much of a purpose... you can use Remove, which will
physically get rid of the rows. AcceptChanges merely changes the RowStates of
the rows, and RowState is primarily of benefit within the context of updating
a DB (although that's not to say there aren't other applications).

I suspect that somewhere along the road the "rebuilding" is leaving a bunch
of extra rows in place, since Delete isn't going to remove them. Another
alternative is to call AcceptChanges on the individual rows as you logically
'delete' them, so you don't affect anything else, only the specific row. You
can call AcceptChanges on a row as well as on the table, so it may make more
sense to do it only on the row.
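
For instance, something along these lines (a throwaway table just to show the two options):

DataTable t = new DataTable();
t.Columns.Add("ID", typeof(int));
t.Rows.Add(new object[] { 1 });
t.Rows.Add(new object[] { 2 });
t.AcceptChanges();

// Option 1: physically remove the row in one shot. This is
// equivalent to calling Delete() and then AcceptChanges() on it.
t.Rows.Remove(t.Rows[0]);

// Option 2: logically delete, then accept the change on just that
// row, leaving every other row's RowState alone.
DataRow r = t.Rows[0];
r.Delete();
r.AcceptChanges(); // the Deleted row is physically gone now

Debug.Assert(t.Rows.Count == 0);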

It'd be helpful to see the code you are working with though, as that might
provide more insight into the problem.

HTH,

Bill

www.devbuzz.com
www.knowdotnet.com

 

William Ryan eMVP

Hi Eric:

Can you show me the Delete code that's causing the problem? I run Delete
all the time, all over the place, and haven't seen this. I'm just
wondering if perhaps it's the Selects and Finds rather than the Deletes;
that's a lot of references being built.
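
One other thought: if I remember right, every Select call with a filter has
to build a temporary index to evaluate it, which adds up fast in a loop.
Setting a primary key and using Rows.Find instead goes against the permanent
key index. A rough sketch (made-up table, adapt to yours):

DataTable master = new DataTable("Master");
master.Columns.Add("ID", typeof(int));
master.Columns.Add("Name", typeof(string));
// The primary-key index is built once and maintained incrementally.
master.PrimaryKey = new DataColumn[] { master.Columns["ID"] };
master.Rows.Add(new object[] { 1, "first" });

// Keyed lookup against the primary-key index; no per-call index build.
DataRow hit = master.Rows.Find(1);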

Let me know.

Bill

www.devbuzz.com
www.knowdotnet.com

Eric said:
Thanks for your reply, Ryan.

You are right that calling Delete() only marks the row as deleted internally.
That's what I wanted, until I ran into performance issues and subsequently an
out-of-memory exception (with 1 GB of RAM).
It puzzles me that performance is good when inserting and updating, but once
a row has been deleted, memory is allocated for each row added. That's not how
it behaves with updates and inserts. Below I have included the piece of code
that builds the join table.
I was wondering if it might be a bug in the ADO.NET code...
Anyway, I appreciate anyone's comments. I have a workaround with the expected
performance; it's just a little annoying that I have to treat row deletes differently.

Thanks,
Eric.

DataRow newRow;
_DetailViewTable.Table.Clear();
DataRow[] tableONERows = _TableONE.Table.Select(filterExpression, sortExpression);
foreach (DataRow row in tableONERows)
{
    newRow = _DetailViewTable.Table.NewRow();

    // Look up the matching master record and copy its ID.
    int masterID = (int)row[(int)TableONE.ColumnID.MasterID];
    filterExpression = string.Format("ID = {0}", masterID);
    DataRow[] masterRows = _MasterRecordTable.Table.Select(filterExpression);
    newRow[(int)DetailViewTable.ColumnID.ID] = masterRows[0][(int)MasterRecordTable.ColumnID.ID];

    // Look up the matching history record and copy its date.
    int historyID = (int)row[(int)TableONE.ColumnID.HistoryID];
    filterExpression = string.Format("ID = {0}", historyID);
    DataRow[] historyRows = _HistoryTable.Table.Select(filterExpression);
    newRow[(int)DetailViewTable.ColumnID.Date] = historyRows[0][(int)HistoryTable.ColumnID.Date];

    _DetailViewTable.Table.Rows.Add(newRow);
}
 

Guest

Hi Ryan

My delete code is below. The thing is, no matter how many inserts or updates I do, building the join table offers fairly constant performance and little allocation, except when I delete rows using the code below. Then, when the join table is rebuilt, RAM usage slowly climbs from 50 MB to 400 MB... My in-memory database easily takes up 40 MB, but expanding to 400 MB suggests a problem. If I AcceptChanges, the problem goes away.

Again, thanks for your input.
Eric

DataRow[] masterRecords = _MasterTable.Table.Select(filterExpression);
foreach (DataRow row in masterRecords)
{
    // Get all history records for this master record.
    filterExpression = string.Format("MasterID = {0}", row[(int)MasterRecordTable.ColumnID.ID]);
    DataRow[] historyRecords = _HistoryTable.Table.Select(filterExpression);

    // Delete all history records.
    foreach (DataRow historyRow in historyRecords)
    {
        historyRow.Delete();
    }
}

// Delete all master records.
foreach (DataRow masterRow in masterRecords)
{
    masterRow.Delete();
}
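
For completeness, the per-row AcceptChanges variant William suggested would look like this in my delete loop (just a sketch):

// Accept the change on each row right after deleting it, so only that
// row's RowState is touched. The historyRecords array from Select is a
// snapshot, so enumerating it stays safe even though the rows are
// physically removed from the table as we go.
foreach (DataRow historyRow in historyRecords)
{
    historyRow.Delete();
    historyRow.AcceptChanges(); // the Deleted row is physically removed here
}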


 
