1 Gig DataSet Limit Regardless of available physical memory?

Guest

Can someone tell me if there is a limit to the memory size of a dataset
running under framework 1.1 please?

The file size would normally be around 250-300MB, but it balloons by 3-4 times once it's in a dataset - this is fine, I understand this.

What I don't understand is why it just goes pop when it reaches 1 gig and refuses to do any more, regardless of the fact that all 3 gig of available physical memory is allocated to this virtual machine.
 
William (Bill) Vaughn

To avoid this problem you have to set the "Okay, I don't care about
performance" property to "True".

Ah, ADO.NET (and all of the data access interfaces) make terrible bulk
operations managers. Why are you bringing in so much data to the client?
Remember, ADO.NET is a "query" interface not a delivery truck.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Marina Levit [MVP]

Haha! I like that one.

William (Bill) Vaughn said:
To avoid this problem you have to set the "Okay, I don't care about
performance" property to "True".

Ah, ADO.NET (and all of the data access interfaces) make terrible bulk
operations managers. Why are you bringing in so much data to the client?
Remember, ADO.NET is a "query" interface not a delivery truck.

 
Christian Schmidt

Hi William,
Ah, ADO.NET (and all of the data access interfaces) make terrible bulk
operations managers. Why are you bringing in so much data to the client?
Remember, ADO.NET is a "query" interface not a delivery truck.

To stay with your metaphor: what could be used as a delivery truck? Piping into bulk copy?

Cheers,
 
Christian Schmidt

Sahil said:
BCP
SqlBulkCopy
DTS/SSIS

Cool! I didn't know about SQL Server's "Bulk Insert <table> From <file>". Does anyone know how to connect a client-side stream to a bulk insert?
 
William (Bill) Vaughn

Yup, Sahil is right. That's where I was heading.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
William (Bill) Vaughn

Anything that can be exposed with an ODBC, OLE DB or .NET data provider can
be passed on (at high speed) to SQL Server via BCP, SqlClient.SqlBulkCopy or
SSIS/DTS. This includes flat files, delimited files and database extrusions
of all kinds.
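
To make that concrete, here is a minimal sketch of feeding SqlClient.SqlBulkCopy from an ODBC data reader. It assumes .NET Framework 2.0 or later (SqlBulkCopy isn't in 1.1), and the DSN, connection string and table names are placeholders, not anything from this thread:

```vbnet
Imports System.Data.Odbc
Imports System.Data.SqlClient

Module BulkLoadSketch
    Sub Main()
        Using source As New OdbcConnection("DSN=AnySourceDsn")
            Using destination As New SqlConnection("Data Source=.;Initial Catalog=Staging;Integrated Security=True")
                source.Open()
                destination.Open()

                Using cmd As New OdbcCommand("SELECT * FROM SourceTable", source)
                    Using reader As OdbcDataReader = cmd.ExecuteReader()
                        Using bulk As New SqlBulkCopy(destination)
                            bulk.DestinationTableName = "dbo.TargetTable"
                            bulk.BatchSize = 5000        ' send rows to the server in chunks
                            bulk.BulkCopyTimeout = 0     ' no timeout for a long-running load
                            bulk.WriteToServer(reader)   ' streams rows; nothing is buffered in a DataSet
                        End Using
                    End Using
                End Using
            End Using
        End Using
    End Sub
End Module
```

The reader never materializes the result set; rows flow from the ODBC source straight into the bulk copy pipeline.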

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Christian Schmidt

Hi William,
Anything that can be exposed with an ODBC, OLE DB or .NET data provider can
be passed on (at high speed) to SQL Server via BCP, SqlClient.SqlBulkCopy or
SSIS/DTS. This includes flat files, delimited files and database extrusions
of all kinds.

So SqlDataReader is the truck for reading and SqlBulkCopy for writing?
 
William (Bill) Vaughn

The SqlDataReader is simply the pipe connecting your application to the
data--any data. In this case it's the 6" hose that fills the SqlBulkCopy
tanker truck (to belabor the analogy). When you use the UPDATE/INSERT
approach, you're loading your VW Beetle with a single row and sending it
over the mountains to be delivered at the server in Spokane. It has to go
there and come back with the results of the operation before you can send
another row (to beat the analogy to death).
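
For comparison, the "VW Beetle" pattern looks something like the sketch below: one parameterized INSERT and one network round trip per row. The table, column and parameter names are made up purely for illustration:

```vbnet
Imports System.Data
Imports System.Data.SqlClient
Imports System.Collections.Generic

Module RowAtATimeSketch
    ' Row-at-a-time loading: each ExecuteNonQuery is a full round trip to the server.
    Sub InsertRows(ByVal names As IEnumerable(Of String), ByVal connectionString As String)
        Using cn As New SqlConnection(connectionString)
            cn.Open()
            Using cmd As New SqlCommand("INSERT INTO dbo.TargetTable (Name) VALUES (@Name)", cn)
                cmd.Parameters.Add("@Name", SqlDbType.VarChar, 50)
                For Each name As String In names
                    cmd.Parameters("@Name").Value = name
                    cmd.ExecuteNonQuery()   ' one trip over the mountains per row
                Next
            End Using
        End Using
    End Sub
End Module
```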

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Christian Schmidt

Hi William,
The SqlDataReader is simply the pipe connecting your application to the
data--any data. In this case it's the 6" hose that fills the SqlBulkCopy
tanker truck (to belabor the analogy). When you use the UPDATE/INSERT
approach, you're loading your VW Beetle with a single row and sending it
over the mountains to be delivered at the server in Spokane. It has to go
there and come back with the results of the operation before you can send
another row (to beat the analogy to death).

Think I got it ;-)

Thanks a lot!
 
irashkin

Ok, I totally agree that a dataset is not a database and should not be
treated as such. And so it is usually not a good idea to load a 1 GB
database into memory.
However, I think my situation is different, although if someone can
think of a more elegant solution, I'd be happy to hear it - maybe I'm
just overlooking something obvious.
My Windows application (WinForms, VB.NET) allows a user to point at any
ODBC data source, create a query and then operate on the results. Because
the data source might be anything from SQL Server to text to Excel to
Sybase to Oracle and so on, I can't write any dynamic SQL in code, at
least not anything more complex than a simple SELECT. So if the user
wants to operate on 200,000 rows of a table and that works out to a 700
MB DataTable, there's not much I can do besides handle it.
Once I have the table, I need to loop through each row and do some stuff
with it - doesn't really matter what.
One option is to make them create two queries - one a simple table of
primary key values, and the other a query which defines the rest of the
fields needed - and the second can then be called as a new query one key
at a time using "where [key] = [value from first table]". But this is not
only a lot of unnecessary db calls, it also has three pitfalls.
1. If the primary key is compound, it is very hard to write generic
dynamic SQL for.
2. If the user aliases their primary key (because it is compound or
for whatever reason), there could be problems creating the dynamic query
(ok, this is worth throwing an error back at the user for…).
3. If the user has any other SQL after their WHERE clause in the dynamic
query, such as HAVING, GROUP BY, etc., it gets ugly – it is
theoretically possible to look for all these keywords and dissect the
query, but not something I want to do!
I could do this in passes if I could limit the number of rows returned
in each pass, but again it's a problem of generic SQL compatibility –
TOP X versus SET ROWCOUNT X, etc.

So does anyone have ideas about how to handle this? Currently, if the
query is too big I end up with a System.OutOfMemoryException.
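
One way to sidestep the huge DataTable entirely is to read the user's query through a forward-only OdbcDataReader and handle each row as it streams past, so only the current row is ever in memory. A rough sketch, assuming the per-row work doesn't need random access to other rows; the connection string, query and helper name are placeholders, and the Using blocks assume VB 2005 or later:

```vbnet
Imports System.Data
Imports System.Data.Odbc

Module StreamRowsSketch
    ' Process the query through a forward-only reader instead of filling a DataTable,
    ' so only the current row is held in memory at any time.
    Sub ProcessQuery(ByVal connectionString As String, ByVal userQuery As String)
        Using cn As New OdbcConnection(connectionString)
            cn.Open()
            Using cmd As New OdbcCommand(userQuery, cn)
                Using reader As OdbcDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        DoSomethingWithRow(reader)   ' reader("ColumnName") exposes the current row's values
                    End While
                End Using
            End Using
        End Using
    End Sub

    Sub DoSomethingWithRow(ByVal row As IDataRecord)
        ' Placeholder for whatever per-row processing the application needs.
    End Sub
End Module
```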
 
