Creative suggestions wanted

nlbauers

Hi all,

I am looking for suggestions for replacing a very large 2D double
array in my application with some kind of out-of-RAM storage. The
data stored in the array will soon grow so large that it will hurt
application performance by consuming most of the system RAM. We
expect the existing design will soon require an array of over 800 MB,
and future requirements will need even more. The big issue we have is
access time: the solution should minimize the impact on application
execution. Currently the array is written once at application
startup; it is then read one element at a time (meaning elements are
picked out here and there, not looped over in large portions) fairly
often during normal execution, and the values are updated
infrequently. So single-element access is what we are most concerned
about dragging down application performance. Here are the current
suggestions:

1) Replace the in-memory array with a database. The plus side is
that in-memory size won't be an issue. The downside is that accessing
single records/elements (a fairly common operation) would be two to
three orders of magnitude more expensive; my testing indicates roughly
500x slower to access a single database record.

2) Replace the in-memory array with a binary file. My testing
indicates that single-element file access would be slightly faster
than a database query, but still an order of magnitude slower than
in-memory array access.

3) A buffered hybrid: store the most commonly accessed elements in
memory and pay the performance penalty when retrieving those that are
used less often. This is the solution we are leaning toward.

Anyone else have a clever suggestion for solving this problem? It
seems to me this problem must have been solved before.

The solution will be implemented in C# or C++ if it matters.

Neil
 
Ignacio Machin (.NET/C# MVP)

Hi,


nlbauers said:
1) Replace the in-memory array with a database. The plus side is
that in-memory size won't be an issue. The downside is that accessing
single records/elements (a fairly common operation) would be two to
three orders of magnitude more expensive; my testing indicates roughly
500x slower to access a single database record.

You can cache part of the data you work with.

Additionally, you could simply move the processing to the database; in
SQL Server 2005 you can run .NET code inside the database itself.
 
nlbauers

Thank you very much for the suggestions. It looks like memory-mapped
files are going to work for us.

Neil
 
