Memory Limit for Visual Studio 2005???

Willy Denoyette [MVP]

Peter Olcott said:
Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB
List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
    Temp.Add(N);

Mine now bombs out at just short of half my memory. I am guessing that it runs out of
actual RAM when it doubles the size on the next reallocation. Unlike the native code
compiler, the managed code compiler must have actual RAM; virtual memory will not work.

0x3FFFFFFF is a count of 1G entries!!!! Moreover, your List holds uints, at 4 bytes per
entry; that means that in your sample you are trying to allocate 4GB! It's obvious that
this will fail.

And as I said in another reply, you have to pre-allocate the List, otherwise you'll need
much more free CONTIGUOUS memory than 1GB.
A list that isn't pre-allocated starts with a 32-byte array as back-end store; this array
is extended each time it overflows. Extending means reserving another blob from the heap
with a new size = (original size * 2).
The final result is that a 512MB List will extend to 1GB when it expands, but that also
means that a new blob of 1GB must be found before the old block can be copied into the
new blob (List).

This should probably work...
List<byte> Temp = new List<byte>(1024*1024*1024);

Willy.
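
A minimal sketch of the growth behaviour described above: a List<T> built without a capacity
hint re-grows and copies its backing array repeatedly, while a pre-allocated one reserves the
store once. The element count here is arbitrary, and the exact starting capacity and growth
factor are runtime implementation details, so the printed values will vary:

// Sketch: watch List<T>.Capacity change as elements are added, then fill a
// pre-allocated list of the same size, which never has to re-grow.
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        const int Count = 1000000;                  // arbitrary element count

        List<uint> growing = new List<uint>();      // no capacity hint
        int lastCapacity = growing.Capacity;
        for (uint n = 0; n < Count; n++)
        {
            growing.Add(n);
            if (growing.Capacity != lastCapacity)
            {
                // Each re-grow allocates a new, larger backing array and
                // copies the old contents into it.
                Console.WriteLine("Capacity grew: {0} -> {1}",
                    lastCapacity, growing.Capacity);
                lastCapacity = growing.Capacity;
            }
        }

        // Pre-allocating reserves the whole backing array up front, so no
        // re-grow/copy cycles happen while filling it.
        List<uint> preallocated = new List<uint>(Count);
        for (uint n = 0; n < Count; n++)
            preallocated.Add(n);

        Console.WriteLine("Final capacities: {0} vs {1}",
            growing.Capacity, preallocated.Capacity);
    }
}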
 
Willy Denoyette [MVP]

Peter Olcott said:
Yes, there is, just recently. This saves old C++ programmers like me a lot of learning
curve in switching to .NET.

No, there is not. The managed "std"-like templates are currently in a "closed" pre-beta
stage and are scheduled for public release after the ORCAS release (another wait of at
least one year), and they will not belong to the std namespace; that is, they will not be
named std::vector.

Willy.
 
Peter Olcott

Willy Denoyette said:
0x3FFFFFFF is a count of 1G entries!!!! Moreover, your List holds uints, at 4 bytes per
entry; that means that in your sample you are trying to allocate 4GB! It's obvious that
this will fail.

I forgot to divide by four; make that 0xFFFFFFF; // 256M uints = 1GB
This should work on a system with 2GB of RAM.
And as I said in another reply, you have to pre-allocate the List, otherwise
you'll need much more free CONTIGUOUS memory than 1GB.

It should not be much more; it should be exactly twice as much. Native code can
handle this using virtual memory. Is it that .NET is not as good at using
virtual memory? By using virtual memory, the reallocation doubling swaps out to
disk and is thus able to use all of actual RAM.
A list that isn't pre-allocated starts with a 32-byte array as back-end
store; this array is extended each time it overflows. Extending means
reserving another blob from the heap with a new size = (original size * 2).
The final result is that a 512MB List will extend to 1GB when it expands, but
that also means that a new blob of 1GB must be found before the old block can
be copied into the new blob (List).

This should probably work...
List<byte> Temp = new List<byte>(1024*1024*1024);

I am specifically testing the difficult (and common) case where one does not
know how much memory is needed in advance. The performance of this aspect of
.NET determines whether or not .NET is currently feasible for my application.
 
Peter Olcott

Willy Denoyette said:
No, there is not. The managed "std"-like templates are currently in a "closed"
pre-beta stage and are scheduled for public release after the ORCAS release
(another wait of at least one year), and they will not belong to the std
namespace; that is, they will not be named std::vector.

Willy.

http://blogs.msdn.com/vcblog/archive/2006/09/30/777835.aspx
It looks like you were right, yet I then wonder why the std::vector compiled.
Did it mix native code and managed code together in the same function? (I had a
std::vector and a Generic.List in the same function.)
 
Willy Denoyette [MVP]

Peter Olcott said:
I forgot to divide by four; make that 0xFFFFFFF; // 256M uints = 1GB
This should work on a system with 2GB of RAM.

And it works for me!
It should not be much more; it should be exactly twice as much. Native code can handle this
using virtual memory. Is it that .NET is not as good at using virtual memory? By using
virtual memory, the reallocation doubling swaps out to disk and is thus able to use all of
actual RAM.
No it can't, not for managed code nor for native code, because the 2GB address space per
process must be shared between code and data; the code being the executable module(s) and
all their dependent modules (DLLs), which are loaded when the process starts. The result is
that you don't have a contiguous area of 2GB for the heap to allocate from.
Moreover, as I said before, the DLLs might get loaded at addresses which fragment the
address space in such a way that the largest FREE area of CONTIGUOUS memory is much smaller
than 2GB, possibly smaller than 1GB. Memory allocation patterns may further fragment the
heap in such a way that even trying to allocate a 1MB buffer will throw an OOM. And this
all has nothing to do with .NET; at run-time there is no such thing as a "native" or ".NET"
process. The only difference between .NET and native is that the memory footprint is
somewhat larger at process start-up because of the .NET run-time and its libraries, but
this is less than 10MB and is overhead taken only once.

I am specifically testing the difficult (and common) case where one does not know how much
memory is needed in advance. The performance of this aspect of .NET determines whether or
not .NET is currently feasible for my application.

But you know at least whether you'll need 100KB, 1MB, 500MB or maybe 1GB, don't you?
If you think you'll need > 500MB, just pre-allocate 550 or 600 (whatever) and you are
done, until this gets filled completely and throws an OOM. Note that this is true for the
native std::vector as well; both vector and List are "self-expanding" by "copying", which
means that you should always be prepared to get OOMs when you are allocating such huge
objects in a 32-bit process.

Willy.
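
A minimal sketch of the pre-allocate-and-handle-OOM approach described above. The 600MB
figure is only an example of an application-chosen upper bound, and AllocateBuffer is a
hypothetical helper, not a library API:

// Sketch: reserve the whole backing store once, and deal with the failure in
// one predictable place rather than deep inside a fill loop.
using System;
using System.Collections.Generic;

class PreallocateDemo
{
    static List<byte> AllocateBuffer(int capacityInBytes)
    {
        try
        {
            // Reserves the full backing array up front, so the list never
            // needs to find a second, larger contiguous block while growing.
            return new List<byte>(capacityInBytes);
        }
        catch (OutOfMemoryException)
        {
            // No contiguous free region of that size exists in the process
            // address space; fall back, retry smaller, or fail cleanly here.
            Console.WriteLine("Could not reserve {0} bytes in one block.", capacityInBytes);
            return null;
        }
    }

    static void Main()
    {
        List<byte> buffer = AllocateBuffer(600 * 1024 * 1024);  // illustrative upper bound
        Console.WriteLine(buffer != null ? "Reserved." : "Fell back.");
    }
}

Catching the OutOfMemoryException at the single up-front allocation keeps the failure in one
place instead of having it surface from an arbitrary Add call midway through filling the list.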
 
Willy Denoyette [MVP]

Peter Olcott said:
http://blogs.msdn.com/vcblog/archive/2006/09/30/777835.aspx
It looks like you were right, yet I then wonder why the std::vector compiled. Did it mix
native code and managed code together in the same function? (I had a std::vector and a
Generic.List in the same function.)


Yes, this is because you compiled with the /clr option, which means mixed mode! The
std::vector template will compile as a native class. Try to compile in pure managed mode
(/clr:safe) and it will fail.

Willy.
 
Chris Mullins

Peter Olcott said:
Would you think that 18,000 hours would be enough thought? That's how much
I have into it.

Well, if the best solution you can come up with requires allocating 1GB
chunks, and you're working on a computer that has only 1GB of physical
memory, I would say you're in some trouble.

At the very least, if you're dealing with memory chunks of that size, you
really need to be in x64 or IA64 land working on machines with significantly
more memory.
 
Chris Mullins

That's not just 1 gig.

That's 1G entries * 4 bytes per uint, for a total of 4 gigs. This is never, ever,
going to run on an x86 system.

--
Chris Mullins

Peter Olcott said:
Try this simpler case:
uint SIZE = 0x3FFFFFFF; // 1024 MB
List<uint> Temp = new List<uint>();
for (uint N = 0; N < SIZE; N++)
    Temp.Add(N);

Mine now bombs out at just short of half my memory. I am guessing that it
runs out of actual RAM when it doubles the size on the next reallocation.
Unlike the native code compiler, the managed code compiler must have
actual RAM; virtual memory will not work.

Chris Mullins said:
Peter Olcott Wrote:

PO> It looks like System::Collections::Generic.List throws an
PO> OUT_OF_MEMORY exception whenever memory allocated exceeds 256 MB. I
PO> have 1024 MB on my system so I am not even out of physical RAM, much
PO> less virtual memory.

I was curious if there was a limit there, so I wrote this:

private void button1_Click(object sender, EventArgs e)
{
    int bytesPerArray = 1024 * 32; // 32KB per array. No large object heap interaction.
    long totalBytes = 0;
    List<byte[]> bytes = new List<byte[]>();
    while (true)
    {
        byte[] b = new byte[bytesPerArray];
        bytes.Add(b);
        totalBytes += bytesPerArray;
        System.Diagnostics.Debug.WriteLine("Total: " + totalBytes.ToString());
    }
}

I compiled this as an x86 application, and ran it.

In my output window, the last thing in there was:
Total: 1704427520
A first chance exception of type 'System.OutOfMemoryException'
occurred in WindowsApplication2.exe

This is exactly what I expected to see, as I know that each 32-bit
Windows process gets 4GB of virtual memory space, and that 4GB is split in
half, giving 2GB to user code to play with. The managed heap can
typically (if it's not fragmented) get up to 1.5-1.7 gigabytes before
issues arise.

If I compile this for x64 (or leave it as Any CPU), the limits are much
higher.

My machine does have 4GB of memory on it, but I've seen these exact same
results on machines with far less memory. (I build very big, very
scalable, applications all day long... and running into memory
limitations in 32-bit land was a big problem I had for years).
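
For context on the "no large object heap interaction" comment in the code above: arrays of
roughly 85,000 bytes or more are placed on the Large Object Heap, which the CLR does not
compact, so one huge array is far more sensitive to address-space fragmentation than many
small chunks. A tiny illustration (the chunk sizes are arbitrary; the ~85,000-byte threshold
is the documented CLR value):

class LohThresholdNote
{
    static void Main()
    {
        // Arrays below ~85,000 bytes live on the normal, compactable GC heap;
        // arrays at or above that size go to the Large Object Heap (LOH),
        // which is not compacted, so very large single arrays are much more
        // likely to hit fragmentation than many small chunks.
        byte[] smallChunk = new byte[32 * 1024];    // 32KB: normal heap, as in the test above
        byte[] largeChunk = new byte[1024 * 1024];  // 1MB: Large Object Heap
        System.Console.WriteLine("{0} / {1} bytes allocated",
            smallChunk.Length, largeChunk.Length);
    }
}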
 
Chris Mullins

Peter Olcott said:
I forgot to divide by four; make that 0xFFFFFFF; // 256M uints = 1GB
This should work on a system with 2GB of RAM.

That's still not likely to work. Expecting to be able to grow the heap each
time, in chunks that big, is likely to fail.

When the growth happens and you're at 1.2GB, it's going to try to allocate
2.4GB - obviously it can't get that much memory, and it goes boom.

With your code, I get the OOM at:
? SIZE
268435455

At this point, your algorithm is, I would have to say, deeply flawed. It's
not at all practical to allocate memory in chunks of that size on an x86
system. This isn't even a .Net issue - it's a "your process gets 2GB of
memory. Into that you're loading all your stuff. You get what's left after
.Net has initialized and your DLLs are loaded and jitted" issue.

I can't imagine C or C++ can do this a whole lot better. In fact, from what
I remember of how their heaps work, it's just luck if it works there at all.
 
Chris Mullins

I'm running on WinXP64, which (if I remember right) plays some weird games
with WOW for x86 applications.

I seem to remember that it moves a number of things up into "higher" memory
segments to give more space to user applications. I could be misremembering
things here though, as it's not an area I've paid much attention to.

--
Chris Mullins

Willy Denoyette said:
Chris Mullins said:
Peter Olcott said:
Try and add one gig of Bytes to a List<Byte> and see if it doesn't
abnormally terminate.

Well, I did just that (got to 1.7GB) and it ran fine as an x86
application. I was allocating 32KB byte chunks though.

This is a VERY different use case from allocating a single 1GB buffer. I
tried to allocate a single 1GB array:
private void button2_Click(object sender, EventArgs e)
{
    List<byte[]> bb = new List<byte[]>();
    int GB = 1024 * 1024 * 1024;
    byte[] b = new byte[GB];
    bb.Add(b);
    MessageBox.Show("Allocated 1 GB");
}

... and this immediately threw an OOM when I compiled it under x86.
It did run perfectly (and instantly) when compiled and run under x64.

Irrespective of platform, if you're really allocating memory in chunks
this big, you need to rethink your algorithm.

Even in x64 land, holding onto chunks in the heap this big would be
scary.


Windows Forms needs some native DLLs that are loaded by the OS loader at
addresses which further fragment the process address space, and because
the GC heap allocates from that address space, it cannot allocate a
contiguous area larger than the largest free fragment. The net result is
that even the simplest Windows programs (native and managed) cannot
allocate more than ~1.2 GB, or less, depending on OS version, SP and OS
language version.

Willy.
 
Chris Mullins

Peter Olcott said:
Would you think that 18,000 hours would be enough thought? That's how much
I have into it.

18,000 hours / 8 hours per day = 2250 days. (assuming 8 hours per day - if
you're spending more than that, it's probably detrimental).

There are about 260 working days per year (5 days per week * 52 weeks per
year).

2250 days / 260 days per year == 8.66 years.

That means you've spent 8 hours per day, 5 days per week, for 8.66 years
working on this algorithm. You're either unbelievably driven, or insane. At
this point, I could believe either. <Grin>.

I really, really, still have to think that any algorithm that requires a
contiguous chunk of memory of size 1GB is flawed. I obviously don't have
enough information to make a more informed decision, but it's a HUGE red
flag. It's on par with "My application has 6125 threads running", or with
string sql = "Select * from master";.
 
Willy Denoyette [MVP]

Chris Mullins said:
I'm running on WinXP64, which (if I remember right) plays some weird games with WOW for
x86 applications.

I seem to remember that it moves a number of things up into "higher" memory segments to
give more space to user applications. I could be misremembering things here though, as
it's not an area I've paid much attention to.

Not really; under WOW64 the modules are still loaded as under 32-bit XP (provided no
relocation is needed), that is, they load just below the 2GB address boundary. The only
difference is that under WOW64 you can effectively access the full 4GB address space for
the program (provided it's marked LARGEADDRESSAWARE), with nothing to share with the OS :).

Willy.
 
Peter Olcott

Willy Denoyette said:
And it works for me!

No it can't, not for managed code nor for native code, because the 2GB address
space per process must be shared between code and data; the code being the
executable module(s) and all their dependent modules (DLLs), which are loaded
when the process starts. The result is that you don't have a contiguous area of
2GB for the heap to allocate from.
Moreover, as I said before, the DLLs might get loaded at addresses which
fragment the address space in such a way that the largest FREE area of
CONTIGUOUS memory is much smaller than 2GB, possibly smaller than 1GB. Memory
allocation patterns may further fragment the heap in such a way that even
trying to allocate a 1MB buffer will throw an OOM. And this all has nothing to
do with .NET; at run-time there is no such thing as a "native" or ".NET"
process. The only difference between .NET and native is that the memory
footprint is somewhat larger at process start-up because of the .NET run-time
and its libraries, but this is less than 10MB and is overhead taken only once.


Here are the final results:
Visual C++ 6.0 native code allocated a std::vector 50% larger than the largest
Generic.List that the .NET runtime could handle, and took about 11-fold (1100%)
longer to do this. This would tend to indicate extensive use of virtual memory,
especially when this next benchmark is considered.

Generic.List was only 65% faster than native code std::vector when the amount of
memory allocated was about 1/2 of total system memory. So it looks like the .NET
run-time achieves better performance at the expense of not using the virtual
memory system.
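
For reference, a minimal Stopwatch-based sketch of the kind of C# timing comparison being
discussed: filling a List<uint> with and without pre-allocation. The element count is
illustrative (256MB of payload), and the results depend heavily on machine, framework version
and address-space fragmentation, so this is not a reproduction of the benchmark above:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class ListTiming
{
    const int SIZE = 64 * 1024 * 1024;  // 64M uints = 256MB of payload (illustrative)

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        List<uint> grown = new List<uint>();            // grows by repeated re-allocation
        for (uint n = 0; n < SIZE; n++)
            grown.Add(n);
        sw.Stop();
        Console.WriteLine("Grown as needed: {0} ms", sw.ElapsedMilliseconds);

        grown = null;
        GC.Collect();                                   // release before the second run

        sw = Stopwatch.StartNew();
        List<uint> preallocated = new List<uint>(SIZE); // one up-front allocation
        for (uint n = 0; n < SIZE; n++)
            preallocated.Add(n);
        sw.Stop();
        Console.WriteLine("Pre-allocated:   {0} ms", sw.ElapsedMilliseconds);
    }
}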
 
Peter Olcott

Chris Mullins said:
Well, if the best solution you can come up with requires allocating 1GB
chunks, and you're working on a computer that has only 1GB of physical memory,
I would say you're in some trouble.

At the very least, if you're dealing with memory chunks of that size, you
really need to be in x64 or IA64 land working on machines with significantly
more memory.

It is only in very rare cases that I ever need nearly 1 GB, or more. At least
90% of the time 200 MB should be plenty. When I first designed this system it
had an 8 MB ceiling. Now that RAM is much cheaper, I added much greater
capabilities.
 
Peter Olcott

Chris Mullins said:
18,000 hours / 8 hours per day = 2250 days. (assuming 8 hours per day - if
you're spending more than that, it's probably detrimental).

There are about 260 working days per year (5 days per week * 52 weeks per
year).

2250 days / 260 days per year == 8.66 years.

That means you've spent 8 hours per day, 5 days per week, for 8.66 years
working on this algorithm. You're either unbelievably driven, or insane. At
this point, I could believe either. <Grin>.

Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com
This technology can save businesses billions of dollars every year in reduced
computer-user labor costs. I spent about 1,000 hours (3.5 months of 12-hour days)
trying to design around claim 16 of my patent and discovered that there are no
good alternatives to this technology.
 
Chris Mullins

Peter Olcott said:
Generic.List was only 65% faster than native code std::vector when the
amount of memory allocated was about 1/2 of total system memory. So it
looks like the .NET run-time achieves better performance at the expense of
not using the virtual memory system.

You really need to understand how this stuff works. .Net does use virtual
memory, just like every Win32 application. Using / Not Using Virtual memory
isn't the issue here.

I know I've said it, and Willy has said it, but you're running into
fragmentation issues.

Regardless, the most memory you're EVER going to be able to address on an
x86 system is 2 gigs. Those 2 gigs include all your code, the libraries
you load, whatever initialization your runtime does, etc.

Trying to allocate 1GB of this as a single fragment is NEVER going to work
in a reliable way. It's not a .Net thing, it's an x86 thing.
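
A sketch of how the fragmentation claim can be checked directly in a 32-bit process: walk the
address space with VirtualQuery and report the largest free contiguous region, which is the
hard ceiling on any single allocation (a List backing array, a plain array, or a native
std::vector buffer alike). The struct layout and constants are the standard Win32 ones; the
2GB scan limit assumes a default 32-bit process without /3GB or LARGEADDRESSAWARE:

// Sketch: find the largest free contiguous region in the current process.
using System;
using System.Runtime.InteropServices;

class LargestFreeBlock
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public IntPtr BaseAddress;
        public IntPtr AllocationBase;
        public uint AllocationProtect;
        public IntPtr RegionSize;
        public uint State;
        public uint Protect;
        public uint Type;
    }

    const uint MEM_FREE = 0x10000;

    [DllImport("kernel32.dll")]
    static extern UIntPtr VirtualQuery(IntPtr lpAddress,
        out MEMORY_BASIC_INFORMATION lpBuffer, UIntPtr dwLength);

    static void Main()
    {
        long largestFree = 0;
        long address = 0x10000;            // start above the reserved low 64KB
        MEMORY_BASIC_INFORMATION mbi;
        UIntPtr mbiSize = (UIntPtr)(uint)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION));

        // Walk region by region up to the 2GB boundary of a default 32-bit user space.
        while (address < 0x7FFF0000 &&
               VirtualQuery(new IntPtr(address), out mbi, mbiSize) != UIntPtr.Zero)
        {
            long regionSize = mbi.RegionSize.ToInt64();
            if (mbi.State == MEM_FREE && regionSize > largestFree)
                largestFree = regionSize;
            address = mbi.BaseAddress.ToInt64() + regionSize;
        }

        Console.WriteLine("Largest free contiguous region: {0} MB",
            largestFree / (1024 * 1024));
    }
}

Run in a 32-bit process, this will usually report well under 2GB once the runtime and DLLs
are loaded, which matches the failed 1GB allocations described in this thread even on machines
with plenty of physical RAM.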
 
Peter Olcott

Chris Mullins said:
You really need to understand how this stuff works. .Net does use virtual
memory, just like every Win32 application. Using / Not Using Virtual memory
isn't the issue here.

I know I've said it, and Willy has said it, but you're running into
fragmentation issues.

There is no fragmentation; all of the memory allocation in both programs is in
an identical tight loop. Since the native code can allocate 50% more memory than
the .NET code, and the .NET code is essentially identical to the native code,
the difference must be in the .NET run-time.
Regardless, the most memory you're EVER going to be able to address on an x86
system is 2 gigs. Those 2 gigs include all your code, the libraries you
load, whatever initialization your runtime does, etc.

Native code can't address the whole 4GB address space? In any case 2 GB should
be plenty until the 64-bit memory architecture becomes the norm.
 
Chris Mullins

Peter Olcott said:
Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com

I beat ya to it. :)

I looked through that earlier, when I was trying to figure out what on earth
you were needing to allocate 1GB of memory for.

As an aside, from one business owner to another, you really need to focus on
the message there. I went through quite a bit of the site, and wasn't clear
on how it could save me money. I own a computer software company (and act
[most of the time] as Chief Architect), and we do LOTS of testing for our
software. In that sense, I'm pretty close to the ideal customer. I realize
it helps with testing, and allows testing to be easier, but in terms of what
points of pain it is addressing, I really don't know.

I still don't see the answer to "I need 1 GB in a single array." There's
gotta be a better algorithm you can use - or on problems that really need
it, force people to use x64.
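
One common way around the "1 GB in a single array" requirement, in the spirit of the
32KB-chunk test earlier in the thread, is to spread the data across many modest arrays and
index into them, so no single huge contiguous block is ever requested. A minimal sketch (the
chunk size and element type are arbitrary choices, and this is not a drop-in replacement for
List<T>):

// Sketch of a "chunked" sequence: elements are spread over many 64K-element
// arrays, so no contiguous block larger than 256KB is ever allocated, which
// sidesteps address-space fragmentation.
using System;
using System.Collections.Generic;

class ChunkedList
{
    const int ChunkSize = 64 * 1024;                 // 64K uints = 256KB per chunk
    private readonly List<uint[]> chunks = new List<uint[]>();
    private int count;

    public void Add(uint value)
    {
        if (count == chunks.Count * ChunkSize)
            chunks.Add(new uint[ChunkSize]);         // grow by one small chunk, no copying
        chunks[count / ChunkSize][count % ChunkSize] = value;
        count++;
    }

    public uint this[int index]
    {
        get { return chunks[index / ChunkSize][index % ChunkSize]; }
        set { chunks[index / ChunkSize][index % ChunkSize] = value; }
    }

    public int Count { get { return count; } }

    static void Main()
    {
        ChunkedList list = new ChunkedList();
        for (uint n = 0; n < 10 * 1000 * 1000; n++)  // 10M entries, roughly 40MB total
            list.Add(n);
        Console.WriteLine("Stored {0} values; last = {1}", list.Count, list[list.Count - 1]);
    }
}

The trade-off is an extra division and modulo on every access, and you can no longer hand the
data to APIs that expect one flat array, but growth never requires finding (or copying into)
a single huge contiguous block.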
 
Peter Olcott

Chris Mullins said:
Peter Olcott said:
Here is what I have spent 18,000 hours on since 1999:
www.SeeScreen.com

I beat ya to it. :)

I looked through that earlier, when I was trying to figure out what on earth
you were needing to allocate 1GB of memory for.

As an aside, from one business owner to another, you really need to focus on
the message there. I went through quite a bit of the site, and wasn't clear on
how it could save me money. I own a computer software company (and act [most
of the time] as Chief Architect), and we do LOTS of testing for our software.
In that sense, I'm pretty close to the ideal customer. I realize it helps with
testing, and allows testing to be easier, but in terms of what points of pain
it is addressing, I really don't know.

Although I have a BSBA (business) degree, I have spent my whole professional
career developing software. It took me many hundreds of hours to get the words
as clear and convincing as they are now. It will probably take an expert writer
many more hundreds of hours to get the words clear and convincing enough.

My current plan is to offer a combined GUI scripting language and mouse/keyboard
macro recorder that can be used to place custom, optimized user interfaces on top
of existing systems.
In addition to this product, custom development using it will be provided as a
service.

If it only saves 1/3 of the 75 million U.S. business computer users 5 minutes a
day, and they only make $10.00 an hour, this is worth billions of dollars per
year. Preliminary market studies indicate that saving 1/3 of all business
computer users an average of at least five minutes a day is a reasonable
expectation.

My next big hurdle is to find a marketing partner with a little bit of capital.
 
