Willy Denoyette [MVP]
Peter Olcott said: Here are the final results:
Visual C++ 6.0 native code allocated a std::vector 50% larger than the largest
Generic.List that the .NET runtime could handle, and took about 11-fold (1100%) longer to
do this. This would tend to indicate extensive use of virtual memory, especially when this
next benchmark is considered.
Generic.List was only 65% faster than native code std::vector when the amount of memory
allocated was about 1/2 of total system memory. So it looks like the .NET run-time
achieves better performance at the expense of not using the virtual memory system.
.NET is not some kind of alien; it's just a thin layer on top of Win32. It uses the same
system services as native code compiled with whatever compiler you can use on Windows. The
CLR and the GC are allocating memory from the heap (that is, from virtual memory) through
the same calls as the C runtime library, and do you know why? Because the CLR uses the same
C runtime, and there is no other way to allocate memory in Windows.
As we told you before, the process heap is fragmented from the start of the program. The way
it's fragmented is determined by the modules loaded into the process space, so there might
be a difference between different types of applications. Native C++ console applications
don't have to load the CLR runtime and some of the FCL libraries, which means the heap is
less fragmented than is the case with a C# console program. But a real-world C++ program
also needs to load libraries, and these will fragment the heap just as in the case of
.NET.
As I said (and others too), each time the List (or vector) overflows it must be extended;
please refer to my previous post for exactly what this is all about. To prevent this you
have to pre-allocate the List or vector.
Running the following code won't throw an OOM when run on 32-bit Windows XP.
static void Main()
{
    List<byte> bList = new List<byte>(1600000000); // 1.600.000.000 bytes
    for (int i = 0; i < bList.Capacity; i++)
        bList.Add(12);
}
while this will throw...
bList = new List<byte>();
for (int i = 0; i < 600000000; i++) // 600.000.000 bytes
    bList.Add(12);
but 512.000.000 bytes will work...
Now back to C++, this will throw.
#include <vector>
#include <iostream>
#include <new>
int main()
{
    std::vector<unsigned char> *bList = new std::vector<unsigned char>;
    try {
        for (int i = 0; i < 700000000; i++) // 700.000.000 bytes
            bList->push_back(12);
    }
    catch (const std::bad_alloc& e) { // push_back throws std::bad_alloc, not char*
        std::cout << "Exception raised: " << e.what() << '\n';
    }
    delete bList;
}
while 640.000.000 bytes may work.
But what's the difference, 100 MB? The point is that you can't allocate the full 2 GB, and
you need to pre-allocate whenever you are allocating such huge objects (> 512 MB) on 32-bit
systems, native code or managed code, it doesn't matter.
And don't get me started on the performance implications of not doing so!
Willy.