Memory allocation in unmanaged C++


Steve McLellan

Hi,

Wondering if anyone can shed some light on something that's troubled us for
some time... we write computationally expensive image processing apps for
Windows (98, 2000, and XP) and Mac OS X. We tile all our calculations both for
responsiveness and memory reasons, but originally we only did this after we
hit memory allocation problems under Windows. My question is, is there any
way to predict how much contiguous memory the OS will let you allocate at
once? The figure seems enormously lower than the actual system memory, which
I imagine is a result of things getting shoved into an application's memory
space wherever the OS pleases. Is there any lower bound on this (i.e. can I
ALWAYS be sure of getting say 128MB allocated in a block) or any way of
asking (nicely, of course) to shuffle things about for some more room? I'm
mainly concerned with doing this under XP, as we're not planning to support
earlier OSs for a current project.

Thanks,

Steve
 

Carl Daniel [VC++ MVP]

Steve said:
[snip] My question is, is there any way to predict how much contiguous
memory the OS will let you allocate at once? [snip]

You're guaranteed to be able to allocate precisely 0 bytes contiguously,
AFAIK.

In practice, of course, it'll be much higher than that. The best thing you
can do to improve that figure is to explicitly set the base address on all
DLLs that you load such that they're clustered tightly together toward the
upper end of the user address space. If you look at the base addresses of
the DLLs that MS ships with the OS, you'll notice that they've all been
rebased into address ranges starting just below 0x80000000 and working
down.
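
For example, with the VC++ linker you'd give each of your DLLs its own
/BASE address when building it (the DLL names and addresses below are
purely illustrative):

    link /DLL /BASE:0x5F000000 /OUT:imagecore.dll imagecore.obj
    link /DLL /BASE:0x5F400000 /OUT:filters.dll filters.obj

Existing binaries can also be rebased after the fact with the REBASE.EXE
utility from the Platform SDK.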

Another thing you can do is to request a large block of memory very early in
your program's execution - before you've loaded up a bunch of other things
to fragment the virtual address space.
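
A minimal sketch of that approach (the 128MB figure is just a
placeholder; MEM_RESERVE claims address space without committing pages,
and tiles can then be committed inside the reservation as needed):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Reserve (don't commit) a large contiguous region as early as
        // possible, before heap growth and DLL loads fragment the
        // address space.
        const SIZE_T kReserve = 128 * 1024 * 1024;  // placeholder figure
        void* block = ::VirtualAlloc(0, kReserve, MEM_RESERVE,
                                     PAGE_NOACCESS);
        if (!block)
        {
            std::printf("Could not reserve the block\n");
            return 1;
        }

        // Commit pages inside the reservation only as each tile needs
        // them, e.g.:
        //   ::VirtualAlloc(block, tileBytes, MEM_COMMIT, PAGE_READWRITE);

        ::VirtualFree(block, 0, MEM_RELEASE);
        return 0;
    }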

Otherwise, there's not much else that you can do. Thread stacks, heap data
structures, and many other things compete for a single address space so it
naturally tends to become fragmented. As a general (almost inviolate) rule,
these things can't be moved once they're allocated at a certain virtual
address.

HTH

-cd
 

John Biddiscombe

Have a look at GlobalMemoryStatusEx in the Win32 docs. You can query how
much virtual memory is free, then size your allocation accordingly.
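
Strictly speaking, GlobalMemoryStatusEx reports the total free virtual
address space (the ullAvailVirtual field of the MEMORYSTATUSEX it fills
in), which may be split across many holes. To find the largest single
free block you can walk the address space with VirtualQuery - a rough
sketch:

    #include <windows.h>

    // Returns the size of the largest contiguous free region in this
    // process's user address space.
    SIZE_T LargestFreeBlock()
    {
        SYSTEM_INFO si;
        ::GetSystemInfo(&si);

        SIZE_T largest = 0;
        char* p = (char*)si.lpMinimumApplicationAddress;
        while (p < (char*)si.lpMaximumApplicationAddress)
        {
            MEMORY_BASIC_INFORMATION mbi;
            if (::VirtualQuery(p, &mbi, sizeof(mbi)) != sizeof(mbi))
                break;
            if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
                largest = mbi.RegionSize;
            p = (char*)mbi.BaseAddress + mbi.RegionSize;
        }
        return largest;
    }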

JB
 

Steve McLellan

Hi,

Thanks, both of you. Is the situation likely to be made worse or better
running unmanaged code from within a DLL under a .NET app? Who's in charge
of securing memory for the application in that case; does the DLL grab it
directly from the OS, or does the CLR stick its oar in as well / instead?
The problem I have is that it's likely there'll be an absolute minimum I can
get away with, and there'll be no way to test what that is since it'll be
different every time the app's run, let alone on different machines.

Thanks again,

Steve


John Biddiscombe said:
[snip]
 

Jon

One way might be to use CreateFileMapping with INVALID_HANDLE_VALUE for the hFile.
Then use MapViewOfFile to access the memory.
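
Something along these lines (sizes are illustrative; passing
INVALID_HANDLE_VALUE makes the section pagefile-backed rather than tied
to a file on disk):

    #include <windows.h>

    int main()
    {
        // A pagefile-backed section; INVALID_HANDLE_VALUE tells
        // CreateFileMapping to use the system paging file.
        const DWORD kSectionSize = 128 * 1024 * 1024;
        HANDLE hMap = ::CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                          PAGE_READWRITE, 0, kSectionSize,
                                          NULL);
        if (!hMap)
            return 1;

        // Map a 16MB window into the section. The section can be larger
        // than any single view, so only the window needs a contiguous
        // run of free address space.
        void* view = ::MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0,
                                     16 * 1024 * 1024);
        if (view)
            ::UnmapViewOfFile(view);
        ::CloseHandle(hMap);
        return 0;
    }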
 

Carl Daniel [VC++ MVP]

Running under a .NET app is likely to make matters worse, I'd guess - simply
because there's significantly more code mapped into the address space for
the CLR and BCL assemblies.
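
One way to see the difference is to dump the modules mapped into the
process using the PSAPI functions (link with psapi.lib) - under a
managed host you'll find mscoree.dll and the runtime's other images
occupying sizeable stretches of address space. A quick diagnostic
sketch:

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    int main()
    {
        HMODULE mods[1024];
        DWORD needed = 0;
        HANDLE proc = ::GetCurrentProcess();

        if (::EnumProcessModules(proc, mods, sizeof(mods), &needed))
        {
            for (DWORD i = 0; i < needed / sizeof(HMODULE); ++i)
            {
                MODULEINFO mi;
                char name[MAX_PATH];
                if (::GetModuleInformation(proc, mods[i], &mi,
                                           sizeof(mi)) &&
                    ::GetModuleFileNameEx(proc, mods[i], name, MAX_PATH))
                {
                    // Each image occupies SizeOfImage bytes of address
                    // space starting at lpBaseOfDll.
                    std::printf("%p %8u KB %s\n", mi.lpBaseOfDll,
                                (unsigned)(mi.SizeOfImage / 1024), name);
                }
            }
        }
        return 0;
    }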

-cd

Steve said:
Is the situation likely to be made worse or better running unmanaged
code from within a DLL under a .NET app? [snip]
 

Jon

The other nice thing about CreateFileMapping is that you can use a real
file for the backing store, so that the object size is limited by the
space on your hard drive, not by the virtual memory size.
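
For example (the filename and sizes are illustrative; CreateFileMapping
extends the file to the section size, and by sliding a modest view
window over it only the window needs contiguous address space):

    #include <windows.h>

    int main()
    {
        // Back the section with a real scratch file instead of the
        // pagefile.
        HANDLE hFile = ::CreateFile("scratch.bin",
                                    GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                    CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY,
                                    NULL);
        if (hFile == INVALID_HANDLE_VALUE)
            return 1;

        // A 1GB section - CreateFileMapping extends the file to match.
        HANDLE hMap = ::CreateFileMapping(hFile, NULL, PAGE_READWRITE,
                                          0, 1024 * 1024 * 1024, NULL);
        if (hMap)
        {
            // Map (and later remap at other offsets) a 16MB window.
            void* view = ::MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0,
                                         16 * 1024 * 1024);
            if (view)
                ::UnmapViewOfFile(view);
            ::CloseHandle(hMap);
        }
        ::CloseHandle(hFile);
        return 0;
    }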

Managed code and unmanaged code have different memory managers. The thing
you want to watch for is that you need to implement IDisposable on managed
objects that own large unmanaged objects, and call Dispose promptly so the
unmanaged memory is released without waiting for the garbage collector.

Steve McLellan said:
[snip]
 
