No memory although more than 1GB free


Igor Tandetnik

André Pönitz said:
In microsoft.public.vc.stl Igor Tandetnik said:
That's why I think there's fragmentation at work. Imagine the
pathological scenario: your whole memory is occupied by a 1K allocated
chunk followed by a 1023K free chunk, and so on. 1024 such pairs will eat
up 1GB of RAM in such a way that only 1MB is actually used, but you cannot
allocate another 1MB chunk. It's an artificial example, of course; it's
highly unlikely to occur in practice, but it demonstrates the idea of
fragmentation nicely.

Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual addresses here]

It may or it may not, I don't know. Whatever strategy a particular
memory allocator employs, it is always possible to construct a
pathological sequence of allocations and deallocations that leads to
fragmentation with this allocator. In my example, I assumed a naive
allocator so that a pathological case would be easy to explain and
understand.
--
With best wishes,
Igor Tandetnik

"For every complex problem, there is a solution that is simple, neat,
and wrong." H.L. Mencken
 

Klaus Bonadt

Which part of "each process has a separate virtual address space" do you
have difficulty understanding? A success or failure of a memory
allocation in one process tells you nothing about the state of the
virtual address space in another. It's like saying: "I have machine A
where memory allocation fails. Then I run a test program on machine B,
and it can successfully allocate plenty of memory. So enough memory
should be available on machine A, right?"

I was not aware of your reply when I wrote this. I got it.
However, could you take a look at my answer to André?
Now that I understand the concept, I am interested in figuring out how to
avoid fragmentation.

Regards,
Klaus
 

Nick Savoiu

André Pönitz said:
In microsoft.public.vc.stl Igor Tandetnik said:
That's why I think there's fragmentation at work. Imagine the
pathological scenario: your whole memory is occupied by a 1K allocated
chunk followed by a 1023K free chunk, and so on. 1024 such pairs will eat
up 1GB of RAM in such a way that only 1MB is actually used, but you cannot
allocate another 1MB chunk. It's an artificial example, of course; it's
highly unlikely to occur in practice, but it demonstrates the idea of
fragmentation nicely.

Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual addresses here]

It could, but there isn't a perfect solution. Most of the time one won't run
into this problem. If one does, one is also better positioned to fix it,
knowing more about how things are and will be allocated than the OS does.
MS tries to alleviate this by providing:

http://msdn.microsoft.com/library/d.../en-us/memory/base/low_fragmentation_heap.asp

Nick
 

Klaus Bonadt

Not a problem. We're all here to learn something, hopefully. Try re-reading
Igor's posting. He explains quite well what goes on with virtual/real
memory and address spaces.

I wrote my answer to you before reading Igor's reply. Anyway, thanks a lot.

Regards,
Klaus
 

Stephen Howe

Thanks a lot, Igor, I got it!

Which container types of the STL are you using Klaus?
And which version of VC++ are you using?

Stephen Howe
 

Klaus Bonadt

Which container types of the STL are you using Klaus?
And which version of VC++ are you using?

Mainly I use map and vector, with VC++ version 6 and some header
replacements.

Regards,
Klaus
 

André Pönitz

In microsoft.public.vc.stl Igor Tandetnik said:
Does that mean the allocator does _not_ try to collect chunks
of 'almost the same size' in one place and different sizes in
another one? [I am talking about virtual addresses here]

It may or it may not, I don't know. Whatever strategy a particular
memory allocator employs, it is always possible to construct a
pathological sequence of allocations and deallocations that leads to
fragmentation with this allocator.

I don't think this is true for _all_ allocators.

Just assume an allocator with a fixed size bucket for chunks of up to
8 bytes, a fixed size bucket for 9 up to 16 bytes, one for 17 to 32 etc.

Of course, this is a waste of resources as allocation will fail if the
8 byte bucket is full even if the 1024 byte bucket is empty, but this
allocator will never fragment. You can delete a 1024 byte chunk, do
whatever you want with allocations of another size (range) and are
guaranteed to get back a 1024 byte chunk next time you try.

Andre'
 

André Pönitz

In microsoft.public.vc.stl Nick Savoiu said:
Does that mean the allocator does _not_ try to collect chunks of 'almost
the same size' in one place and different sizes in another one? [I am
talking about virtual adresses here]

It could, but there isn't a perfect solution. Most of the time one won't run
into this problem. If one does, one is also better positioned to fix it,
knowing more about how things are and will be allocated than the OS does.
MS tries to alleviate this by providing:

http://msdn.microsoft.com/library/d.../en-us/memory/base/low_fragmentation_heap.asp

Ah, I see. Thank you for the link.

I always considered this a standard solution for the problem and did not
know that it takes extra effort to make this work under Windows XP.

Andre'
 

Igor Tandetnik

André Pönitz said:
I don't think this is true for _all_ allocators.

Just assume an allocator with a fixed size bucket for chunks of up to
8 bytes, a fixed size bucket for 9 up to 16 bytes, one for 17 to 32 etc.

Of course, this is a waste of resources as allocation will fail if the
8 byte bucket is full even if the 1024 byte bucket is empty, but this
allocator will never fragment.

Maybe "fragmentation" is not the right term. Of course there exists an
allocator that never causes memory fragmentation - for example, one that
fails every single allocation request. That one would be very fast, too.
And thread-safe. Too bad it's not particularly useful.

I'm talking about a situation where an allocator fails to allocate a
contiguous chunk of memory of a specific size even though enough free
memory is available to satisfy the request. It may be caused by
fragmentation or by something else (like a fatal flaw in allocator's
design, as in your example). I claim that for any allocator, there
exists a sequence of allocations and deallocations that leads to this
situation (as long as the allocator is not allowed to compact the heap,
as is the case with C and C++ allocators). In your example, one such
sequence is repeatedly allocating 8 byte chunks.
--
With best wishes,
Igor Tandetnik

"For every complex problem, there is a solution that is simple, neat,
and wrong." H.L. Mencken
 

Reginald Blue

André Pönitz said:
It might be a bit off-topic by now, but what is the largest chunk of
'flat memory' I can expect to get on a 32bit Intel system stuffed with
e.g. 3GB of RAM and running not much else than XP?

The theoretical best is 2GB. In practice, it's less than that, because the
code takes up space, as does the stack and the data segment. But it's
fairly close to 2GB.

It's possible to increase that with an OS tweak (the /3GB boot switch), which
takes it up closer to 3GB, and there are OS calls for non-flat space (AWE,
Address Windowing Extensions) which can take you up to (IIRC) 64GB (maybe
only 32... not sure now).

--
Reginald Blue
"I have always wished that my computer would be as easy to use as my
telephone. My wish has come true. I no longer know how to use my
telephone."
- Bjarne Stroustrup (originator of C++) [quoted at the 2003
International Conference on Intelligent User Interfaces]
 

André Pönitz

In microsoft.public.vc.stl Reginald Blue said:
The theoretical best is 2GB. In practice, it's less than that, because the
code takes up space, as does the stack, and the data segment.

OK, that should not be too much, as long as none of these segments is
deliberately placed in the middle of the big chunk.
But it's fairly close to 2 GB.

It's possible to increase that with an OS tweak (the /3GB boot switch), which
takes it up closer to 3GB, and there are OS calls for non-flat space (AWE,
Address Windowing Extensions) which can take you up to (IIRC) 64GB (maybe
only 32... not sure now).

Thank you for this information.

[This is about what I had expected, but there was some discussion
recently (here at my workplace) where people suggested it's not easy to
get more than a couple of hundred MB of flat address space in NT or XP.
As my own background is rather Unix-ish, I finally had to retreat
(although unconvinced), as my only argument was 'They surely would not
waste resources like _that_'... Now I feel a bit better ;-)]

Andre'
 
