Garbage Collector

  • Thread starter Johnny E. Jensen

Peter Duniho

Ben said:
I know exactly what you're getting at, but you and Peter are both wrong.

It must be satisfying to know that, even in a disagreement of semantics
(which is itself almost always a matter of subjective interpretation),
you are always right, and the other person is always wrong.
A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes sense.

But you haven't defined "remains unavailable".

This is a semantic issue, and we can follow the chain of terminology as
deep as you like. You can't go around saying that your interpretation
is patently obvious while mine is obviously wrong. When dealing with
human language, hardly anything is ever truly obvious.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
    int* p = new int[1024];
}

Of course that is a memory leak.

That most certainly is a leak by my definition. If you think otherwise,
I obviously haven't explained my definition very well. But that's an
error of communication, not of the definition itself.
And so is this (C#):

class C
{
    private static int[] a;
    static C() { a = new int[1024]; }
}

C c = new C();

I disagree that that's a leak, sort of.

You still have a variable referencing the memory. But your class is
degenerate, providing no code whatsoever that uses the variable, and so,
literally speaking, I suppose I'd say that's a leak. And it is so by my
definition, since the variable is not accessible by any code in the program.

But assuming you put the private static variable "a" there for a reason,
and assuming you actually wrote code somewhere in the class that uses
"a", then the failure to release the array later once you no longer need
it isn't what I'd call a leak. It's certainly a programmer error, and
it certainly does result in the application using more memory than it
should. But the memory is still accessible by an actual variable
storing the reference to the memory.
These cases are *identical*. Both int arrays are still accessible (in
native code, via HeapWalk, in managed, via reflection on the Type object
which, once loaded, is never freed until the AppDomain unloads), but are
also totally useless in the context given.

I wouldn't call the allocated memory "accessible" in your f() case.
Using HeapWalk or reflection doesn't count as "accessible" to the
program that actually allocated the memory. Or put another way, if you
choose to define memory that can be found via those means as
"accessible", IMHO you've just made the word "accessible" a completely
pointless word, as with a definition that broad (assuming you take it to
its logical conclusion) there is no such thing as memory that is NOT
"accessible".

Pete
 

Peter Duniho

Larry said:
You're focusing the crux of your argument on some theoretical notion that
we live in a world without unmanaged resources.

Of course I am. Why should garbage collection be judged in any other
way? Why should garbage collection have to suffer in judgment because
of compromises it makes to play nice with existing systems?
Well, explain to me what
resources you think the .NET classes are handling behind the scenes. This is
.NET for "Windows", not .NET for "Peter's purely managed OS X". In the world
we both live in the finalizer is a fact of life. What do you think the
"Note" section here means for instance:

http://msdn2.microsoft.com/en-us/library/system.drawing.font.dispose.aspx

I don't understand your question. Nothing about the note says that you
should be relying on the finalizer. In fact, the note is specifically
telling you that you should not.
I'm not aware of any "bug" so should I still "forget all about the
finalizer"?

Yes. You should. You should write your code correctly, and in doing so
you will not need to concern yourself with the finalizer at all.
What if I'm holding an object that stores a network resource of
some type, possibly using a native .NET class that holds this. If I don't
explicitly release it then it might never get released and my app might
eventually fail somewhere (after creating enough instances).

Possibly, yes. That's why you should always call Dispose(). You should
not rely on the finalizer, and the finalizer is irrelevant to this topic.

But for that matter, so too is Dispose(). It exists not because of
garbage collection per se. You would have to call an equivalent to
Dispose() whether or not you were using a garbage collecting system, but
(and this is very important) only because the _rest_ of the system
_isn't_ a garbage collecting system.
[...]
I'm talking about the (cleaner) syntax of
using a C++ destructor versus finalize/dispose for releasing *unmanaged*
resources (not objects that live entirely on the managed heap and are
therefore cleaned up automatically).

And I'm saying that you can't judge the "cleanness" of a particular
paradigm by the compromises that must be made in order to work with some
other paradigm.

As long as you aren't dealing with unmanaged resources, you don't need
the "using" statement or Dispose(). That "less clean" code that you're
complaining about exists only for the benefit of the unmanaged code that
requires it, and it's not fair to judge garbage collection on that basis.
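
To illustrate, here's roughly what the "using" statement expands to. (Font,
from the MSDN link earlier, wraps an unmanaged GDI+ handle; the demo class
here is just a hypothetical sketch.)

using System;
using System.Drawing;

class UsingDemo
{
    static void Main()
    {
        // "using" is syntactic sugar for try/finally plus Dispose().
        using (Font f = new Font("Arial", 10.0f))
        {
            Console.WriteLine(f.Name);
        }   // f.Dispose() runs here, even if an exception was thrown

        // The rough equivalent, spelled out:
        Font g = new Font("Arial", 10.0f);
        try
        {
            Console.WriteLine(g.Name);
        }
        finally
        {
            if (g != null) g.Dispose();
        }
    }
}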

Pete
 

MikeP

And I'm saying that you can't judge the "cleanness" of a particular
paradigm by the compromises that must be made in order to work with some
other paradigm.

The other paradigm you're talking about is the very same paradigm we're all
working with. And these "compromises" you're referring to are fundamental
necessities required to support the unmanaged world we all live in. You need
to stop focusing on your hypothetical model that doesn't exist. When a
purely managed OS is available then you can have your cake. In the meantime
you have to call "Dispose()" rather than wait for the finalizer which might
never be called. It's cleaner to do that using a C++ destructor no matter
how much you seem to be dancing around the issue (which I've emphasized ad
nauseam now). In fact, explain to me how you can possibly have an agnostic
platform like .NET which doesn't handle unmanaged resources. When will the
version of Windows that supports this be ready (or any other OS for that
matter)? In reality .NET has no choice but to support unmanaged resources so
why not address that instead of focusing on think-tank ideals.
 

Peter Duniho

MikeP said:
The other paradigm you're talking about is the very same paradigm we're all
working with. And these "compromises" you're referring to are fundamental
necessities required to support the unmanaged world we all live in. You need
to stop focusing on your hypothetical model that doesn't exist.

Why? I don't feel such a need. Who else has the authority to tell me
that in spite of that, I do have that need?
When a purely managed OS is available then you can have your cake.

I'm enjoying my cake right now, thank you. I don't actually have a
problem with the syntax required to mix .NET with the pre-existing
Windows OS behaviors.
[...]
In reality .NET has no choice but to support unmanaged resources so
why not address that instead of focusing on think-tank ideals.

In reality, why make any attempt to judge .NET on this arbitrary measure
of "cleanness" anyway?

I personally think the comparison is silly. But if one is going to make
the comparison, I don't think it's fair to describe .NET as "unclean"
just because it has to make compromises for an unmanaged paradigm that
was here first.

You might as well say that the metric system is bad just because all
those metric wrenches don't fit your English bolt heads.

And if you think that's a fair way to evaluate the metric system,
well...that's the crux of the disagreement right there, and we will
never get past that.

Pete
 

John Duval

A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes sense.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
    int* p = new int[1024];
}

I don't follow what you mean that my definition says this is not a
memory leak. Once f() completes and p goes out of scope, there is no
reference that can be used to recover the memory.
Of course that is a memory leak. And so is this (C#):

class C
{
    private static int[] a;
    static C() { a = new int[1024]; }
}

C c = new C();

These cases are *identical*.

I disagree that these are identical, for a couple of reasons. First,
if you execute "new C()" N times, the memory will be allocated for "a"
just once, whereas executing "f()" N times will allocate the memory N
times.

Second, there is still a reference to the memory in the case of class
C. I understand your argument that you can get a reference using
things like HeapWalk and reflection, but I think most people would agree
there is a qualitative difference between the two cases.

For example, it is *possible* to write a simple method for class C
that will let the GC reclaim the memory:

public static void CleanUp()
{
    a = null;
}

There is no simple equivalent for the C++ case because you no longer
have a reference to the memory. And if there is a simple equivalent,
I stand corrected. In that case, I would love to see some sample code
-- I'm always happy to learn some new tricks.

John
 

Larry Smith

I'm logged back onto my original machine (and account name):
In reality, why make any attempt to judge .NET on this arbitrary measure
of "cleanness" anyway?

Because it affects your code. It's only a trivial issue, however, and hardly a
scathing indictment of .NET (which is an excellent system overall). I've
certainly never lost any sleep over it. Code should nevertheless be as clean
and concise as possible and the C++ destructor better promotes this than the
"using" statement. Relying on programmers to explicitly call "Dispose()"
isn't as safe either (though this is a topic for another day).
I personally think the comparison is silly. But if one is going to make
the comparison, I don't think it's fair to describe .NET as "unclean" just
because it has to make compromises for an unmanaged paradigm that was here
first.

How it makes those compromises *is* significant. It's also a specious
argument IMO to suggest that any system (A) is inferior in some way only
because of the tradeoffs it must make to support another system (B). System
B is not inferior here but only different so system A's shortcomings are its
own. More to the point, part of system A's job *is* to support system B so
if it does it in a way that itself is substandard (even if you consider
system B inferior which would only be an opinion here), then system A is
still at fault.
You might as well say that the metric system is bad just because all those
metric wrenches don't fit your English bolt heads.
And if you think that's a fair way to evaluate the metric system,
well...that's the crux of the disagreement right there, and we will never
get past that.

If everyone were still working in imperial units, then what good would it do
me?
 

Peter Duniho

Larry said:
[...] Code should nevertheless be as clean
and concise as possible and the C++ destructor better promotes this than the
"using" statement.

As I've already noted, the C++ destructor and the "using" statement or
Dispose() method don't do the same things. It's not sensible to compare
them.

If and when C++ has a way for me to call a destructor multiple times,
then perhaps we can revisit the question. Until then, they just aren't
comparable.
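
To make the contrast concrete, here is a minimal sketch (the Resource class
is hypothetical) of a Dispose() that tolerates repeated calls, something a
C++ destructor, which runs exactly once, has no analogue for:

using System;

class Resource : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        if (disposed) return;   // idempotent: safe to call repeatedly
        disposed = true;
        // ... release whatever this object holds ...
    }
}

class Demo
{
    static void Main()
    {
        Resource r = new Resource();
        r.Dispose();
        r.Dispose();    // legal; a destructor could never run twice
    }
}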
How it makes those compromises *is* significant. It's also a specious
argument IMO to suggest that any system (A) is inferior in some way only
because of the tradeoffs it must make to support another system (B).

I agree. That's my point. So why are you suggesting that system A is
inferior only because of the tradeoffs it must make to support system B?
System
B is not inferior here but only different so system A's shortcomings are its
own.

You keep asserting that "different == shortcomings". That's not true.

Different paradigms will always have to go outside their normal mode of
operation to support other paradigms. You can't judge a paradigm by the
concessions it needs to make to support other paradigms. Otherwise, the
paradigm that shows up first always wins.
If everyone were still working in imperial units, then what good would it do
me?

What good would metric do you? You'd gain all of the benefits from the
use of metric that people do every day already. There's a reason that
most of the world uses metric now, and that reason isn't just that
"everyone else is using it".

Are you claiming that the only benefit the metric system has is that
other people use it? If not, then why is the question of what everyone
else is using relevant? (And if so, then how is it that in this day and
age a person can be unaware of all of the other benefits that the metric
system offers?)

Pete
 

Larry Smith

Peter, this is a very simple matter and perhaps we're misunderstanding
each other. A C++ programmer doesn't have to do anything to clean up
resources except write the destructor itself. Once written, you never have
to bother with it again since the destructor runs *automatically*. Even if
an exception is thrown, resources are cleaned up with no work on the part of
the developer (and no waiting until the GC runs which might never happen -
oops). By contrast, if I use a .NET object with a "Dispose()" method I now
have the following issues:

1) Clients have to explicitly call "Dispose()" or rely on the "using"
statement which just wraps the call to "Dispose()"
2) Step 1 is an extra requirement/burden
3) Step 1 may be ignored by clients or an exception may be thrown before
"Dispose()" is called (really forcing clients to adopt the "using" statement
instead of calling "Dispose()" directly). "Finalize()" should then clean
things up. You've said ample times now that this is irrelevant and to just
ignore "Finalize()". It's *not* irrelevant because "Finalize()" may never be
called which could cause your app to break at some point. Just because you
may not be writing "Finalize()" itself doesn't mean you don't have to think
about the process or understand the issues. And if you do have to write it
on occasion then the details of juggling both "Finalize()" and "Dispose()"
aren't trivial. We haven't even touched upon a host of other issues that
Juval Lowy spells out in his book "Programming .NET Components" (and he's
recognized by MSFT as one of the world's top experts).
4) Step 1 is more verbose than a C++ destructor (read "uglier"). The C++
destructor has no (visible) usage footprint which makes for cleaner code.

It's that simple. No amount of debating that the GC is at the mercy of step
1 (through its own fault or not) changes the reality of steps 2-4.
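
For reference, the "juggling" I mean in issue 3 is the standard
Dispose/finalizer pattern, which looks roughly like this (Wrapper and its
IntPtr handle are hypothetical stand-ins for a real unmanaged resource):

using System;

class Wrapper : IDisposable
{
    private IntPtr handle;      // hypothetical unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // the finalizer is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // safe to touch other managed objects here
        }
        // release "handle" here in either case
        disposed = true;
    }

    ~Wrapper()      // safety net; runs only if Dispose() was never called
    {
        Dispose(false);
    }
}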
 

Jon Skeet [C# MVP]

Larry Smith said:
Peter, this is a very simple matter and perhaps we're misunderstanding
each other. A C++ programmer doesn't have to do anything to clean up
resources except write the destructor itself.

So long as the ownership of the object is clear, of course. You can use
reference counting for "shared" objects, but that then runs into the
problem of circular references.
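
For contrast, here's a minimal sketch of a cycle that defeats reference
counting but which a tracing GC reclaims without trouble:

using System;

class Node
{
    public Node Next;
}

class CycleDemo
{
    static void Main()
    {
        // a and b reference each other, so neither reference count would
        // ever drop to zero; a tracing GC sees the pair is unreachable.
        Node a = new Node();
        Node b = new Node();
        a.Next = b;
        b.Next = a;
        a = null;
        b = null;

        GC.Collect();   // both Node instances are now collectable
    }
}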

If the C++ approach had been that straightforward and always foolproof, I think
it's fairly safe to bet that MS would have taken it when creating .NET.
 

Chris Mullins [MVP - C#]

[Garbage Collection - Memory Leaks]
So long as the ownership of the object is clear, of course.

I think that statement, right there, is really the key. In any environment
I've ever used, managing memory has been pretty easy so long as ownership of
something is clear.

The problems don't start to arise until some base class, somewhere deep
down, allocates some memory and hands it off. At that point, ownership
becomes very ambiguous, and it's often impossible for the caller to tell
"Do I free this memory or not?".

.NET has just as many problems here as any other language. How many DALs have
you seen that look like this?

    DataSet GetUser(int employeeId) { ... }

This has some complicated questions associated with it - the DataSet has a
dispose method on it, but who is responsible for calling it? What if you
keep it around for a while, sitting in a cache? As systems get complex, this
question is often very difficult to answer.

For example, in the most basic sense, the GetUser method would go out to the
database, build a dataset, and the caller is responsible for Disposing the
dataset.

... but after a round of performance tuning, the DAL could be sticking the
DataSet into a cache. When the method is called, it just returns the cached
copy. If the caller disposes the dataset now, the overall scheme could be
compromised.
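
Here's a sketch of that hazard (UserDal, its cache, and GetUser are
hypothetical; a real DAL would query the database):

using System.Collections.Generic;
using System.Data;

class UserDal
{
    private Dictionary<int, DataSet> cache = new Dictionary<int, DataSet>();

    // After the round of performance tuning: the same DataSet instance may
    // now be handed out to many callers.
    public DataSet GetUser(int employeeId)
    {
        DataSet ds;
        if (!cache.TryGetValue(employeeId, out ds))
        {
            ds = new DataSet("User");   // stand-in for the database query
            cache[employeeId] = ds;
        }
        return ds;
    }
}

// A caller that politely disposes the result now corrupts the cache:
//     using (DataSet ds = dal.GetUser(42)) { /* ... */ }
// The next GetUser(42) call returns an already-disposed object.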

This stuff gets complicated quickly. GC seems the best overall solution
I've seen to date, but there are certainly aspects of the C++ destructor
pattern, and the related std::auto_ptr<T> pattern, that I really, really
liked.
 

Peter Duniho

Larry said:
Peter, this is a very simple matter and perhaps we're misunderstanding
each other.

I don't think so. I think you have a fundamental bias against garbage
collection, and you are unwilling to view garbage collection from an
objective stance. You insist on placing garbage collection in a
specific context, and then evaluating it based on that prejudicial context.

I think we understand each other fairly well. The difference of opinion
is with respect to what each of us feel is fair treatment of a
particular issue, not some misunderstanding.
A C++ programmer doesn't have to do anything to clean up
resources except write the destructor itself.

Yes, they do. The destructor is only useful for cleaning up if they are
satisfied with only cleaning up the resources when the object is deleted.

IDisposable carries no such requirement.

Furthermore, in .NET an object that contains no unmanaged resources does
not require IDisposable at all. No "using", no Dispose() method to call
at all.

No matter how many times you write that it is required, it will never
actually be true. Any number of correctly-written .NET programs exist
without a single explicit call to Dispose() or use of the "using" statement.
Once written, you never have
to bother with it again since the destructor runs *automatically*. Even if
an exception is thrown, resources are cleaned up with no work on the part of
the developer (and no waiting until the GC runs which might never happen -
oops).

You keep claiming that there's this question of "waiting until the
garbage collector runs". There's not. That's a fallacy, and it's a
basic indication that you really just aren't understanding how the GC
paradigm works.

If your code relies on when the GC runs, your code is buggy. Period.
By contrast, if I use a .NET object with a "Dispose()" method I now
have the following issues:

1) Clients have to explicitly call "Dispose()" or rely on the "using"
statement which just wraps the call to "Dispose()"

This is simply false, for classes that don't use unmanaged resources.

I have been assaulted a couple of times in my life by a black man. The
fact that there are black men out there who have assaulted me doesn't
mean that all black men are trouble. In fact, quite the opposite is true.

Likewise, the fact that _some_ .NET classes require the use of
IDisposable in order to work with garbage collection in no way means
that garbage collection itself has a problem.

No matter how many times you try to write about IDisposable, it's still
going to be a red herring.
2) Step 1 is an extra requirement/burden

First of all, I don't see why you are using the word "step". A "step"
would be an item in a sequence of actions. Last I saw, you were
enumerating "issues", not "steps".

Secondly, how is this an "issue"? Looks like "issue inflation" to me.
All you are doing is reiterating the first issue you claim (which is
false anyway).
3) Step 1 may be ignored by clients or an exception may be thrown before
"Dispose()" is called (really forcing clients to adopt the "using" statement
instead of calling "Dispose()" directly).

Since I have already shown you that IDisposable is irrelevant, it's not
reasonable to use the requirements imposed by IDisposable to show fault
with garbage collection. Garbage collection and exceptions live just
fine together, when they are not dragged down by legacy code.

Furthermore, C++ does not actually fix the exception issue. If you put
a C++ class on the stack as a local variable, it works fine. But not
all C++ classes can be instantiated as a local variable; some MUST be
allocated via new, just like .NET classes. Or in some cases, it's just
preferable to use new. Either way, you still have to catch an exception
and clean them up explicitly.

I call red herring on this one too.
"Finalize()" should then clean
things up. You've said ample times now that this is irrelevant and to just
ignore "Finalize()".

That's right. It's irrelevant.
It's *not* irrelevant because "Finalize()" may never be
called which could cause your app to break at some point.

If the finalizer has to be called, then your app is ALREADY BROKEN.

In a correctly written application, THE FINALIZER WILL NEVER BE CALLED.

In some respects, in spite of the similar syntax, a finalizer is exactly
the opposite of a destructor, because the destructor is something that
is called in correctly-written applications, while the finalizer is
something that is only called in badly-written applications.
Just because you
may not be writing "Finalize()" itself doesn't mean you don't have to think
about the process or understand the issues.

Yes, it does.
And if you do have to write it
on occasion then the details of juggling both "Finalize()" and "Dispose()"
aren't trivial.

Show me some code that calls Finalize() directly, and I'll show you some
code that has some serious architectural problems.
[...]
4) Step 1 is more verbose than a C++ destructor (read "uglier"). The C++
destructor has no (visible) usage footprint which makes for cleaner code.

More "issue inflation". Your only objection to "issue 1" is that it has
to be written. That is, it's "more verbose". So saying it's "more
verbose" again doesn't actually create a new issue.
It's that simple. No amount of debating that the GC is at the mercy of step
1 (through its own fault or not) changes the reality of steps 2-4.

I see no reality in any of the "steps" you've written about. Just a
bunch of red herrings and falsehoods.

Pete
 

Jon Skeet [C# MVP]

This stuff gets complicated quickly. GC seems the best overall solution
I've seen to date, but there are certainly aspects of the C++ destructor
pattern, and the related std::auto_ptr<T> pattern, that I really, really
liked.

Absolutely. It's going to be interesting to see whether the successor
to .NET, whenever it comes, has a better story around this. It's not
the only aspect of development which appears to be slightly lacking
(and not for want of effort) - there are fundamentally difficult
problems, but whether they turn out to be fundamentally "impossible to
solve" problems remains to be seen :)
 

Peter Duniho

Ben said:
If it counts for .NET, it should also count for C++.
Agreed.

With a definition of
accessible which includes HeapWalk, neither C++ nor .NET can leak memory.
Agreed.

With a narrower definition of accessible, the .NET GC can now leak memory
(the object is "reachable" according to the garbage collector, but not in
any way that is useful to the programmer, hence a leak).
Huh?

For example, in
.NET registering an object to a static event would leak that object. Here:

class AutoCleanup : IDisposable
{
    private void Clean(object sender, EventArgs e) {}
    public AutoCleanup() { AppDomain.CurrentDomain.DomainUnload += Clean; }
    // bug: forgot "AppDomain.CurrentDomain.DomainUnload -= Clean;",
    // therefore we leak
    public void Dispose() { }
}

for (int i = 0; i < 1000000; i++) {  // many many objects
    using (AutoCleanup ac = new AutoCleanup()) {
    }
}

I don't understand your example. Is your concern the difficulty of
retrieving the event handler reference from the DomainUnload event? It
seems as though you are trying to say that the list of event handlers is
not accessible by the code, but that's simply false. It may not be
accessible by the code that added the handler, but that's not the same
thing.

This example seems to be basically a more complicated version of your
degenerate private class member. And just like that one, it doesn't
prove the point you seem to be trying to prove. In particular, the only
thing preventing the reference from being accessible is the protection
level of the variable. There still is a variable referencing the data,
and there can easily be some code somewhere that can get at that data.
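
To illustrate, the publisher of an event can always enumerate its handlers
and, through each handler's Target, reach the registered objects (a minimal
sketch; the Publisher class and its Shutdown event are hypothetical names):

using System;

class Publisher
{
    public event EventHandler Shutdown;

    // Inside the declaring class, the backing delegate is just a field,
    // so every registered handler (and its target object) is reachable.
    public void DumpHandlers()
    {
        if (Shutdown == null) return;
        foreach (Delegate d in Shutdown.GetInvocationList())
            Console.WriteLine("{0}.{1}", d.Target, d.Method.Name);
    }
}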
.NET can leak objects just like native C++ can.

I disagree.

Pete
 

Peter Duniho

Ben said:
This isn't true. First up, show me a C++ class that can't be instantiated
locally.

I have seen code designed such that it relied on the behavior of the
heap allocation as part of the object. The base class knew about the
heap, and if you tried to declare the class as a local variable, it
wouldn't work right.
Secondly, even when you do instantiate from the heap, you can use
a smart pointer and again avoid explicit cleanup.

That's not an inherent part of C++ though and on top of that it's no
more "clean" than the "using" statement.
Third, this isn't C++ vs
.NET, because C++/CLI provides automatic cleanup for garbage collected,

That's fine. I don't use managed C++ enough to be aware of the special
differences that exist. It's not really C# or .NET that I'm defending
anyway; it's the basic paradigm of using garbage collection.

Pete
 

Ben Voigt [C++ MVP]

John Duval said:
A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes
sense.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
    int* p = new int[1024];
}

I don't follow what you mean that my definition says this is not a
memory leak. Once f() completes and p goes out of scope, there is no
reference that can be used to recover the memory.

I quote you: "to me a memory leak means a very specific thing, which is that
the memory has been orphaned and there is no reference anywhere that can be
used to recover it". Such a reference certainly does exist, and HeapWalk
can be used to obtain it.
 

Ben Voigt [C++ MVP]

I wouldn't call the allocated memory "accessible" in your f() case. Using
HeapWalk or reflection doesn't count as "accessible" to the program that
actually allocated the memory. Or put another way, if you choose to
define memory that can be found via those means as "accessible", IMHO
you've just made the word "accessible" a completely pointless word, as
with a definition that broad (assuming you take it to its logical
conclusion) there is no such thing as memory that is NOT "accessible".

If it counts for .NET, it should also count for C++. With a definition of
accessible which includes HeapWalk, neither C++ nor .NET can leak memory.
With a narrower definition of accessible, the .NET GC can now leak memory
(the object is "reachable" according to the garbage collector, but not in
any way that is useful to the programmer, hence a leak). For example, in
.NET registering an object to a static event would leak that object. Here:

class AutoCleanup : IDisposable
{
    private void Clean(object sender, EventArgs e) {}
    public AutoCleanup() { AppDomain.CurrentDomain.DomainUnload += Clean; }
    // bug: forgot "AppDomain.CurrentDomain.DomainUnload -= Clean;",
    // therefore we leak
    public void Dispose() { }
}

for (int i = 0; i < 1000000; i++) {  // many many objects
    using (AutoCleanup ac = new AutoCleanup()) {
    }
}

.NET can leak objects just like native C++ can.
 

Ben Voigt [C++ MVP]

Furthermore, C++ does not actually fix the exception issue. If you put a
C++ class on the stack as a local variable, it works fine. But not all
C++ classes can be instantiated as a local variable; some MUST be
allocated via new, just like .NET classes. Or in some cases, it's just
preferable to use new. Either way, you still have to catch an exception
and clean them up explicitly.

This isn't true. First up, show me a C++ class that can't be instantiated
locally. Secondly, even when you do instantiate from the heap, you can use
a smart pointer and again avoid explicit cleanup. Third, this isn't C++ vs
.NET, because C++/CLI provides automatic cleanup for garbage collected,
IDisposable objects as well. It's actually C++ vs C#, because C# places
extra unnecessary burden on the programmer (remembering to apply a "using"
block).
 

Willy Denoyette [MVP]

Ben Voigt said:
This isn't true. First up, show me a C++ class that can't be instantiated
locally. Secondly, even when you do instantiate from the heap, you can
use a smart pointer and again avoid explicit cleanup. Third, this isn't
C++ vs .NET, because C++/CLI provides automatic cleanup for garbage
collected, IDisposable objects as well. It's actually C++ vs C#, because
C# places extra unnecessary burden on the programmer (remembering to apply
a "using" block).



The trouble with C++/CLI is that a number of IDisposable Framework classes
cannot be used with stack allocation semantics.

Try this in C++/CLI:

WindowsIdentity wi = WindowsIdentity::GetCurrent();

it'll fail to compile ...
so you are forced to resort to heap-based semantics, like this:

WindowsIdentity^ wi = WindowsIdentity::GetCurrent();

and as such, to Dispose of this instance by:

delete wi;

right, you are back at square one, using explicit try/finally, something
C++/CLI's *stack allocation* idiom was meant to arrange for you.

Framework 3.0 and 3.5 add more of this; guess how long it will take
before one goes back to using heap-allocated semantics consistently, or ends
up with a couple of latent bugs.


Willy.
 

John Duval

A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes
sense.
After all, by your definition, this isn't a memory leak (C++):
void f(void)
{
    int* p = new int[1024];
}
I don't follow what you mean that my definition says this is not a
memory leak. Once f() completes and p goes out of scope, there is no
reference that can be used to recover the memory.

I quote you: "to me a memory leak means a very specific thing, which is that
the memory has been orphaned and there is no reference anywhere that can be
used to recover it". Such a reference certainly does exist, and HeapWalk
can be used to obtain it.- Hide quoted text -

- Show quoted text -

Hi Ben,
I guess I just don't see how HeapWalk could be used to identify which
blocks of memory to recover. I see how you can use HeapWalk to
iterate through the heap entries, but how would you know which ones
need to be recovered?

John
 

Ben Voigt [C++ MVP]

This example seems to be basically a more complicated version of your
degenerate private class member. And just like that one, it doesn't prove
the point you seem to be trying to prove. In particular, the only thing
preventing the reference from being accessible is the protection level of
the variable. There still is a variable referencing the data, and there
can easily be some code somewhere that can get at that data.

Oh, so now even HeapWalk isn't necessary. As long as the internal heap
manager has internal variables that can get at that data, it isn't leaked?

That, in my opinion, is not a very useful definition of a memory leak.
 
