Garbage Collector

  • Thread starter Johnny E. Jensen

Peter Duniho

Chris said:
Well, for what it's worth, in a 3 hour presentation on application Tuning,
about 1/2 that time is spent on Memory Leaks, and how to track them down.

I guess that depends on your definition of "memory leak". Mine is such
that a garbage collection system specifically precludes them.

I understand how you might call your example of a static variable that
isn't released a "memory leak", but the memory hasn't become orphaned or
anything. The application simply failed to release it, and the same
kind of "leak" exists regardless of the type of memory management (ie it
would be just the same error for a CRT-based program to fail to release
that memory).

To me, a true "memory leak" is a situation in which the memory is gone
forever for that process. There's simply no way for any code anywhere
to ever recover it. Whether or not such code exists isn't relevant to
my use of the term. It's whether it _could_ exist.

At the very least, it seems to me that in your presentation you should
make clear the differing definitions of "memory leak".

Pete
 

Enkidu

Göran Andersson said:
Why a return statement here? It's totally superfluous as there is no
more code from this point to the end of the method.
Some languages require it and some people moving from those languages
would feel better with it there. Sure they could do it the C# way and
leave it out, but it causes no problems, does it?

Also, more pertinently, if you should come along and modify the code, by
wrapping it in an 'if' for example and also put code after it, then you
have to remember to insert the 'return' if it is not already there.

If you modify the routine to return something, the return is already
there to be modified.

I'm not saying that these are particularly *good* reasons, mind you.
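For illustration only (SaveSettings and WriteSettingsToDisk are made-up names, not anything from the code being discussed), this is the kind of method we're talking about:

void SaveSettings()
{
    WriteSettingsToDisk();   // hypothetical helper

    return;   // redundant in C#, but harmless; some people coming from
              // languages that require it prefer to keep it
}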

Cheers,

Cliff
 

Peter Duniho

Larry said:
It's a religious issue and both have their pros and cons. While C++ may be
more error-prone than a GC system however (for releasing resources), there
need not be a significant difference.

I certainly agree there. But one system is definitely more resilient to
programmer error.
The problem is almost entirely a human
one. It's extremely easy to handle resources in C++ as you stated but the
language's reputation has often suffered because of its practitioners (most
programmers being very poor at what they do). My own opinion however is that
in the hands of those who really know what they're doing (few and far
between),

I think we've found ourselves in vehement agreement on that point
previously. I'm still in vehement agreement with you on it. :)
RAII is a cleaner approach than a GC system. By clean I mean it's
more natural to release your resources in a destructor than to wait for a GC
to run (not to be confused with easier or less error prone - it's clearly
not).

Well, the thing is...once you no longer have a reference to a memory
resource, it _is_ released. Just because the GC hasn't run, that
doesn't mean the memory isn't available. It just means that the GC
hasn't gotten around to moving it into the collection of memory that is
_immediately_ available for use.

Logically speaking, the memory is still in fact available, the moment
you release your last reference to it.
The "using" statement for performing this in C# for instance (or even
worse, a "finally" block), is ugly compared to a C++ destructor.

IMHO, it's a mistake to think of the "using" statement, the finalizer,
or IDisposable as related to a C++ destructor. It's only "ugly"
compared to it if you are treating them the same.

The "using" statement and IDisposable exist for one purpose: to
explicitly release resources held by an object without releasing the
object itself. Within that one purpose, there are two sub-categories of
types of resources that may be released: managed, and unmanaged.

Obviously the only reason the unmanaged category even exists is that
.NET runs on top of a system that is not entirely managed code. If all
of Windows were based on a garbage collection system, that category
wouldn't exist.

So, let's consider only the managed category. In this case, it's
beneficial to be able to tell an object "let go of the resources you're
holding" without releasing that object itself. But this is again not
comparable to a destructor, because the object itself still exists. It
hasn't been released or destroyed and it is theoretically possible that
it could be reused. This is much more comparable to a C++ class that
hasn't been deleted, and thus hasn't been destroyed, but which has some
sort of "release your resources" function that has been called.
The destructor is nicely symmetrical with its constructor.
The latter initializes the object and the former cleans it up.

And in C# not having a reference to an object is nicely symmetrical to
creating a reference to an object. The latter initializes the object
and the former cleans it up. In a purely managed environment, releasing
a single reference to an object is exactly equivalent to the C++
paradigm of having to call a destructor where individual resources
within the object have to be explicitly cleaned up.

In fact, the garbage collection model is, at least for that particular
operation, much more efficient, because there's no need to go through
the entire object cleaning things up. Everything that object refers to
is automatically released, with a single nulling, or leaving scope, of
the last variable holding a reference to that object.

Overall, I suspect the efficiency is about the same. The extra work
that the C++ model has to do initially is balanced by the extra work the
garbage collector will have to do later.
This occurs immediately
when an object goes out of scope so your resources only exist for as long as
they're needed.

Likewise, with GC, as soon as an object is no longer referenced, any
managed resources it holds no longer exist. They are automatically
released when the object referencing them is.
It's all very well controlled and understood. You know
exactly when clean up is going to occur and need not worry about the timing of a
GC.

But why do you care when the GC is going to occur? It only happens when
it needs to, or when it gets the opportunity to, and there should be
nothing in your code that depends on or otherwise relies on when, if at
all, garbage collection happens.

In a multi-tasking operating system like Windows, there are a wide
variety of things that occur and which you have no control over. A
garbage collection system simply introduces a new instance to this
already very broad category of components.
In fact, a GC itself can even promote sloppy behaviour. People become
so used to it doing the clean up that they can neglect to explicitly clean
something up themselves when the situation calls for it

I don't understand that at all. A person who isn't used to releasing a
reference to an object when they are done with it isn't going to be used
to deleting a C++ object when they are done with it. Conversely, a person
who can remember to delete a C++ object when they are done with it can
remember to release a reference to a .NET object when they're done with it.
(such as immediately
releasing a resource that might later cause something else to fail if it
hasn't been GC'd yet).

What kind of resource? If you're talking about a managed resource, then
simply releasing the reference to the referencing object is sufficient
to release the resource.

If you're talking about an unmanaged resource, well...that's not a
problem inherent with garbage collection. It's a natural consequence of
mixing a garbage collection system with a traditional alloc/free system.
That problem _only_ exists because of the traditional alloc/free
system; it hardly seems fair to blame it on the garbage collection paradigm.
Or people might always perform shallow copies of
their objects where a deep copy is required (since it's just so easy).

This one I understand even less. If a deep copy is required but a
shallow copy is done, this is if anything more dangerous in the C++
model, because the referenced data can be freed by any one copy of the
instance. This just won't happen in a garbage collection system.

With a GC system, you still have the potential issue of having multiple
instances refer to the same data, but this issue exists regardless of
the memory management model. It's an implementation problem, not a
memory management problem.
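To make the shallow-vs-deep distinction concrete, a made-up C# example (Order is invented for illustration):

using System.Collections.Generic;

class Order
{
    public List<string> Items = new List<string>();

    // Shallow copy: both Order instances end up sharing the same list.
    public Order ShallowCopy()
    {
        return (Order)this.MemberwiseClone();
    }

    // Deep copy: the copy gets its own list with the same contents.
    public Order DeepCopy()
    {
        Order copy = new Order();
        copy.Items = new List<string>(this.Items);
        return copy;
    }
}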
In
C++ you have to think about these things more but that deeper thinking
process also sharpens your understanding of the issues IMO (and hopefully
the design of your app). Of course this is all a Utopian view of things. In
the real world most programmers require the handholding that a GC offers.

I generally agree that it is good to think more deeply about what is
going on. A person who understands better the lower levels is almost
always going to be able to use the higher level API more
effectively. But I don't see how that makes the C++ model necessarily
better; either model has some lower level implementation details that
are important to understand for most effective use, and C++ has just the
same potential for someone failing to bother to learn those lower level
implementation details as .NET does.

And I still think that people scoff at garbage collection at least as
much as they do the more traditional C++ model.

Pete
 

Chris Mullins [MVP - C#]

[What's a Memory Leak?]

Peter Duniho said:
I guess that depends on your definition of "memory leak". Mine is such
that a garbage collection system specifically precludes them.

The definition will vary, but even then, saying GC precludes them isn't
quite right.

There are all sorts of strange corner cases. Off the top of my head:
- Static variables are never collected off the high frequency heap
- The Large Object Heap isn't compacted

Think of the fun you could have allocating large byte arrays in static
constructors - memory would come off the LOH, and would likely never be able
to be reclaimed....

If we bring in Win32 & Interop, many corner cases spring up, the most common
one being fragmentation due to pinning.
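To illustrate the pinning case (a sketch only, not taken from the presentation):

using System;
using System.Runtime.InteropServices;

class PinningExample
{
    static void PinForInterop()
    {
        byte[] buffer = new byte[4096];

        // Pinning keeps the array at a fixed address so unmanaged code can
        // read or write it. While the handle is held, the GC cannot move the
        // array, so many long-lived pins can fragment the heap.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            // ... pass 'address' to native code ...
        }
        finally
        {
            handle.Free();
        }
    }
}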
To me, a true "memory leak" is a situation in which the memory is gone
forever for that process. There's simply no way for any code anywhere to
ever recover it. Whether or not such code exists isn't relevant to my use
of the term. It's whether it _could_ exist.

I think that's a misleading definition.

A leak, even in C/C++ land, is generally characterized by an application
bug. Sometimes it's as simple to fix as, "use an auto pointer", and other
times it's very complex. The same seems to hold true in .Net.
At the very least, it seems to me that in your presentation you should
make clear the differing definitions of "memory leak".

Nah. These are usually people new to .Net, and the level for them is just
right. More detail just becomes confusing...
 

Peter Duniho

Chris said:
[What's a Memory Leak?]

Peter Duniho said:
I guess that depends on your definition of "memory leak". Mine is such
that a garbage collection system specifically precludes them.

The definition will vary, but even then, saying GC precludes them isn't
quite right.

As I said, my definition does.
There are all sorts of strange corner cases. Off the top of my head:
- Static variables are never collected off the high frequency heap
- The Large Object Heap isn't compacted

Do you mean there is some programming error that would cause that to
happen? Can you be more specific?
Think of the fun you could have allocating large byte arrays in static
constructors - memory would come off the LOH, and would likely never be able
to be reclaimed....

But those would be arrays the application still holds a reference to,
no? If not, why wouldn't they be able to be reclaimed?
If we bring in Win32 & Interop, many corner cases spring up, the most common
one being fragmentation due to pinning.

I'm specifically talking about memory leaks. There are, of course,
other ways to interfere with memory allocations, such as fragmenting the
heap. That's outside the scope of what I'm talking about.
I think that's a misleading definition.

Well, I'm happy to agree to disagree. But just as I suspect there's at
least one person who agrees with your viewpoint, I think it's likely
there's at least one person who agree with mine. We're talking about a
semantic issue here, and those are almost never black & white.

Even if you don't agree with a particular viewpoint, you should at least
take it into account.
A leak, even in C/C++ land, is generally characterized by an application
bug.

Agreed. But I don't agree that all memory-related bugs are examples of
"leaks". A leak is a bug, but not all bugs are leaks.
[...]
At the very least, it seems to me that in your presentation you should
make clear the differing definitions of "memory leak".

Nah. These are usually people new to .Net, and the level for them is just
right. More detail just becomes confusing...

Your choice, of course. However, from my own personal point of view,
the fact that I might be new to .NET does not negate any previous
experience I might have, nor does it change how I view the definition of
a "memory leak".

Whether that's an issue in your presentation depends more on how many,
if any, of your audience shares my viewpoint than on how much, if
anything, they already know about .NET.

Pete
 

Göran Andersson

Enkidu said:
Some languages require it and some people moving from those languages
would feel better with it there. Sure they could do it the C# way and
leave it out, but it causes no problems, does it?

Also, more pertinently, if you should come along and modify the code, by
wrapping it in an 'if' for example and also put code after it, then you
have to remember to insert the 'return' if it is not already there.

On the other hand, you could just as well add code that you actually
want to be executed in both cases, so then you would have to remember to
remove the return statement. :)
 

Larry Smith

This is what I mean by "ugly":

void CSharpFunc()
{
    using (MyExpensiveObject obj = new MyExpensiveObject())
    {
        // ...
    }
}

This OTOH achieves "deterministic finalization" automatically and it's
syntactically cleaner:

void CPlusPlusFunc()
{
    MyExpensiveObject obj;

    // ...
}
 

Peter Duniho

Larry said:
This is what I mean by "ugly":

void CSharpFunc()
{
    using (MyExpensiveObject obj = new MyExpensiveObject())
    {
        // ...
    }
}

This OTOH achieves "deterministic finalization" automatically and it's
syntactically cleaner:

void CPlusPlusFunc()
{
    MyExpensiveObject obj;

    // ...
}

But those two functions aren't doing the same thing.

The C# equivalent to the CPlusPlusFunc() you posted is this:

void CSharpFunc()
{
    MyExpensiveObject obj = new MyExpensiveObject();

    // ...
}

As I pointed out, "using" is used for a completely different purpose.
It's a mistake to think of it as the same as a C++ destructor.

In fact, in .NET the runtime is smart enough to recognize when a
reference is not actually used throughout a function, and will in that
case release the reference _earlier_ than would be the case for C++.

So not only is the code no "uglier", the lifetime of the object in .NET
much more exactly matches its actual use than it does in C++.

Pete
 

Larry Smith

But those two functions aren't doing the same thing.
The C# equivalent to the CPlusPlusFunc() you posted is this:

void CSharpFunc()
{
    MyExpensiveObject obj = new MyExpensiveObject();

    // ...
}

In fact they are doing the same thing for all intents and purposes. I have
no reason to allocate my C++ object on the free-store (heap) in this
scenario but even if I did, I can always assign the pointer to an
"auto_ptr" or some other smart-pointer object. It's still syntactically
cleaner than the C# version. In practice however you need not rely on local
pointers most of the time (or even "new" for that matter) so it's usually a
non-issue.
As I pointed out, "using" is used for a completely different purpose. It's
a mistake to think of it as the same as a C++ destructor.

I realize it's not the same thing. It's just syntactic sugar for a call to
"Dispose()". It's intended to "destroy" the object however and once called,
you shouldn't access the object again.
In fact, in .NET the runtime is smart enough to recognize when a reference
is not actually used throughout a function, and will in that case release
the reference _earlier_ than would be the case for C++.

When an object's reference is no longer accessible, the call to "Finalize()"
is non-deterministic. It normally happens when memory exhaustion occurs
which triggers the GC. In theory however it might never occur so
"Finalize()" itself might never be called. GC is also a very expensive
process. The GC even suspends all threads in the application to carry out
its work by injecting the code to do this at strategic locations.
Implementing a proper clean-up routine also raises a host of other
housekeeping chores such as calling "SuppressFinalize()" in your "Dispose()"
method, handling multiple calls to "Dispose()", etc. (note that both
"Dispose()" and "Finalize()" should also be routed through the same common
cleanup function). The bottom line in any case is that you have no choice
but to release your resources explicitly if you can't wait for the GC to
invoke "Finalize()". The syntax for that is unsightly however (IMO).
So not only is the code no "uglier", the lifetime of the object in .NET
much more exactly matches its actual use than it does in C++.

How do you arrive at that conclusion? The call to "Finalize()" is
non-deterministic but the call to a C++ destructor isn't. That occurs as
soon as it exits its scope which you have complete control over. The
destructor is also syntactically cleaner than relying on a "using" statement
or calling "Dispose()" directly.
 

Peter Duniho

Larry said:
When an object's reference is no longer accessible, the call to "Finalize()"
is non-deterministic.

So what? The finalizer isn't part of the behavior of a correctly
written program. You should forget all about the finalizer. It is only
relevant when you have a bug.
GC is also a very expensive process.

I have not found it to be so and, even if it were, as I pointed out
whatever cost garbage collection has is at least partially, if not
completely, mitigated by the fact that releasing an object is a
constant-order operation (O(1)), whereas it's O(n) for conventional
free/alloc implementations (where n is the number of individually
referenced objects in the tree rooted by the object being released).

People who know a lot more about the garbage collector than I do have in
this very newsgroup provided very good explanations as to how very
efficient garbage collection actually is. The garbage collector doesn't
take nearly as much time to do its work as you seem to think it does.
[...]
The bottom line in any case is that you have no choice
but to release your resources explicitly if you can't wait for the GC to
invoke "Finalize()".

That is simply not true. As I have pointed out several times already,
the only time you need to release your resources explicitly is when you
have unmanaged resources. And that's an artifact of mixing GC with
free/alloc. It's not a problem inherent in garbage collecting.

One more time: if your object references only managed data, then the
instant your object is no longer referenced itself, ALL memory resources
referenced by that object are also no longer referenced, and are
released in the same instant the root-level object is.

The finalizer is completely irrelevant and in fact would not even need
to exist in a purely garbage-collected environment. It exists only
because of the unmanaged, non-garbage-collected resources and it's
either a mistake or an outright lie to try to point to the finalizer as
some inherent problem with garbage collection.

I'll assume that in this case, it's just a mistake on your part. But
you really should stop bringing it up. It's just not relevant.
[...]
So not only is the code no "uglier", the lifetime of the object in .NET
much more exactly matches its actual use than it does in C++.

How do you arrive at that conclusion?

Because it's true. An object is eligible for garbage collection after
the last statement that references it. The garbage collection need not
wait for the variable referencing the object instance to go out of
scope, or be set to null, or anything else like that.

And of course, as I have already pointed out, the moment the object is
eligible for garbage collection, it is effectively released. It doesn't
have to actually be collected for it to be considered logically returned
to the collection of available memory resources.
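A sketch of what that means in practice (LargeBuffer, Analyze and DoOtherWork are made up):

void Process()
{
    LargeBuffer data = new LargeBuffer();

    Analyze(data);   // last statement that actually uses 'data'

    // In an optimized build, 'data' is already eligible for collection
    // here, even though the variable is in scope until the method ends.
    DoOtherWork();
}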
The call to "Finalize()" is
non-deterministic but the call to a C++ destructor isn't.

And again, you need to forget about the finalizer. It's not relevant at
all in this part of the discussion.
That occurs as
soon as it exits its scope which you have complete control over. The
destructor is also syntactically cleaner than relying on a "using" statement
or calling "Dispose()" directly.

But those statements are also irrelevant to this part of the discussion.
They exist for explicit releasing of resources, but they aren't needed
unless you a) want to keep the object for later reuse, but release
resources (in which case it's exactly the same as C++) or b) you want to
ensure that unmanaged resources are also freed (in which case it's not
at all relevant to the question of the general paradigm of garbage
collection).

In other words, in neither case does "using" or Dispose() have anything
to do with any difference there might exist between the free/alloc
paradigm and the garbage collection paradigm.

Pete
 

Larry Smith

So what? The finalizer isn't part of the behavior of a correctly written
program. You should forget all about the finalizer. It only is relevant
when you have a bug.

You're focusing the crux of your argument on some theoretical notion that
we live in a world without unmanaged resources. Well explain to me what
resources you think the .NET classes are handling behind the scenes. This is
.NET for "Windows", not .NET for "Peter's purely managed OS X". In the world
we both live in the finalizer is a fact of life. What do you think the
"Note" section here means for instance:

http://msdn2.microsoft.com/en-us/library/system.drawing.font.dispose.aspx

I'm not aware of any "bug", so should I still "forget all about the
finalizer"? What if I'm holding an object that stores a network resource of
some type, possibly using a native .NET class that holds this. If I don't
explicitly release it then it might never get released and my app might
eventually fail somewhere (after creating enough instances). In fact, even
your own objects should implement "Dispose()" whenever they store references
to other objects that implement "Dispose()". The focus of my previous posts
has been on this very issue. I'm talking about the (cleaner) syntax of
using a C++ destructor versus finalize/dispose for releasing *unmanaged*
resources (not objects that live entirely on the managed heap and are
therefore cleaned up automatically).
 

Jon Skeet [C# MVP]

You're focusing the crux of your argument on some theoretical notion that
we live in a world without unmanaged resources. Well explain to me what
resources you think the .NET classes are handling behind the scenes. This is
.NET for "Windows", not .NET for "Peter's purely managed OS X". In the world
we both live in the finalizer is a fact of life. What do you think the
"Note" section here means for instance:

http://msdn2.microsoft.com/en-us/library/system.drawing.font.dispose....

I'm not aware of any "bug", so should I still "forget all about the
finalizer"?

<snip>

I suspect Pete's point is that in a program without a bug, the
finalizer won't be run - because Dispose will have been called
explicitly. Finalizers usually just provide a safety net *in case* you
have bugs.
What if I'm holding an object that stores a network resource of
some type, possibly using a native .NET class that holds this. If I don't
explicitly release it then it might never get released and my app might
eventually fail somewhere (after creating enough instances).

So in that case, you have a bug. If your code fails to call Dispose on
something that implements IDisposable (and does so accidentally,
rather than through absolute and correct knowledge that Dispose does
nothing for that case) then you have a bug, and a finalizer *may* get
you out of a scrape.
In fact, even your own objects should implement "Dispose()" whenever
they store references to other objects that implement "Dispose()".

Absolutely, although in those cases they shouldn't usually implement a
finalizer as well.
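A small sketch of that case (LogWriter is made up): it wraps another IDisposable, so it implements Dispose but adds no finalizer of its own.

using System;
using System.IO;

class LogWriter : IDisposable
{
    // LogWriter owns no unmanaged resources directly, only another
    // IDisposable, so no finalizer is needed here.
    private readonly StreamWriter writer = new StreamWriter("log.txt");

    public void Write(string message)
    {
        writer.WriteLine(message);
    }

    public void Dispose()
    {
        writer.Dispose();
    }
}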

Jon
 

Larry Smith

What if I'm holding an object that stores a network resource of some type [...]
So in that case, you have a bug. If your code fails to call Dispose on
something that implements IDisposable (and does so accidentally,
rather than through absolute and correct knowledge that Dispose does
nothing for that case) then you have a bug, and a finalizer *may* get
you out of a scrape.

The term "bug" in this context is used somewhat loosely however. Neglecting
to call "Dispose()" isn't a bug unto itself unless absolutely mandated by a
particular object. And even then it only becomes a latent bug as you alluded
to (which may never surface). Why invite trouble, however? It's probably a
best practice to always invoke "Dispose()" given the intended purpose of
this function. In any case, this is just a side-issue to what we've been
discussing (C++ destructor syntax vs finalize/dispose).
 

John Duval

Peter Duniho said:
<snip>
Your choice, of course. However, from my own personal point of view,
the fact that I might be new to .NET does not negate any previous
experience I might have, nor does it change how I view the definition of
a "memory leak".

Whether that's an issue in your presentation depends more on how many,
if any, of your audience shares my viewpoint than on how much, if
anything, they already know about .NET.

Pete

Chris,
For what it's worth, I'm with Pete on this one. It's probably due to
the fact that I have C++ background, but to me a memory leak means a
very specific thing, which is that the memory has been orphaned and
there is no reference anywhere that can be used to recover it. Not
all increases in memory usage are memory leaks.

Even if I were new to .NET, I think it would be helpful to be a little
more specific about your definition of memory leak. Even people who
are new to .NET have heard that one of the big benefits is garbage
collection which prevents memory leaks.
John
 

Marc Gravell

Neglecting to call "Dispose()" isn't a bug unto itself unless
absolutely mandated by a particular object.

IMO, implementing IDisposable is just such a mandate - by the
encapsulation principle, i.e. you shouldn't know or care what goes on
under the covers.
It would be nice if the language had some way of enforcing this, and
transferring ownership in the case of factory methods, or handing
ownership to a wrapper object that was itself IDisposable. Oh well.

Marc
 

Larry Smith

IMO, implementing IDisposable is just such a mandate

Probably the most reasonable interpretation anyway.
 

Ben Voigt [C++ MVP]

Chris,
For what it's worth, I'm with Pete on this one. It's probably due to
the fact that I have C++ background, but to me a memory leak means a
very specific thing, which is that the memory has been orphaned and
there is no reference anywhere that can be used to recover it. Not
all increases in memory usage are memory leaks.

I know exactly what you're getting at, but you and Peter are both wrong.

A memory leak is memory that remains unavailable for reuse after it is no
longer needed. This is the only definition of memory leak that makes sense.
After all, by your definition, this isn't a memory leak (C++):

void f(void)
{
    int* p = new int[1024];
}

Of course that is a memory leak. And so is this (C#):

class C
{
    private static int[] a;
    static C() { a = new int[1024]; }
}

C c = new C();

These cases are *identical*. Both int arrays are still accessible (in
native code, via HeapWalk, in managed, via reflection on the Type object
which, once loaded, is never freed until the AppDomain unloads), but are
also totally useless in the context given.
 

Ben Voigt [C++ MVP]

I don't understand that at all. A person who isn't used to releasing a
reference to an object when they are done with it isn't going to be used
to deleting a C++ object when they are done with it. Conversely, a person
who can remember to delete a C++ object when they are done with it can
remember to release a reference to a .NET object when they're done with
it.

Not "a person who can remember to delete a C++ object:". A good programmer
knows how to use a reference-counting smart pointer, where cleanup is just
as automatic as with a garbage collector (and sometimes moreso). A smart
pointer as a static member is just as bad as with GC, but at least C++ smart
pointers do the right thing with local variables and exceptions or early
exits -- automatically.
 

Ben Voigt [C++ MVP]

Peter Duniho said:
But those two functions aren't doing the same thing.

The C# equivalent to the CPlusPlusFunc() you posted is this:

void CSharpFunc()
{
    MyExpensiveObject obj = new MyExpensiveObject();

    // ...
}

No, because Larry's is exception-safe.
 

Ben Voigt [C++ MVP]

Marc Gravell said:
IMO, implementing IDisposable is just such a mandate - by the
encapsulation principle, i.e. you shouldn't know or care what goes on
under the covers.
It would be nice if the language had some way of enforcing this, and
transferring ownership in the case of factory methods, or handing
ownership to a wrapper object that was itself IDisposable. Oh well.

The language does. Well, C++/CLI does at least. A C++/CLI class
automatically implements IDisposable to call Dispose on every member object
that implements IDisposable.
 
