Idea: GC and IDisposable

G

Göran Andersson

Hilton said:
Just holding a reference to it will keep it alive as it always has.

Hilton

Just holding a reference is not enough. You have to actually use the
reference; once the runtime can tell that it will never be used again,
the object is eligible for collection.
 
K

KWienhold

Not necessarily. From the docs for IDisposable:

<quote>
Use this method to close or release unmanaged resources such as files,
streams, and handles held by an instance of the class that implements
this interface. This method is, by convention, used for all tasks
associated with freeing resources held by an object, or preparing an
object for reuse.
</quote>

Note the "or preparing an object for reuse". I can't think of any
examples where that actually *is* the use of IDisposable, but it would
be legal according to the docs.

As usual, you are right of course ;)
It's just that I have never come across IDisposable being used in this
fashion, so it would at least raise a red flag in my mind.
I never actually noticed this in the documentation, but it's something
to keep in mind. Up to now I would not have thought of implementing
IDisposable to indicate an object that could be cleaned and then
reused.
I'm still left wondering whether this would be a good idea, though,
since it would elude other people reading my code, and having a
separate method for that case seems clearer.
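
Just to make the idea concrete, here is a purely hypothetical sketch
(all names invented, not from any real API) of what "Dispose to prepare
for reuse" might look like:

using System;

// Hypothetical: IDisposable used to "prepare an object for reuse"
// rather than to release unmanaged resources.
public sealed class ReusableBuffer : IDisposable
{
    private readonly byte[] _data = new byte[4096];
    private int _length;

    public void Append(byte[] source, int count)
    {
        Array.Copy(source, 0, _data, _length, count);
        _length += count;
    }

    // Dispose() here just resets the instance so it can be handed
    // out again (e.g. by a pool) -- legal per the docs, but surprising.
    public void Dispose()
    {
        Array.Clear(_data, 0, _length);
        _length = 0;
    }
}

That said, a plain Reset() method would express the same intent more
clearly.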

Kevin Wienhold
 
D

Doug Semler

Doug said:
[...]
Whoa, wait. Finalize should NEVER call Dispose to clean up managed
resources, since the managed resource references may no longer exist!

I don't disagree. But I was just assuming that we were talking about
unmanaged resources here. After all, those are the ones that cause the
most trouble if not disposed properly.

Did I miss something?

Huh... *we* may have known that, but some others lurking in the thread
may have interpreted the statement to mean that it is safe to
implement a Finalizer that calls Dispose() without regard to whether
the Finalize() method was doing the calling.

I actually like the C++/CLI's compiler's way of "automagically"
implementing the IDisposable pattern if you insert into your classes:

~ClassName()
{
    // Dispose managed objects, then chain to the finalizer
    this->!ClassName();
}

!ClassName()
{
    // Clean up unmanaged resources only
}

The compiler is nice enough to implement IDisposable, Dispose(bool),
proper calling sequence, and just for good measure a
GC.SuppressFinalize in there on the explicit Dispose call <g>
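
For comparison, what the compiler emits is roughly equivalent to this
C# (a sketch of the shape, not the exact generated code):

using System;

public class ClassName : IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // no finalization after explicit Dispose
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // body of ~ClassName(): dispose managed objects,
            // which (as written above) chains to !ClassName()
        }
        else
        {
            // body of !ClassName(): clean up unmanaged stuff only
        }
    }

    ~ClassName()
    {
        Dispose(false); // finalizer path
    }
}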
 
G

Guest

The GC handles cleaning up of managed resources just fine as far as I
know.

For any Compact Framework object containing an Image or a Bitmap it's really
useful, as those classes hold native resources that you often want to get rid
of as soon as possible on a memory-constrained device.

Also maybe for controlling the lifetime of something in a using{...}
pattern? Maybe?
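
Something like this invented scope guard, for instance (nothing
unmanaged in it at all):

using System;
using System.Diagnostics;

// Invented example: IDisposable used purely for lifetime scoping.
public sealed class TimedScope : IDisposable
{
    private readonly string _label;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public TimedScope(string label)
    {
        _label = label;
    }

    public void Dispose()
    {
        Console.WriteLine("{0}: {1} ms", _label, _watch.ElapsedMilliseconds);
    }
}

// using (new TimedScope("load"))
// {
//     // ... work being timed ...
// }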


--

Chris Tacke, Embedded MVP
OpenNETCF Consulting
Managed Code in an Embedded World
www.OpenNETCF.com
 
J

Jon Skeet [C# MVP]

For any Compact Framework object containing an Image or a Bitmap it's really
useful, as those classes hold native resources that you often want to get rid
of as soon as possible on a memory-constrained device.

Also maybe for controlling the lifetime of something in a using{...}
pattern? Maybe?

I would imagine that KWienhold meant that you don't (usually)
implement IDisposable on a class which only references managed
resources, directly or indirectly. If you've got a class which itself
"owns" some other IDisposables, that container class should also
implement IDisposable.
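
A minimal sketch of that ownership rule (the LogWriter class is
invented for illustration):

using System;
using System.IO;

// A class that "owns" another IDisposable should implement
// IDisposable itself and pass the call along.
public sealed class LogWriter : IDisposable
{
    private readonly StreamWriter _writer;

    public LogWriter(string path)
    {
        _writer = new StreamWriter(path);
    }

    public void Write(string message)
    {
        _writer.WriteLine(message);
    }

    // No finalizer needed: nothing unmanaged is held directly, and
    // the StreamWriter has its own safety net.
    public void Dispose()
    {
        _writer.Dispose();
    }
}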

Jon
 
G

Guest

Certainly, but again for any lurkers I want it to be clear that there are
good reasons for a class that contains only managed classes to still
implement IDisposable.

I must say, cross-posting in groups other than just the CF (where I and the
OP typically reside) brings in some fresh perspectives and provides some
great insights. We should do this more often.

--

Chris Tacke, Embedded MVP
OpenNETCF Consulting
Managed Code in an Embedded World
www.OpenNETCF.com
 
G

Göran Andersson

Hilton said:
Sure there is. How would you call Dispose? You'd call it on an object -
right? That means that it would not be GC'd (yet) since you are holding a
reference to it, so everything works just fine as it always has.

Actually, holding a reference is not enough to keep an object alive. You
have to actually use the reference for it to count as an active
reference. As soon as the GC can determine that the reference will never
be used again, the object can be garbage collected.

With your suggestion, to keep an object alive you would have to call
GC.KeepAlive(obj) where you would now call obj.Dispose(). It's no less
work for the developer, and the GC would have to work a lot harder.
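
The classic illustration is a timer that is only ever touched at
construction (a minimal sketch):

using System;
using System.Threading;

class KeepAliveDemo
{
    static void Main()
    {
        Timer timer = new Timer(delegate { Console.WriteLine("tick"); },
                                null, 0, 1000);
        Console.ReadLine();

        // Without this, the JIT may treat 'timer' as dead right after
        // its last use above, so the GC could collect (and thereby
        // stop) it while we are still waiting for input.
        GC.KeepAlive(timer);
    }
}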
Now when
this object can be GC'd (no refs etc), then 'my' logic kicks in, disposes
the object and frees the memory ***IF*** it hasn't already been disposed.
It's just a safety net, not a new paradigm.

Your code is not affected, you can continue to optimize, but the GC will
dispose of any objects that need to be disposed, but weren't. Sounds good
to me.

A finalizer is usually used as a safety net in the Dispose pattern,
to at least try to free the unmanaged resources in case someone forgets
to call Dispose. Ideally the finalizer never actually runs, though, as
finalization costs more than calling Dispose. In this pattern the Dispose
method calls GC.SuppressFinalize to remove the object from the
finalization list.
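
In code, the whole pattern looks roughly like this (a generic sketch;
the freed handle is just a placeholder):

using System;
using System.Runtime.InteropServices;

public class NativeResourceHolder : IDisposable
{
    private IntPtr _handle = Marshal.AllocHGlobal(1024); // placeholder
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // remove us from the finalization list
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // free managed resources here (safe only on this path)
        }
        // free unmanaged resources on both paths
        if (_handle != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_handle);
            _handle = IntPtr.Zero;
        }
        _disposed = true;
    }

    ~NativeResourceHolder()
    {
        Dispose(false); // the safety net described above
    }
}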

If the Dispose method is not called, the object remains in the
finalization list. When the GC is about to collect the object, it has to
put it in the f-reachable queue instead, where a separate thread will
eventually execute the finalizer before the object can be collected. As
the object can't be collected right away, it might also have to be
promoted to the next GC generation, which means that the entire object
is moved in memory.

If you want to rely on the finalizer instead of the Dispose pattern, the
GC would have to become a lot more aggressive when running the
finalizers, starting a lot of threads with rather high priority to ensure
that it doesn't take too long before the finalizers are executed. This
of course means that the performance of an application would be much
less predictable, as all finalizers would have to run right away,
instead of when there is low pressure on the system.
 
B

Ben Voigt [C++ MVP]

Doug Semler said:
Doug said:
[...]
Whoa, wait. Finalize should NEVER call Dispose to clean up managed
resources, since the managed resource references may no longer exist!

I don't disagree. But I was just assuming that we were talking about
unmanaged resources here. After all, those are the ones that cause the
most trouble if not disposed properly.

Did I miss something?

Huh... *we* may have known that, but some others lurking in the thread
may have interpreted the statement to mean that it is safe to
implement a Finalizer that calls Dispose() without regard to whether
the Finalize() method was doing the calling.

I actually like the C++/CLI's compiler's way of "automagically"
implementing the IDisposable pattern if you insert into your classes:

~ClassName()
{
    // Dispose managed objects, then chain to the finalizer
    this->!ClassName();
}

!ClassName()
{
    // Clean up unmanaged resources only
}

The compiler is nice enough to implement IDisposable, Dispose(bool),
proper calling sequence, and just for good measure a
GC.SuppressFinalize in there on the explicit Dispose call <g>

And automatically Dispose member objects (if declared without the tracking
handle modifier).
 
H

Hilton

Jon,

Your post raises an interesting point which I have often thought about. If
a class implements IDisposable, the developer then has a license to later
add the use of unmanaged resources, and vice versa. For example, one Bitmap
constructor uses managed resources and another uses unmanaged resources. But
ignore that for a minute, and let's say that Microsoft originally shipped
the Bitmap class with just one constructor that used managed resources, and
therefore the class did not (need to) implement IDisposable. Then later on,
they decide that the Bitmap class should use unmanaged resources throughout
for performance (even in the constructor that only used managed resources).
They're screwed, because existing apps won't know (or be able) to call
Dispose without being recompiled, and all apps that used the new .NET
would have this memory leak. Note: I'm just using a hypothetical Bitmap
class as an example here; let's not go off on a tangent about the
actual Bitmap class. The point is that once a public library class ships
without implementing IDisposable to free unmanaged resources, it never can
add it (and expect the right things to happen).

Gee, I've just made a great argument in favor of the OP proposal. :)

Fire away.

Hilton
 
J

Jon Skeet [C# MVP]

Your post raises an interesting point which I have often thought about. If
a class implements IDisposable, the developer then has a license to later
add the use of unmanaged resources, and vice versa. For example, one Bitmap
constructor uses managed resources and another uses unmanaged resources. But
ignore that for a minute, and let's say that Microsoft originally shipped
the Bitmap class with just one constructor that used managed resources, and
therefore the class did not (need to) implement IDisposable. Then later on,
they decide that the Bitmap class should use unmanaged resources throughout
for performance (even in the constructor that only used managed resources).
They're screwed, because existing apps won't know (or be able) to call
Dispose without being recompiled, and all apps that used the new .NET
would have this memory leak. Note: I'm just using a hypothetical Bitmap
class as an example here; let's not go off on a tangent about the
actual Bitmap class.

That would be a breaking change to the class, as far as I'm concerned.
They would shy away from that, creating another class instead.

Changing classes to implement IDisposable retrospectively is a major
versioning change, IMO.

The point is that once a public library class ships without implementing
IDisposable to free unmanaged resources, it never can add it (and expect
the right things to happen).

Indeed.

Gee, I've just made a great argument in favor of the OP proposal. :)

I don't see how you have. Why does this fact support the proposal?

Jon
 
B

Brian Gideon

The point is that once a public library class ships without implementing
IDisposable to free unmanaged resources, it never can add it (and expect
the right things to happen).

Never mind the fact that adding *any* interface after the fact could be
a version-breaking change. That's why the Framework Design Guidelines
book recommends that you choose them wisely right from the get-go.
 
H

Hilton

Jon said:
I don't see how you have. Why does this fact support the proposal?

Because when the Bitmap gets GC'd, the GC will do a "if (o 'implements'
IDisposable && !o.Disposed) o.Dispose()", and even though this Bitmap object
won't be Disposed deterministically, it will be Disposed at least when/if an
OutOfMemory exception gets thrown.

Please recall, I'm not saying that my proposal is perfect and that we should
change our code to use it (it requires no code change in the apps). All I'm
saying is that if the GC added the "if" as stated above, it would act as some
level of safety net and catch undisposed objects.

I think we're kidding ourselves to think that you, me, and all the other
.NET engineers in the world will always write code to do the right thing. I
really like "using ()" and use it all the time, but there are some objects
whose lives cannot be bottled up in a few lines. One little bug and whammo,
a LOT of memory potentially gets leaked. I cannot even count the times
people have posted here about being out of memory only to find that they
didn't know to Dispose a Bitmap object. The purist in you will jump up and
down and say "well they should have, they screwed up, and they didn't call
Dispose() - their fault." Now if only we all lived in a perfect world
where one perfect engineer was solely responsible for one perfect project...
However, multiple engineers work on multiple projects, and one missing
Dispose() can cost a company millions. Then when we get to finding the
problem, we're back to hunting the problem-causing 'malloc-free' we're all
so fond of. :|

Why not add a safety net to make a C# application more robust, less leaky,
stay up longer, etc? Is it a band aid? Absolutely.

Hilton
 
R

Robert Simpson

There are basically two kinds of cleanup for objects: objects which
implement the Dispose pattern, and objects that have a finalizer. There
are strict, documented rules regarding finalizers -- rules that disposable
objects do not have to obey. There are very good reasons for this.

Having the GC call Dispose() would be a disaster, because Dispose() is
documented to behave differently and carry fewer restrictions than a
finalizer. Consider the following problems with having the GC call
Dispose():

1. If GC were to call Dispose() then it also has to manage a complete graph
of the object(s) that object holds references to -- dunno if it already
does this, but in order to call Dispose() everything that Dispose() might
have access to must be alive and uncollected. Furthermore, there's no
guarantee that any outside references beyond the scope of the graph that
Dispose() might call will be there. Imagine if GC collects object B, which
cleans up a static resource, then GC cleans up object A, which calls a
static method on object B which uses the static resource cleaned up earlier.

2. GC runs in a separate thread. If you have any [ThreadStatic] objects in
your class that you dispose of in Dispose(), they will not be cleaned up.
Furthermore, you open yourself to potential race conditions on resources.
For example, Dispose() in object A calls a method in object B, which removes
object A from its internal collection at the same time your main program
thread calls another instance of object A to add an item to said collection.
You never needed to worry about multi-threading in your single-threaded app
before, but now you have a mysterious error that seems to only crop up in
release mode and only under stress. (A contrived sketch of this hazard
follows the list below.)

3. You could quite easily lock the GC thread. A poorly-designed program
could easily block on a mutex or event, or pop up a dialog, freezing the GC
thread permanently or semi-permanently.
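
Here is a contrived sketch of the hazard in point 2 (all names
invented): List<T> is not thread-safe, so a GC-driven Dispose() racing
a main-thread Add() could corrupt the collection.

using System;
using System.Collections.Generic;

static class Registry
{
    public static readonly List<Widget> Items = new List<Widget>();
}

sealed class Widget : IDisposable
{
    public Widget()
    {
        Registry.Items.Add(this);     // main thread writes here...
    }

    public void Dispose()
    {
        Registry.Items.Remove(this);  // ...while a hypothetical GC
                                      // thread writes here, unlocked.
    }
}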

In short ... leave it as-is ... IDisposable and finalizers were fairly well
thought-out and have specific reasons for existing and specific (and
different) limitations on them.

Robert Simpson
 
J

Jon Skeet [C# MVP]

Hilton said:
Because when the Bitmap gets GC'd, the GC will do a "if (o 'implements'
IDisposable && !o.Disposed) o.Dispose()", and even though this Bitmap object
won't be Disposed deterministically, it will be Disposed at least when/if an
OutOfMemory exception gets thrown.

That's already the case, because Bitmap has a finalizer (via Image)
which calls Dispose.
Please recall, I'm not saying that my proposal is perfect and that we should
change our code to use it (it requires no code change in the apps). All I'm
saying is that if the GC added the "if" as stated above, it would act as some
level of safety net and catch undisposed objects.

Assuming that all classes which *directly* hold unmanaged resources
(and there should be very few of those) implement a finalizer
appropriately, I don't think your proposal adds anything.
I think we're kidding ourselves to think that you, me, and all the other
.NET engineers in the world will always write code to do the right thing. I
really like "using ()" and use it all the time, but there are some objects
whose lives cannot be bottled up in a few lines. One little bug and whammo,
a LOT of memory potentially gets leaked.

No, memory doesn't potentially get leaked - the GC handles that.
Unmanaged resources should be handled by the class which directly holds
them, and I'd hope that any developer writing such a class (I can't
remember the last time I did it myself) knows enough to implement a
finalizer.
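
One way to keep the number of such classes small, as a sketch (the
release call is a placeholder, not a real API): derive a handle wrapper
from SafeHandle and let that one type own the finalization logic.

using System;
using Microsoft.Win32.SafeHandles;

internal sealed class MyNativeHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public MyNativeHandle() : base(true) { } // we own the handle

    protected override bool ReleaseHandle()
    {
        // placeholder: call the real native release function here,
        // e.g. return NativeMethods.CloseHandle(handle);
        return true;
    }
}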
I cannot even count the times
people have posted here about being out of memory only to find that they
didn't know to Dispose a Bitmap object.

I can't remember seeing that, except for knowing that if you have run
out of Windows handles, GDI+ throws an OutOfMemoryException which gives
the wrong impression. In that situation the GC won't have been called
anyway, so your proposal does no good.
The purist in you will jump up and
down and say "well they should have, they screwed up, and they didn't call
Dispose() - their fault." Now if only we all lived in a perfect world
where one perfect engineer was solely responsible for one perfect project...
However, multiple engineers work on multiple projects, and one missing
Dispose() can cost a company millions. Then when we get to finding the
problem, we're back to hunting the problem-causing 'malloc-free' we're all
so fond of. :|

Why not add a safety net to make a C# application more robust, less leaky,
stay up longer, etc? Is it a band aid? Absolutely.

It already exists in the form of a finalizer.

What do you believe your proposal adds to the current finalization
strategy?
 
G

Guest

So you're suggesting that the model of a collection moves from this (these
are all CF GC, so not generational):

- Walk roots
- Mark all objects with a root
- Sweep all objects not marked and without finalizers
- move all non-marked objects with finalizers to freachable queue and
re-root
- Optionally compact heap
- Optionally shrink heap
- spawn finalization thread

to this:

- Walk roots
- Mark all objects with a root
- Sweep all objects not marked and without finalizers
- Call Dispose on non-marked, disposable items without finalizers
- move all non-marked objects with finalizers to freachable queue and
re-root
- Optionally compact heap
- Optionally shrink heap
- spawn finalization thread

So collection would have to wait for all Dispose calls to complete. Serious
perf impact there that I couldn't accept.

Or this:
- Walk roots
- Mark all objects with a root
- Sweep all objects not marked and without finalizers
- move all non-marked IDisposable objects to some new "dispose queue" and
re-root
- move all non-marked objects with finalizers to freachable queue and
re-root
- Optionally compact heap
- Optionally shrink heap
- spawn Dispose thread
- spawn finalization thread

How do the Dispose and finalizer threads make sure they don't have contention
or deadlocks when an object is IDisposable and has a finalizer? Or would
all Dispose calls complete before the finalization thread starts? You
realize that this also means that anything that implements IDisposable would
require two collections to free (just like something with a finalizer)?

With the current GC architecture:

If an object has unmanaged resources, it should have a finalizer that in
turn calls Dispose if necessary. So when the object is finalized, resources
are released. Your solution would not fix anything here, as that finalizer
is already going to get called after the first GC following it losing all
roots. All managed resources will be released during the next collection
after that.

If the object has managed resources and no finalizer, the GC will collect
them on the first GC following it losing all roots.

If the object has native resources and no finalizer, it's a
bug/implementation error.

The only possible benefit of your architecture would be to allow both
managed and unmanaged resources to be released in a single GC cycle if the
object had a finalizer, provided that the object's finalizer and Dispose
methods are written properly to take advantage of that. You can already
have that advantage by implementing the finalizer/Dispose properly and just
calling Dispose in your app.
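
In other words, with a properly written class (say the
NativeResourceHolder pattern sketched earlier in the thread), the
app-controlled path is just:

// Dispose() runs at the end of the block and suppresses finalization,
// so the next GC reclaims everything in a single pass.
using (NativeResourceHolder holder = new NativeResourceHolder())
{
    // ... use the resource ...
}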

Failure to call Dispose will not cause a memory leak (unless the Disposable
class has a bug in it); it simply shifts the time at which resources are
freed from being app-controlled to GC-controlled. They still get released in
either case. If the failure to call Dispose in an app causes a company to
lose millions of dollars, I'd posit that said company needs to revisit how
its architecture could be so unbelievably broken as to allow such a thing.


--

Chris Tacke, Embedded MVP
OpenNETCF Consulting
Managed Code in an Embedded World
www.OpenNETCF.com
 
G

Guest

2. GC runs in a separate thread.

Not true in the case of the CF. GC runs in the context of the thread that
made the allocation that caused GC to occur. All other managed threads in
the AppDomain are suspended during the entire collection cycle (hence my
major concerns about performance of this suggestion).
3. You could quite easily lock the GC thread. A poorly-designed program
could easily block on a mutex or event, or pop up a dialog, freezing the GC
thread permanently or semi-permanently.

Again, see my comment above.
In short ... leave it as-is ... IDisposable and finalizers were fairly
well thought-out and have specific reasons for existing and specific (and
different) limitations on them.

Agreed. Disposition is not the GC's job and it shouldn't be looking at it.


--

Chris Tacke, Embedded MVP
OpenNETCF Consulting
Managed Code in an Embedded World
www.OpenNETCF.com
 
G

Guest

I cannot even count the times people have posted here about being out of
memory only to find that they didn't know to Dispose a Bitmap object.

I can't remember seeing that, except for knowing that if you have run
out of Windows handles, GDI+ throws an OutOfMemoryException which gives
the wrong impression. In that situation the GC won't have been called
anyway, so your proposal does no good.

He's talking about the CF Bitmap, explained in depth here:
http://blogs.msdn.com/scottholden/archive/2006/08/22/713056.aspx
And here:
http://blog.opennetcf.org/ctacke/PermaLink,guid,987041fc-2e13-4bab-930a-f79021225b74.aspx

And as I stated in the second link, I think it's a bug in the Bitmap (and
I've voiced the concern personally with Scott - not sure if CF 3.5 fixed
it). Altering the GC to fix an implementation bug would be just plain
silly.


--

Chris Tacke, Embedded MVP
OpenNETCF Consulting
Managed Code in an Embedded World
www.OpenNETCF.com
 
