Andrey,
See inline:
Lately I've been (and still am) fixing some memory leak problems in the
project I just took over when I got this new job. Among other issues,
I've noticed that for locally created objects it makes a difference to
explicitly set them to null after working with them is done - it somehow
makes the GC collect them sooner, like here:
void SomeFunc()
{
    MyClass c = new MyClass();
    c.CallSomeOtherFunc();
    c = null;
}
    This depends on how it is compiled. If you are compiling in debug mode,
then yes, it will make the object eligible for GC sooner (although how much
sooner is questionable, since c would have been released very quickly
afterwards anyway, when the method exits). The reason for this is that code
compiled in debug mode keeps references alive until the end of the method,
even after they are no longer used. When compiled for release mode, however,
you are actually ^extending^ the lifetime of your object with the "c = null"
assignment. If the statement was not there, the JIT would realize that the
object pointed to by c is no longer needed after the call to
CallSomeOtherFunc (unless the reference is stored somewhere else inside
CallSomeOtherFunc), and the object would become eligible for GC at that
point.
    However, with the "c = null" statement, the JIT sees that c is used
again (although I don't know whether aggressive optimizations can eliminate
the dead assignment), so the object pointed to by c isn't made eligible
until ^after^ the assignment.
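One way to actually observe this is with a WeakReference, which tracks an object without keeping it alive. This is a sketch, not your code: Tracked and Demo are hypothetical stand-ins for MyClass and SomeFunc, and the mid-method result depends on the build and on JIT behavior:

```csharp
using System;

// Hypothetical stand-in for MyClass.
class Tracked
{
    public void CallSomeOtherFunc() { }
}

class Program
{
    public static WeakReference Demo()
    {
        Tracked c = new Tracked();
        WeakReference wr = new WeakReference(c);
        c.CallSomeOtherFunc();
        // In an optimized (release) build, c is dead at this point, so a
        // collection here can already reclaim the object. In a debug build,
        // the JIT keeps c alive until the method exits.
        GC.Collect();
        Console.WriteLine("Alive mid-method: " + wr.IsAlive);
        return wr;
    }

    static void Main()
    {
        WeakReference wr = Demo();
        GC.Collect();
        // Once the method has returned, nothing roots the object in either
        // build, so it is collectible here.
        Console.WriteLine("Alive after return: " + wr.IsAlive);
    }
}
```

Running this in both configurations shows the debug/release difference described above without having to guess from GC timing.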
If I don't set "c" to null, the time it takes to get it collected is way
longer than if I explicitly null out the reference. It's really strange, as
it's obvious that the object's lifetime is limited to the body of
"SomeFunc".
    This isn't true. GCs are not pre-determined. Why a GC occurs very
quickly in one run of the app as opposed to very late in another run is not
easily determinable, and there are factors outside your application that can
force a GC as well. The timing of a GC in one run of the app vs. another run
means very little.
But anyway, my question is: is there any difference between the first
example and this:
void SomeFunc()
{
    using (MyClass c = new MyClass())
    {
        c.CallSomeOtherFunc();
    }
}
The second way looks nicer to me and avoids forgetting to explicitly null
out the reference, but I'm not sure whether this second case is treated the
same way as the first one.
    This is a separate issue completely. Just because you implement
IDisposable, it doesn't mean that the object no longer needs to be GCed.
If you implement IDisposable, it means that you need to manage the lifetime
of the object in some way by calling the Dispose method on the
implementation. There are usually two reasons to implement IDisposable.
    The first, and most common, is because the instance is holding on to
some unmanaged resource, and you need to release it as soon as possible.
When implementing IDisposable in this scenario, you also need to implement a
finalizer (destructor, although the term is incorrect, IMO). The finalizer
is meant to be a safeguard in case callers don't call Dispose. It will
basically release the unmanaged resource if that hasn't been done already.
    Finalization is an expensive process. The object is basically
resurrected and placed in a finalization queue, and then has its finalizer
executed. If Dispose is called on an object, it doesn't need to be
finalized, which is why you see a call to GC.SuppressFinalize in the
implementation of Dispose, and not in the finalizer itself. Since the
finalizer and Dispose do the same cleanup, if Dispose is called, you can
effectively tell the GC not to finalize the object, and avoid that costly
overhead.
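To make that concrete, here is a minimal sketch of the usual dispose pattern for this first scenario. UnmanagedHolder and its handle field are hypothetical stand-ins for a class wrapping a native resource:

```csharp
using System;

class UnmanagedHolder : IDisposable
{
    private IntPtr handle;   // imagine this holds an unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // Dispose has done the cleanup, so tell the GC to skip the
        // costly finalization step for this instance.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        // Release the unmanaged resource here (e.g. a native CloseHandle
        // call on handle in a real implementation).
        handle = IntPtr.Zero;
        disposed = true;
    }

    // The safeguard: runs only if Dispose was never called.
    ~UnmanagedHolder()
    {
        Dispose(false);
    }
}
```

Note that GC.SuppressFinalize lives in Dispose, not in the finalizer, exactly as described above: it only makes sense to skip finalization when the cleanup has already happened.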
However, even if you say that you are going to suppress finalization,
the object still has to be GCed. The process of having the memory reclaimed
still has to occur on this object, even when Dispose is called. However,
it's something you really don't care about, because it's just memory, and
you gave up control of that (for the most part) by agreeing to run in the
CLR (that's the purpose of the GC, to worry about these things, not yours).
Suppressing finalization only means the finalizer will not be called, it
doesn't mean that the object has been GCed.
    The second reason to implement IDisposable is because you need some
deterministic cleanup to take place. For example, you might always need some
variable set back after an operation in a method. In this case, writing a
class which implements IDisposable might be a good idea, since you can
revert the variable when the using block is exited (instead of having to
remember to code a try/finally block explicitly).
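A sketch of that second scenario, assuming a hypothetical helper (ValueRestorer is not a framework class) that sets a value on entry and puts it back when the using block exits:

```csharp
using System;

class ValueRestorer : IDisposable
{
    private readonly Action restore;

    public ValueRestorer(Action set, Action restore)
    {
        this.restore = restore;
        set();
    }

    // Runs when the using block exits, even if an exception was thrown.
    public void Dispose()
    {
        restore();
    }
}

class Program
{
    static bool busy;

    static void Main()
    {
        using (new ValueRestorer(() => busy = true, () => busy = false))
        {
            Console.WriteLine("inside: " + busy);   // prints "inside: True"
        }
        Console.WriteLine("after: " + busy);        // prints "after: False"
    }
}
```

The using block gives you the try/finally semantics for free: the restore action runs no matter how the block is exited.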
    In your case, you fall into the first category for implementing
IDisposable. The using statement will be important if you actually have an
unmanaged resource you are holding on to. If you do not, and it is just a
regular class, then the using statement will not help (it won't even compile
unless the class implements IDisposable), and implementing IDisposable will
not make the memory be reclaimed any sooner.
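For what it's worth, the C# compiler treats the using statement as shorthand for a try/finally that calls Dispose, so the only behavioral difference between your two versions is that the second one also calls Dispose. Roughly, assuming MyClass implements IDisposable:

```csharp
using System;

class MyClass : IDisposable
{
    public void CallSomeOtherFunc() { }
    public void Dispose() { /* release resources here */ }
}

class Program
{
    public static void SomeFunc()
    {
        // What "using (MyClass c = new MyClass()) { ... }" compiles down to:
        MyClass c = new MyClass();
        try
        {
            c.CallSomeOtherFunc();
        }
        finally
        {
            if (c != null)
            {
                ((IDisposable)c).Dispose();
            }
        }
    }

    static void Main()
    {
        SomeFunc();
    }
}
```

Neither version changes when the memory itself is reclaimed; that is still up to the GC.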
Hope this helps.