A finalizer is called by the CLR once the object is unreachable and the GC
runs a collection cycle; a Dispose method is just another method that must be
invoked by user code. In other words, a finalizer is guaranteed (under normal
conditions) to get called, though you don't know when; Dispose is never
guaranteed to get called, but it is entirely under programmatic control, so
it can be deterministic. A number of best practices have evolved around that
distinction.
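To make the determinism point concrete, here is a minimal sketch (the file
name is made up; FileStream is just a convenient BCL type that implements
IDisposable):

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            // Dispose runs deterministically when the using block exits,
            // even if an exception is thrown inside it.
            using (var stream = new FileStream("data.txt", FileMode.OpenOrCreate))
            {
                // ... work with the stream ...
            } // stream.Dispose() executes here, at a point the code controls

            // A finalizer, by contrast, runs only when the GC eventually
            // collects the object, at some unspecified time on the
            // finalizer thread.
        }
    }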
If a class implements a finalizer it should also implement Dispose. I usually
apply the converse as well: if it implements Dispose I also implement a
finalizer, unless there is a good reason not to; for example, if thousands of
these objects are created they would swamp the finalization queue.
If a class implements the IDisposable interface and a user of that class does
not call its Dispose method, I treat that as a programming error. The Dispose
method should suppress finalization of the object (via GC.SuppressFinalize).
If the finalizer does get called anyway, I output a trace message indicating
that the finalizer ran and start a bug hunt to determine why Dispose was not
called.
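A sketch of that convention (the class name and message text are my own;
Trace.WriteLine and GC.SuppressFinalize are the real APIs):

    using System;
    using System.Diagnostics;

    public class TracedResource : IDisposable
    {
        public void Dispose()
        {
            // ... release resources here ...

            // Cleanup is done; tell the GC not to run the finalizer.
            GC.SuppressFinalize(this);
        }

        ~TracedResource()
        {
            // Reaching this point means Dispose was never called; leave a
            // trail for the bug hunt, then clean up as a last resort.
            Trace.WriteLine("TracedResource finalized; Dispose was not called");
            // ... release resources here ...
        }
    }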
In terms of requirements, if a class encapsulates an unmanaged resource, it
should implement a finalizer to ensure that the resource is eventually
cleaned up even if Dispose is never called. It should also implement Dispose
so that cleanup can happen deterministically, at a point the application
controls, rather than whenever the GC happens to run (effectively at a random
time).
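One common shape for such a class is the Dispose(bool) pattern; here is a
sketch in which an HGlobal allocation stands in for a real native handle
(the class name is invented):

    using System;
    using System.Runtime.InteropServices;

    public class NativeBuffer : IDisposable
    {
        private IntPtr _buffer;

        public NativeBuffer(int size)
        {
            _buffer = Marshal.AllocHGlobal(size);
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this); // cleanup done; skip finalization
        }

        protected virtual void Dispose(bool disposing)
        {
            // Unmanaged cleanup is safe from both the Dispose path
            // (disposing == true) and the finalizer (disposing == false).
            if (_buffer != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(_buffer);
                _buffer = IntPtr.Zero;
            }
        }

        // Safety net: runs only if Dispose was never called.
        ~NativeBuffer()
        {
            Dispose(false);
        }
    }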
If a class owns other managed objects that implement IDisposable, it should
also implement Dispose so it can cascade the call to the objects it owns.
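For instance (the class is hypothetical; StreamWriter and FileStream are
real BCL types):

    using System;
    using System.IO;

    // Owns managed disposables but no unmanaged resources, so it needs a
    // Dispose to cascade the call but no finalizer of its own.
    public class LogSession : IDisposable
    {
        private readonly FileStream _file;
        private readonly StreamWriter _writer;

        public LogSession(string path)
        {
            _file = new FileStream(path, FileMode.Append);
            _writer = new StreamWriter(_file);
        }

        public void Dispose()
        {
            // Disposing the writer flushes it and also disposes the
            // underlying stream; disposing the stream again is harmless
            // because Dispose implementations are expected to be idempotent.
            _writer.Dispose();
            _file.Dispose();
        }
    }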
These are not universal, and some are controversial, but I find this to be a
reasonable set of "best practices".