It helps because it makes clear that even though you've selected a
specific implementation for the variable (i.e. "TextBox"), all that the
subsequent code cares about is that it's a base type (i.e. "Control").
Just because the whole body of a method "is one big implementation
detail", that doesn't mean that the concepts of abstraction and
simplification aren't useful within it.
Using the code I posted, an example of why this might be useful is if one
uses TextBoxBase, or even Control, as the variable type, but creates a
TextBox. Later, if needed, someone can come along and change from TextBox
to RichTextBox, without having to worry that there's a use of the variable
somewhere that depended on the implementation being TextBox.
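For concreteness, a rough sketch of the kind of declaration being described (WinForms types, purely illustrative):

    // using System.Windows.Forms;
    Control input = new TextBox();   // variable typed as a base class
    input.Text = "hello";
    input.Enabled = false;
    // Later, only the construction needs to change:
    // Control input = new RichTextBox();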
A similar example would be in dealing with Streams or TextReaders (to name
a couple of i/o-related classes that have multiple possible implementors
one might choose from, and which could change over the course of some
code's lifetime).
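The same idea with the I/O classes just mentioned, again only a sketch:

    // using System.IO;
    TextReader reader = new StreamReader("input.txt");
    string line = reader.ReadLine();
    // Could later become, say:
    // TextReader reader = new StringReader(cachedText);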
That's precisely my point. I do argue that, with methods small enough
to be properly readable, there is no such advantage in practice. For
your first example, if there were a use of some member specific to
TextBox, then we can safely assume it was there either because it was
needed, or because it was too convenient to pass up.
To further illustrate the point, I'll consider another commonly
mentioned example - IList<T> instead of List<T>. So what benefits
would I get from using the former instead of the latter _for locals_?
Supposedly that, at some later stage, I can easily plug in a different
implementation. But how often have you actually needed that? On the
other hand, consider how much shorter the code can be when using
List<T> convenience methods over the generalized IList<T>. It could be
said that this is an atypical example, because IList<T> should really offer most if
not all operations available on List<T> in some generic way; but, on
the other hand, if the code only uses operations that are available
through the interface, then it doesn't matter if it uses type
inference or not - you can still switch types. If you can't switch
types because the code uses some member of a derived type, then the
types just aren't switchable, inferred or not; and the member is
likely used for a reason.
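To make the trade-off concrete, here is a rough illustration of what I
mean by List<T> conveniences (the particular members are just examples):

    // using System.Collections.Generic;
    List<int> nums = new List<int> { 5, 1, 4 };
    nums.Sort();                        // not on IList<int>
    nums.RemoveAll(n => n > 4);         // not on IList<int>
    int first = nums.Find(n => n > 1);  // not on IList<int>

    IList<int> general = new List<int> { 5, 1, 4 };
    // general.Sort();  // does not compile - the interface doesn't have it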
In addition to that scenario, there's also the fact that writing the code
this way can not only help in the event that the code using the variable
winds up being refactored into a method used in multiple places, it can
actually _encourage_ doing so.
I'm not sure I follow here. How does using or not using type inference
impact the ability to refactor code? Do you mean that with type
inference it may be harder to spot fragments of code that could be
reused?
Finally, there's the general philosophical belief (which you may or may
not share) that one should simply stick to the most-constrained usage of
an object that fits with what's actually needed. Following that
philosophy causes the code to be more general, more reusable, and more
maintainable. Just because one is working within a method body instead of
some other context doesn't change the usefulness of that philosophy.
I think I've already covered that one above. I consider the use of
supertypes over concrete types in non-exposed (i.e. private)
declarations a form of premature pessimization - you're paying for
something that you're very unlikely to benefit from in the future. If
you ever need to change the type, then go ahead and change it, and fix
the code broken by the change - when the dependencies are localized
(which is particularly true for locals), the cost of such a fix is far
less than the cost of overgeneralized coding in the first place.
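In code terms, the "fix it when it actually changes" approach amounts to
something like this (a trivial, made-up example):

    var names = new List<string>();
    names.Add("b");
    names.Add("a");
    names.Sort();
    // If this local ever needs to become, say, a HashSet<string>, the
    // compiler points straight at the one line (Sort) that relied on
    // List<string>, and the fix stays inside this method.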
I can't say that nested generics have ever made my eyes bleed. Seems like
a non-issue to me.
I think we'll just have to agree to disagree, then. But it would seem
that a lot of people find type names 40+ characters long not particularly
readable - there's a reason why "auto", the C++0x equivalent to "var",
is cheered by the majority of C++ developers, who are tired of having
to spell out lengthy template and iterator type names in full.
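For example, in C# terms (an arbitrary but not unrealistic nesting):

    // Over 50 characters before the variable name even appears:
    Dictionary<string, List<KeyValuePair<int, string>>> byName =
        new Dictionary<string, List<KeyValuePair<int, string>>>();

    // versus the inferred equivalent:
    var byName2 = new Dictionary<string, List<KeyValuePair<int, string>>>();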
But in any case, I don't think I used the word "readability" anywhere in
my comments. I'm talking about understanding the code, not how easy it is
to write or read.
You believe that making the code harder or easier to read does not
impact the ease of understanding it?