[Top-posting undone for clarity.]
- said:
Hi Jon and Larry (I reply here once to both of you).
First of all, thanks for your feedback, and I have to say you are completely right; I was wrong.
Your willingness to go there encourages me to explain
why it works that way, below. I am going to expand
upon Jon's answer in an attempt to give a glimpse into
the reason type deduction should flow from the leaves
of a parse tree toward its root.
The compiler behaves as expected by the language specification.
Conclusion:
The ? : operator cannot be used in a polymorphic way. For example, we cannot write:
Fruit myFruit = flag ? new Lime() : new Orange();
But you can use a typecast to express your intent:
Fruit myFruit = flag ? (new Lime() as Fruit) : (new Orange() as Fruit);
Not as convenient, but as we will see, necessary.
The question is: what was the architects' reason for disallowing this?
The language spec expects Orange to be type-compatible with
(implicitly convertible to) Lime. This is a logical mistake, and a very
poor interpretation of polymorphism. Both Lime and Orange must be
compatible with the left side, but not with each other, because the
programmer never wanted to convert them to each other. Then why
does the language spec want to?
In logical terms:
<lvalue> = <exp1> ? <exp2> : <exp3>
<exp2> compatible with <lvalue> .And. <exp3> compatible with <lvalue> does not .Imply. that <exp3> is compatible with <exp2>.
First, I will agree that for the simple example you show,
it seems reasonable that the type of <lvalue> would
participate in the semantic analysis of the conditional
expression. But it is the head of a trail to madness.
The situation now, as the C# language (and every other
strongly typed programming language that I know of) is
designed, is that the type of an expression can be deduced
from its immediate subexpressions. This situation is
amenable to the semantic analysis that is needed both
to produce executable code and to permit people to
understand what their constructs mean.
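To make the bottom-up rule concrete, here is a minimal sketch (the variable names are mine, not from the thread):

```csharp
class Demo
{
    static void Main()
    {
        int i = 2;
        double d = 3.5;
        // The type of `i + d` is deduced from its operands alone:
        // int + double yields double. The declared type of x on the
        // left plays no part in typing the right-hand expression.
        double x = i + d;
        System.Console.WriteLine(x); // 5.5
    }
}
```

Every subexpression can be typed by looking only at its children, which is exactly what makes one-pass semantic analysis possible.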
To accommodate your expectation/criticism, it would
be necessary to have an additional or replacement
rule, where the type of subexpressions may be deduced
from the use to which its result will be put.
Making the reverse deduction rule an addition to the
current rule leads to some nasty conundrums. Where
in a parse tree should from-root-toward-leaf deduction
stop and from-leaf-toward-root deduction begin? For
your simple expression, where there is only a single
assignment at the root of the parse tree, the answer is
clear: Type deduction should flow up (toward the root)
from the LHS then down into the conditional expression
and hence down into the conditional's leaves. But what is
the rule that will govern more complex parse trees? Will
you make a special case for the assignment operator?
Consider a conditional expression used as an argument
to a function overloaded on that same parameter position.
Now there is more than one type that should flow down
into the conditional expression.
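For instance (a hypothetical overload set of my own invention, assuming Lime and Orange both derive from Fruit):

```csharp
abstract class Fruit { }
class Lime : Fruit { }
class Orange : Fruit { }

class Demo
{
    static void Eat(Fruit f)  { System.Console.WriteLine("Fruit");  }
    static void Eat(object o) { System.Console.WriteLine("object"); }

    static void Main()
    {
        bool flag = true;
        // Under the proposed rule, which target type should flow down
        // into the conditional -- Fruit or object? Both overloads are
        // applicable, so there is no single type to push toward the leaves.
        // (Under the C# rules being discussed, the line below fails to
        // compile, because Lime and Orange have no implicit conversion
        // to each other.)
        // Eat(flag ? new Lime() : new Orange());

        // With an explicit cast the intent is unambiguous:
        Eat(flag ? (Fruit)new Lime() : (Fruit)new Orange()); // prints "Fruit"
    }
}
```

With the cast, the conditional has the type Fruit, and overload resolution can then pick Eat(Fruit) as the more specific candidate, bottom-up as usual.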
You could eliminate the problem with deciding where
type deduction changes direction by replacing the
current rule entirely. Even more madness lies down
that trail. If the rule is that type deduction always flows
from the root toward the leaves, it is possible to make
compilers follow it provided they apply a backtracking
algorithm to the task and more rules to disambiguate
the (possible) multiple choices implied by the need to
backtrack. This might lead to only some of the compiler
implementers and language designers being carried away
in straitjackets, but the toll among language users will
be heavier. This is because they must (or should, if they
are conscientious) understand what their code means. By
the time you have code that is difficult for a compiler designer
to handle, it is practically impossible for plain humans to
understand without the help of a computer.
Again, thanks for your corrections.
You're welcome.