stork
Very deep problems.
Consider this example:
y = F(F(x))
where y is known to be of type A, in a system with two types, A and B.
F is defined with the following overloads:
A F( A )
A F( B )
B F( A )
Now let x be of type A. Then
y = F(F(x))
can resolve to either (writing F<T> for the overload that returns T)
y = F( F<B>( A ) )
or
y = F( F<A>( A ) )
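For what it's worth, this exact trap can be reproduced in Haskell, where type classes allow overloading on the return type. The following is only a minimal sketch of the ambiguity described above, not a proposal for any real design; the names A, B, F, f, and x are just stand-ins mirroring the example.

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

data A = A
data B = B

-- f is overloaded on both its argument type and its return type,
-- mirroring the three signatures listed above.
class F arg ret where
  f :: arg -> ret

instance F A A where f _ = A   -- A F( A )
instance F B A where f _ = A   -- A F( B )
instance F A B where f _ = B   -- B F( A )

x :: A
x = A

-- Rejected: even knowing y is an A, the inner call could produce
-- either an A or a B, so the nested call is ambiguous (the same two
-- resolutions listed above).
-- y :: A
-- y = f (f x)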
This is actually avoidable. The language would enforce a rule that says
a function's return value may not be used directly as an argument to
another function. You would always have to assign through temporaries,
and that would be safe.
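Continuing that same sketch, this is roughly what the forced-temporary rule buys you: once the intermediate result gets its own binding with an explicit type, the overload at each step is pinned down and the ambiguity goes away. The name tmp and the choice of B here are mine, picked just to show one of the two resolutions.

-- Reusing A, B, the class F, and x from the sketch above.
y :: A
y = let tmp :: B
        tmp = f x      -- explicitly the B F( A ) overload
    in  f tmp          -- now unambiguously the A F( B ) overload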
The only problem that comes into play, though, would be overloads of
things like the . or the -> operator, which are effectively functions
themselves. If I have an overloaded -> operator on X
and an overloaded -> operator on Y,
then I would not be able to write X->Y->Z without introducing
ambiguity. You would instead have to write something with a lot of
temporaries:
A Y = X->Y
B Z = Y->Z
which, honestly, is a lot of typing, even though it might make for
better practice as it is easier to debug.
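And a rough, self-contained sketch of that operator case, under the assumption that -> behaves like an ordinary overloaded function: step stands in for ->, member names are dropped, and the extra type C exists only to give the chain somewhere to end. None of this is meant as real syntax for the language.

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

data X = X
data A = A
data B = B
data C = C

class Step obj result where
  step :: obj -> result

instance Step X A where step _ = A   -- X's -> can yield an A ...
instance Step X B where step _ = B   -- ... or a B
instance Step A C where step _ = C   -- A's -> yields a C
instance Step B C where step _ = C   -- B's -> yields a C

-- Rejected: the intermediate value of step (step X) could be an A or a B.
-- chained :: C
-- chained = step (step X)

-- Accepted: the temporary pins the intermediate type, much like
-- A Y = X->Y followed by B Z = Y->Z above.
chained :: C
chained = let y = step X :: A
          in  step y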