Strategic Functional Migration and Multiple Inheritance

Mark said:
Sure, but that wasn't mentioned as the problem under discussion.

Your scenario could still be handled with virtual inheritance, which would
leave C1 with just one A, B, A1, B1, and C base class object. The problem,
of course, is if the A1 derivation isn't under your control and it wasn't
made virtual.
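
(To make Mark's point concrete: the original A/B/A1/B1/C1 hierarchy isn't reproduced in this excerpt, so here is a minimal C++ sketch of the classic diamond instead, with hypothetical names, showing how virtual inheritance leaves the joining class with a single shared base subobject.)

#include <iostream>

// Classic diamond. Without 'virtual' below, Join would contain two
// separate Base subobjects and j.state would be ambiguous.
struct Base {
    int state = 0;
};

struct Left : virtual Base {};   // virtual: share the single Base
struct Right : virtual Base {};  // virtual: share the single Base

struct Join : Left, Right {};

int main() {
    Join j;
    j.state = 42;                // unambiguous - exactly one Base subobject
    std::cout << j.state << '\n';
}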

That's true, but in the real world you can start designing structures which get
so complex that no one can understand what is going on and maintenance becomes a
nightmare.

Simple programs can solve complex problems and I think that MI does not address
this.
 
Ian Semmel said:
That's true, but in the real world you can start designing structures
which get so complex that no one can understand what is going on and
maintenance becomes a nightmare.

You -can-, but that doesn't mean you must. MI can be misused more easily
than SI, it's true. But good programmers don't misuse language constructs.
As for bad programmers, I just try to keep away from them. :)
Simple programs can solve complex problems and I think that MI does not
address this.

Would you call C# a simple language? I wouldn't. Yet we both prefer it to
assembly (a very simple language indeed). Simplicity is not a virtue, in and
of itself.

That said, the one thing I don't like about composition over inheritance is
that it requires duplication. Duplication is a very simple way to solve
problems, but I don't care -- I still don't like it.
 
PPS. Perhaps Radek has a comment regarding the semantic context of multiple
candidate selection?

He's thinking about it. In his spare time, which unfortunately went missing
some years ago and he has been unable to find it. And yes Jon, I have twins
as well. And I am not as eloquent as Shawnk.

In the meantime, I'll just try to illustrate my immediate concerns (hoping
Anders is listening???)

First, the problem generally lies in the analysis/design space - not
implementation. That's not to say that an MI-inspired design does not
require an MI language, but that the problem arises because the real-world
patterns I see and model are sometimes orthogonal, and when it comes time to
implement, I am stuck with some workaround or other due to lack of MI. The
other problem is "artificial": when creating custom WinForms controls, I am
stuck inheriting from Control, and I cannot also inherit from a standard
architecture.

To illustrate, I have a vast framework of business widgets, components and
objects that comprise business systems, and are all abstract - they deploy
purely as WebServices. So I have an AJAX client and a WinForms client to
consume these services. I also have (from the Gupta days) a base abstract
client-side class that "speaks" to my WebServices, with a few
abstract/virtual methods to override in concrete classes such as TextBox,
CheckBox, etc; it's NOT an interface, it's a class with a fair amount of code
in it. Having had to turn this elegant design into some repeated code in 20
different visual classes was painful. MI would solve it.
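
(A hedged C++ sketch of the shape Radek describes - a framework control class and a service-speaking base class combined via MI. All class and method names here are hypothetical, not taken from his actual framework.)

#include <string>
#include <iostream>

// Hypothetical framework base class (stands in for a WinForms Control).
class Control {
public:
    virtual ~Control() = default;
    virtual void Paint() {}
};

// Hypothetical service-speaking base: concrete plumbing plus a few
// virtual hooks that each visual class overrides.
class ServiceClient {
public:
    virtual ~ServiceClient() = default;
    void Refresh() { ApplyValue(CallWebService()); }        // shared plumbing
protected:
    virtual void ApplyValue(const std::string& value) = 0;  // per-widget hook
private:
    std::string CallWebService() { return "value-from-service"; }
};

// With MI, each visual class inherits both; without MI, the ServiceClient
// body must be re-implemented (or forwarded) in every one of the 20 widgets.
class ServiceTextBox : public Control, public ServiceClient {
protected:
    void ApplyValue(const std::string& value) override {
        std::cout << "TextBox shows: " << value << '\n';
    }
};

int main() {
    ServiceTextBox box;
    box.Refresh();
}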

But still the main issue is analysis - pattern recognition - modelling and
Business Object design. It's been 6 years since I've had the luxury of MI,
and I miss it almost daily. I feel strongly that it's a case of not knowing
what you're missing (like eyes or ears for someone blind/deaf since birth).

Cheers,

Radek
 
Jon,

As always it is a true pleasure to read, consider and answer the conceptual
points raised in your comments.

I thought it best to consolidate my response to your excellent points in the
hope of moving our mutual controversial opinions towards a metrically proven
theory of functional normalization.

The following will hopefully be of some small benefit in helping a few OOP
programmers to better understand the problem of functional normalization within
the context of current C# expression mechanisms.

I have partitioned the posts in this response so each is focused on a
fundamental aspect of our discussion.

Shawnk

PS. Thanks again for your time and contributions in helping to develop a
(hopefully) better understanding of functional normalization in OOP and C#.
 
Radek,

I can't thank you enough for your excellent point regarding functional
normalization.

I always use the term 'functionally orthogonal' but your 'normalization'
comments are a much better articulation.

With that in mind could you review my comments in response to some of Jon's
points regarding the potential for metrics to measure MI and functional
normalization?

I look forward to hearing your thoughts on this matter.

Shawnk
 
[1] Communication as an accelerant to scientific inquiry

Hopefully a PhD/Master's student will read this while
they are still considering the subject of their dissertation.

The concepts in this post are gravitating towards a potential
software tool based on a C# parser that could analyze interfaces
(in C#) and determine the amount of 'forced code replication'.

Though still somewhat nebulous at this point, the Shawnk/Cerny
position of 'MI as an inherent requirement for a functionally
normalized code base in OOP' is a concept ripe for exploration
(a shameless plug :-).

Your mission???

MI proponents could use a champion student like you (yes, you!!) who can
transform what others call merely an opinion into a validated, metrically
proven fact!!!

Clearly MI proponents have a strong intuitive certainty that MI is not just a
language feature but a fundamental logical expression that is inherently vital
to a concept herein called 'functional normalization' (within the context of
this thread - a term coined herein by the Shawnk/Radek-Cerny comments).

Like a 'message in a bottle' (reminds me of the old Police song) this humble
post will hopefully land on the right shore - that being someone who has the
darn time that Anders, Jon, I and many others do not have because of prior
commitments.

I believe a simple (as dissertation papers/projects go) thesis, based on a
simple character-count/token-count (information theory) analysis of the
language expression (and ignoring the output of the compiler), can detect
'forced repeated fragments of source code' in any SI-Interface based language.

Proof would be an automated refactoring of the Interface-based design into a
pseudo C# (roll your own specification/semantics) that uses MI. And yes (IMHO)
you can ignore the compiler output BECAUSE (and this is paramount) the
target/focus of the analysis is the logic contained in the logical
expression of the medium (that is - the source code itself).

State space collision (see thread herein) can be regulated to a configurable
modality of the analysis. The intent being that the resulting (hypothetical)
output would have logical consistency (based on a preferred implementation
output of the compiler).

By posting the tool on the Internet and allowing (or not) results to
be streamed to a centralized web server, a portion of the world's C# code
base could be analyzed for the 'forced code replication' phenomenon.

An important part of the analysis/thesis is the intellectual
breakdown of the problem space using specific conceptual terminology
such as:

[A] Multiple Inheritance
[B] Virtual Inheritance
[C] Implementation Inheritance
[D] Functional Normalization
[E] Inherited Surface Exposure (expose parent APIs to child surface)
[F] Inheritance Threads/Lines (mechanism of Functional Normalization)
[G] Fully normalized code base (one with no diamond structures)
[H] Language Semantic vs Output Implementation separation
[I] Functional Unit (state/methods to produce some computing effect)
[J] Functionally Orthogonal threads of inheritance
[-] ... other problem definition and phenomena dissolution terms

Note that prior to code and even architecture the conceptual
terminology must be defined and must transcend any given
language specification (such as the C# spec). The foundation
language of the logic being human thought (English, etc).
Thus the 'roll your own' C# spec recommendation.

Metric Focus : There exists (the MI proponent position) a 'Forced Code
Replication' due to the SI-Interface expression limit - an artificially
induced 'logical expression envelope limit' that prevents the 'inherent
functional normalization' effect of MI from carrying out its process of
normalizing functional logic in OOP expressions.

[Summary of this post and overview of this thread]

Initially this thread was an informal litmus test of SFM (Strategic Functional
Migration) capability/understanding in the C# community. An analysis of the
resulting litmus (so to speak) exhibits traces of a fundamental proposition
(IMHO), which is:

Database normalization and data replication metrics have a
fundamental similarity to functional normalization metrics in OOP
(Object Oriented Programming) code bases.

The above thoughts are presented in the hope that our community
(C#, the .NET programming community, OOP programmers in general)
could develop a formalization of functional normalization
that would provide metrics of 'forced code replication' due
to perceived inadequacies in a purely SI-Interface approach (should they
exist).

As a counterpoint against MI proponents, this theory/metric-set could
(hypothetically) prove that MI is not as inherent as its proponents claim.

I would like to thank the reader for taking these thoughts
into consideration as well as taking the time to go through
what is clearly a very laborious process (in reading this thread).

Shawnk (a ubiquitous and shameless MI proponent)

PS. I do this to better understand C#/C++ architecture tradeoffs.

PPS. I can later use this in executive/engineering dissertations.
 
Mark Wilden said:
You -can-, but that doesn't mean you must. MI can be misused more easily
than SI, it's true. But good programmers don't misuse language constructs.
As for bad programmers, I just try to keep away from them. :)

I guess the question is how much benefit there is in the real world
from MI compared with how much abuse there is. While it's reasonable to
keep away from "bad" programmers, I'm more worried about "average"
programmers (which all of us are a lot of the time, I suspect).
Would you call C# a simple language? I wouldn't. Yet we both prefer it to
assembly (a very simple language indeed). Simplicity is not a virtue, in and
of itself.

I'd say that C# 1.1 *is* a simple language. C# 2.0 is significantly
more complicated.
That said, the one thing I don't like about composition over inheritance is
that it requires duplication. Duplication is a very simple way to solve
problems, but I don't care -- I still don't like it.

Has anyone actually suggested that duplication is *good*? It feels like
it's a straw man here. If you start with the assumption that the lack
of MI *inevitably* leads to code duplication, it's a reasonable
argument - but I don't accept that assumption.
 
[2] Towards a metrically proven theory of functional normalization

[Shawnk] - Regarding MI
Some question as to (1) requirement vs. (2) best solution could be made,
and we should clarify this (1) vs (2) issue before continuing.
[Jon]
Even if MI were the best solution, it is not required while it is not
the *only* solution.
[Shawnk]
Radek's excellent articulation of 'functional normalization' is
perhaps the most succinct term to embody the architectural intention
of MI. Which is, a minimal expression to aggregate class
functionality while still keeping the pre-existing (to the
aggregation) intrinsic functional composition (that classes provide)
intact.
[Jon]
Except that you get not just the aggregate but potentially the
interference. I'm already not a fan of overuse of inheritance - see

for more on this. I *generally* prefer composition to inheritance, even
without considering MI. MI just makes the same problem more complex,
IMO.

Loved reading the article, and I generally agree with your
recommendations on Virtual Methods.

It would be very helpful to articulate a simple concise definition
of exactly what you mean by 'interference' above.

I printed the article to PDF (and marked it up) but could not
find the use of the word 'interference' in the blog article
you noted above.

Also if you have any metrics (proposed or otherwise) to elucidate the
comparisons between architectural alternatives that would be helpful.

Shawnk

PS. Clearly overuse is not intended/desired. Merely the availability
of MI in C#/Java would allow functional normalization in that (C#/Java)
expression medium where such expressions are appropriate/desired/intended.
 
[3] Metric of complexity - Character Count clarification
I have to say, I think that's a very silly way of measuring complexity.
Would C# be a "simpler" language by making all the keywords single
letters? I don't believe so. It *certainly* wouldn't be a more readable
language.
The complexity I'm talking about is how hard it is to think about the
object model, and I believe that MI makes the model more potentially
complicated.

:-)

'Character count' means to add up all the characters in a set of expressions.

Case in point.

Problem : Implement a functional unit Fu logically derived from Fx and Fy. The
criterion being that Fu MUST EXPOSE the full functional capabilities of Fx and
Fy.

Context : Roll your own C# syntax for pseudo C# that supports MI just like C++

Solution S1 : MI approach

Solution S2 : Fully Embedded approach

Count the characters used in the full definition of each solution class S1
and S2.

The solution with the least character count is (by definition) a more
powerful, effective, efficient and eloquent design. This is because Logical
Human Thought (LHT) expressions were expressed more accurately in an OOP
format with the least 'expression burden' or 'character count' (see note at
end regarding token count).

Informal proof : Provide a syntactic example of the key functional
aggregation expressions where:

Fx and Fy each have a single method
Fx has void Do_x(int x)
Fy has void Do_y(int y)

No direct functional relationship exists between Fx, Fy.
Thus Do_x() and Do_y() are completely functionally orthogonal
and act and behave independently.

A compositional functional relationship may exist within the Fu
behavioral life cycle (such as a data access layer persistence phase
to make functional effects persist in time). This is irrelevant
to the structural aggregation issue.

[S1]

public class Fu : Fx, Fy

[S2]

public class Fu {
    Fx fx;
    Fy fy;

    void Do_x(int x)
    {
        fx.Do_x(x);
    }

    void Do_y(int y)
    {
        fy.Do_y(y);
    }
}

The character counts (ignoring whitespace) are:

S1 - 19
S2 - 79

Which is roughly a fourfold difference in 'expression burden', and the gap
widens with every additional method that must be forwarded.
I have to say, I think that's a very silly way of measuring complexity.

To nitpick, logical token count is better than a raw character
count, since use of LLP (Low Level Patterns) would express 'Fx fx;'
as 'Fx l_Fx_ins;' or 'Fx l_fx;'. The logical point of a valid and meaningful
metric (character count / token count) is the point I was making.

If you can't define a concise terminology (qualification) with
a matching metric set (quantification) into a coherent substrate
for logical thought, then the resulting syntactic expressions,
used as examples to prove your point, are 'pointless'.

You have forced the reader/listener/engineer to do something they
would rather not do.

By only presenting a syntactical argument (example) you leave
the burden of constructing the coherent substrate (the programming
paradigm forming the architecture of the syntactic expression) to
the reader/listener.

Engineering executives have little time and cannot vet every
syntactic argument that comes their way. They do have time,
however, to go to Wikipedia (or any concise, fast, reputable reference)
and conceptually vet the logical points (Implementation Inheritance,
Virtual Inheritance, etc).

I must apologize, as I thought my reference to 'information theory' would have
made the 'character count' metric clear. Hopefully the brief dissertation
above clarifies what I mean in terms of metrics, their utility and the general
coherence of terminology as a logical substrate for syntactic expressions.

I also note, in passing, that those who use good terminology/metrics are
trying to move (however effectively/ineffectively) away from opinion towards
scientific fact. As you and I both agree, our individual opinions are
immaterial relative to the actual science of logical expression.

In retrospect I realize that the inherent logic in good terminology is
obvious to some while (at the same time) it's a bitch for others to 'connect
the dots' :-)

Ergo the need for syntactic examples to show/prove the logical substrate
of the intended functionality.

Thanks, as always, for your comments.

Shawnk

PS. If you have any alternate metrics please let me know :-)
 
Mark,

I too am an MI proponent and Jon's concern about misuse is a complete
non-issue to me (with all due respect).

I would be very interested in your thoughts regarding the potential for
metrics to detect/measure forced code replication due to limitations
(hypothetical until formally proven) of the SI-Interface approach.

The issue of functional normalization (described in this thread) involves
'code replication' issues that are almost identical (from an information
theory point of view) to data replication in non-normalized databases.

Any thoughts regarding functional normalization, MI, II (Implementation
Inheritance), VI (Virtual Inheritance), metrics and terminology would be very
appreciated.

I look forward to your comments and thoughts.

Shawnk
 
Jon Skeet said:
I guess the question is how much benefit there is in the real world
from MI compared with how much abuse there is. While it's reasonable to
keep away from "bad" programmers, I'm more worried about "average"
programmers (which all of us are a lot of the time, I suspect).

I would say that the "average" programmer depends on your working
environment. In my experience, the "average" programmers I've worked with
have been what I'd call "good." I've spent a lot of time with these kind of
programmers in C++, and haven't seen the nightmares that others describe --
that's all I can say.

I have a feeling that MI is one of those things that everyone thinks
that -other- people mess up. Those of us who are comfortable with it use it
appropriately. Those of us who aren't are smart enough not to try to use it
as a hammer on every problem that looks like a nail.
I'd say that C# 1.1 *is* a simple language. C# 2.0 is significantly
more complicated.

C# is simple? Compared to Pascal, assembly language, or C? I guess it
depends on your frame of reference.
Has anyone actually suggested that duplication is *good*? It feels like
it's a straw man here. If you start with the assumption that the lack
of MI *inevitably* leads to code duplication, it's a reasonable
argument - but I don't accept that assumption.

Yes, I do believe that lack of MI leads to duplication (at least in many
cases). If you want to inherit two interfaces, and also reuse implementation
of those interfaces, you have to delegate to at least one contained object.
Delegation implies one-line methods that forward calls to the contained
object. The delegating methods duplicate the interface (which is
unavoidable), but they have to do it in two directions, incoming and
outgoing.
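
(A minimal C++ sketch of the forwarding Mark describes, with hypothetical classes: to reuse two implementations without MI, the class contains them and repeats their interfaces as one-line delegating methods.)

#include <iostream>

// Two reusable implementations we would like to "mix in".
class Logger {
public:
    void Log(const char* msg) { std::cout << "log: " << msg << '\n'; }
};

class Serializer {
public:
    void Save() { std::cout << "saved\n"; }
};

// Without MI, Widget reuses both only by containing them and writing
// one-line forwarding methods - the duplication Mark is describing.
class Widget {
public:
    void Log(const char* msg) { logger_.Log(msg); }  // forwards the call
    void Save() { saver_.Save(); }                   // forwards the call
private:
    Logger logger_;
    Serializer saver_;
};

int main() {
    Widget w;
    w.Log("hello");
    w.Save();
}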

Again, I've said in a previous post that it's not the world's biggest deal,
and that many, many VB, Java and C# programmers write perfectly good code
without MI. I just remember from my C++ experience that "mixing-in" behavior
(both interface and implementation), orthogonally, to a class was useful.
And the downsides that others report simply didn't occur in my (and many
others') programming.

It's a moot point in root-class languages like C# anyway.
 
Clarification:

The compositional functional relationship within the context of the
Fu behavioral lifecycle is when you call Fu.Do_x() and then call
Fu.Do_y() for the same instance.

Thus if Do_x() was a primary function and Do_y() is a persistence
function, the call sequence -

fu.Do_y(get);
fu.Do_x();
fu.Do_y(save);

is an example of what is meant by 'compositional functional relationship'
within the Fu behavioral life cycle.

The logical separation (in this example of English semantics - not syntax) of
the primary function from the persistence function is key to the ability to
separate the English terminology (primary function / persistence function).

The underlying point is that any logically coherent substrate that forms a
paradigm for syntactic expression must, by definition, transcend the
syntactic limitations formed by any individual language specification.

A case in point:

To discuss functional normalization in the context of MI, VI, II, SI and
Interfaces transcends the specs for Java, C# and C++.

While the syntactic proofs and metrics are essential, they can only be
intellectually described with a preceding technical English lexicon (MI, VI,
II, SE, etc).

I note this in passing to help clarify the difference between compositional
sequencing (in time) of functions operating on a state space and structural
aggregation of functionality expressed in such expression mechanisms as MI.

Shawnk
 
I printed the article to PDF (and marked it up) but could not
find the use of the word 'interference' in the blog article
you noted above.

Sure: "unintended side-effects of having multiple inheritance, where
the two supposedly orthogonal concepts end up affecting each other in
unexpected ways".
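
(A small C++ sketch, with hypothetical names, of the kind of interference Jon is quoting: two independently designed bases that each expose a Reset(), which collide once they are combined under MI.)

#include <iostream>

// Two bases designed independently ("orthogonal" concepts), yet both
// happen to expose a member named Reset - they interfere when combined.
class Cache {
public:
    void Reset() { std::cout << "cache cleared\n"; }
};

class Connection {
public:
    void Reset() { std::cout << "connection re-opened\n"; }
};

class CachedConnection : public Cache, public Connection {};

int main() {
    CachedConnection c;
    // c.Reset();            // error: ambiguous - which Reset?
    c.Cache::Reset();        // the caller must disambiguate explicitly
    c.Connection::Reset();
}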
Also if you have any metrics (proposed or otherwise) to elucidate the
comparisons between architectural alternatives that would be helpful.

Nope, no metrics at all. Frankly, I don't find metrics nearly as useful
as it sounds like you do.
PS. Clearly overuse is not intended/desired. Merely the availability
of MI in C#/Java would allow functional normalization in that (C#/Java)
expression medium where such expressions are appropriate/desired/intended.

The availability of overloading the assignment operator in C# would
allow certain things too - but make the language much more potentially
confusing. I find that people tend to abuse powerful features if
they're available - witness the number of people using regular
expressions inappropriately - and that as such, a feature which adds
complexity to the language really has to add a large benefit too.
 
Shawnk said:
:-)

'Character count' means to add up all the characters in a set of expressions.

And that's exactly what I think is a bad idea.

The solution with the least character count is (by definition) a more
powerful, effective, efficient and eloquent design.

That may be by *your* definition, but I certainly wouldn't define it
that way.

1) Efficient code isn't always brief.
2) Powerful code isn't always brief.
3) Eloquent code isn't always brief - often code becomes longer but
easier to understand when a single long statement is broken into
several short ones, for instance. That may introduce extra variables
solely for the purpose of making the code readable.
To nitpick, logical token count is better than a raw character
count, since use of LLP (Low Level Patterns) would express 'Fx fx;'
as 'Fx l_Fx_ins;' or 'Fx l_fx;'. The logical point of a valid and meaningful
metric (character count / token count) is the point I was making.

Again, I don't think that token count is a good metric. For instance,
you could easily have a language which has a single token for "declare
first integer variable for the class" and another for "declare second
integer variable for the class", potentially reusing those tokens for
accessing the variables when in an appropriate context. By making
tokens mean different things in different contexts, you can end up
requiring fewer available tokens overall *and* fewer tokens for a
specific piece of code.

That doesn't make it a simpler language, or the code simpler - or more
eloquent.

To give another information-theory example: I believe that if we used
"3" as a number base, that would give the "best" integer base number
system in terms of compact information with a small number of tokens.
However, ask real *people* to use base 3 and they'll start screaming.
People aren't information theory - they're people.
If you can't define a concise terminology (qualification) with
a matching metric set (quantification) into a coherent substrate
for logical thought, then the resulting syntactic expressions,
used as examples to prove your point, are 'pointless'.

I'm not trying to argue this in metrics of information theory, because
I don't believe such metrics give a good idea of the readability of
code. I believe it's to a large extent dependent on the reader, and
that what may be the most readable code for one intended audience may
well not be the most readable code for another audience. For instance,
code to manipulate a string with a regular expression may be absolutely
great for people who use regular expressions day-in-day-out, but could
be much harder than a short series of simple string operations for
other people.
I must apologize, as I thought my reference to 'information theory'
would have made the 'character count' metric clear. Hopefully the
brief dissertation above clarifies what I mean in terms of metrics,
their utility and the general coherence of terminology as a logical
substrate for syntactic expressions.

We apparently still disagree about the usefulness of the metric,
however.
I also note, in passing, that those who use good terminology/metrics
are trying to move (however effectively/ineffectively) away from
opinion towards scientific fact. As you and I both agree our
individual opinions are immaterial relative to the actual science of
logical expression.

No - I believe opinions are very important, and far more important than
arbitrary metrics such as character or even token counts. One could
construct a language which from a purely theoretical point of view was
absolutely beautiful - but which was terrible to actually use. I'd far
rather have a language which mere mortals like myself can use and
understand what any single line of code is going to do, preferably
without having to look it up in a spec due to complicated rules.
 
Mark said:
C# is simple? Compared to Pascal, assembly language, or C? I guess it
depends on your frame of reference.

Personally, I've never really found C simple :)
Yes, I do believe that lack of MI leads to duplication (at least in
many cases). If you want to inherit two interfaces, and also reuse
implementation of those interfaces, you have to delegate to at least
one contained object. Delegation implies one-line methods that forward
calls to the contained object. The delegating methods duplicate the
interface (which is unavoidable), but they have to do it in two
directions, incoming and outgoing.

What you're calling duplication seems like a cleaner (de-coupled, separation
of concerns) approach to me. In most circumstances, delegation should be preferred
over inheritance (I believe the GoF said that). Composition through inheritance
sounds like job security to me :)

Jokes aside, even in an SI language like C#, shying away from inheritance
unless it makes sense from a specialization perspective leads to more extensible
and maintainable software.

Please correct me if I'm missing something, but is the only reason to implement
MI in a language the ability to get away from objects having to dispatch a
call (and I'm ignoring the uber benefits of having control over the dispatch)?

If that's really it, then I don't think I have anything left to add to this
argument.
It's a moot point in root-class languages like C# anyway.

Are you saying MI wouldn't make sense in C#? Well, I guess it's decided then :)
 
Shawnk,

I've read those posts, and I'll have to read them again to fully understand
them.

I'm beginning to feel a little disconnected here. My "formative" years were
spent in an MI environment, where I made the transition from OOP to OOAD,
with the guidance of several extremely bright people and a lot of hard work.
So today I am rather embarrassed by many of my earlier efforts. Today I
feel OOP is a bad word, because it focuses on Programming. I focus on
analysis and design, and of course use OO concepts and pattern recognition
during those activities. For me, "programming" is simply translating design
into code - a simple mechanical task (and I have built a design/metadata
execution engine to partially avoid that step).

Summarising:
The world is full of patterns, many of which are orthogonal;
Analysis and design of real world problems is the hard job;
Coding/programming the output of the Design phase is trivial/mechanical
(given a suitable framework);
MI is a fundamental requirement of a programming language to keep the above
true.

I spent a long time with MI: 1992 - 2001. Again, those were very formative
years and I learned all about pattern recognition and design. The advantage
we had was an MI engine to power our [MI] designs. Very few people were
using OO principles for most of the '90s. We embraced OO and accepted MI as a
natural, intrinsic element of OO principles. I feel strongly that it's a bit
like sight or hearing; if you've never had it, you don't know what you're
missing, but if you grew up with it and lose it, it is absolutely
devastating.

I'll find some time this weekend to review your other posts.

Cheers,

Radek
 
Saad Rehmani said:
Personally, I've never really found C simple :)

Do you think C# is simpler than C? Have you ever used assembly language?
Geez -- kids these days.
What you're calling duplication seems like a cleaner (de-coupled,
separation of concern) approach to me.

It can be both, of course. I admit to a strong prejudice against
duplication. Do you agree that what I call duplication is indeed
duplication?
In most circumstances, delegation should be preferred over inheritance (I
believe GoF said that).

Then I guess they were idiots for using delegation instead of inheritance in
their book. Since they're not idiots (I corresponded with Vlissides before
he died, and he was definitely not an idiot), I doubt they made such a
sweeping statement. Or perhaps they meant that while delegation (I think you
mean composition, btw) should be preferred, there are still occasions where
both kinds of inheritance are useful.
Jokes aside, even in an SI language like C#, shying away from inheritance
unless it makes sense from a specialization perspective leads to more
extensible and maintainable software.

Since you clearly don't have extensive MI experience, I wonder on what basis
you make this statement?
Please correct me if I'm missing something, but is the only reason to
implement MI in a language the ability to get away from objects having to
dispatch a call (and I'm ignoring the uber benefits of having control over
the dispatch)?

I don't know if it's the only benefit, but the ability to automatically
compose mix-ins with other functionality is indeed its major benefit.
If that's really it, then I don't think I have anything left to add to
this argument.

That's up to you, of course.
Are you saying MI wouldn't make sense in C#? Well, I guess it's decided
then :)

I'm not arguing in favor of MI in C#. What gave you a different impression?
 
BTW, saying "I'm ignoring the uber benefits of having control over the
dispatch" implies that that would not be possible if a language included
multiple inheritance.

Single inheritance means specializing behavior along only one dimension. All
others must use composition and delegation. Most of the time, classes do, in
fact, only need specialization along one dimension. When they don't,
however, the designer has to choose which dimension to model through one
language mechanism, and which to model through another.

Is a given class a Widget or an Observable? Why shouldn't the possibility
exist that it's both? And if it is truly both, how do you decide which class
to inherit from and which to delegate to? Number of methods? (Actually,
that's probably the best approach!)
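
(A hedged C++ sketch of the Widget/Observable choice Mark poses, with hypothetical names: under single inheritance one dimension gets the inheritance slot and the other must be delegated to a contained helper.)

#include <functional>
#include <iostream>
#include <vector>

// The one dimension the language lets us "spend" inheritance on.
class Widget {
public:
    virtual ~Widget() = default;
    virtual void Draw() { std::cout << "drawn\n"; }
};

// The second dimension must then be delegated to a contained helper.
class Observable {
public:
    void Subscribe(std::function<void()> fn) { observers_.push_back(std::move(fn)); }
    void Notify() { for (auto& fn : observers_) fn(); }
private:
    std::vector<std::function<void()>> observers_;
};

// SI choice: inherit Widget (one dimension), delegate to Observable (the other).
class Slider : public Widget {
public:
    void Subscribe(std::function<void()> fn) { observable_.Subscribe(std::move(fn)); }
    void SetValue(int v) { value_ = v; observable_.Notify(); }
private:
    Observable observable_;
    int value_ = 0;
};

int main() {
    Slider s;
    s.Subscribe([] { std::cout << "value changed\n"; });
    s.SetValue(5);
    s.Draw();
}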
 
Saad,

The following is an opinionated synopsis of 'why MI' as well as
a very brief history of what happened to MI in 'modern' (ahem!)
languages such as Java and C#.

[1] The Functional Structure of a computing Expression medium (Logical
English, C, Java, etc).

SFM (Strategic Functional Migration) is the process of normalizing a code
base.

A fully (functionally) normalized code base has no 'diamond' structures within
the fabric of inheritance. Just like a fully normalized database has no
replicated data, a fully normalized code base has no replicated functionality.

Note : Our good friend Jon Skeet (God bless him :-) might suggest that the
normalization could be accomplished via the sequential effects of composition.
Normalization being the removal of replicated functionality. The normalization
I speak of is a structural normalization of function - one that controls the
structural accessibility and containment of function.

Functional normalization is a structural aggregation phenomenon, not directly
related to compositional issues/problems/solutions.

In this context (structural aggregation of function - i.e. inheritance) two
visualizations are used to understand the problem of distributing state space
along lines of inheritance. The visualizations are 'diamonds' and 'fans'.

What are 'fans'?

In a normalized code base, instead of diamonds you have 'fans' - radial lines
of inheritance.

In a stratified visual diagram the 'fans' go from the top (derived/child
classes) to the bottom (base/parent classes). This is the self-centric
viewpoint of children. Each child has its own functional 'viewpoint' defined
by its 'lines of inheritance'. Many children, on the top, can have lines
extending to (and thus incorporating) the parents.

In a circular visual diagram each individual base class (parent) is in the
center of its own circle 'looking out' towards all children (derived classes)
that inherit it (a self-centric subjective viewpoint of the parent class).

The functional 'computing fabric' formed by 'fans' is orthogonal in that no
lines 'touch'.

If two children have replicated function, OR if the same function is desired
in more than one child, then the function can be moved out from the children
to a base class and then incorporated back into the children via MI.

Note : Function ALWAYS includes operations (operators, etc) and state space
(numbers, strings, flags, etc).

Case in point. I have a child class with a handy function I want to use 'over
here', so I break it out, make a base class, bring it into both targets via
MI, and voila, I'm done :-)

Again, this is a fully normalized code base, no diamond structures.
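
(A short C++ sketch of the 'break it out and bring it back in via MI' step just described; Invoice, Report, Document and Auditable are hypothetical names used purely for illustration.)

#include <iostream>

// Before normalization (not shown): Invoice and Report each carried their
// own copy of an auditing function - replicated functionality.

// After SFM-style refactoring: the shared function is broken out into its
// own base class ...
class Auditable {
public:
    void Audit(const char* what) { std::cout << "audit: " << what << '\n'; }
};

class Document {
public:
    void Print() { std::cout << "printed\n"; }
};

// ... and brought back into both targets via MI, so each child sits on its
// own radial 'line of inheritance' and no code is replicated.
class Invoice : public Document, public Auditable {};
class Report  : public Document, public Auditable {};

int main() {
    Invoice inv;
    Report rep;
    inv.Audit("invoice created");
    rep.Audit("report created");
    inv.Print();
}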

[2] History of MI (super short version :-)

In the beginning C was 'free form' computing, with an expression medium
incorporating very few restrictions. C++ introduced functional containment,
whose object oriented approach allows the computing fabric to have a 'rigidity'
formed by specifications (public/private/protected - MI) which articulated a
more refined and accurate implementation of the designer's original intentions.

Java, followed by C#, removed the primary normalization mechanism - MI - from
the structural matrix of the computing fabric. The result being that the radial
lines of functionality (fans as defined above) were no longer structurally
possible. Note : Structurally possible as a structural aggregation phenomenon.

It was suggested/proposed/championed that using sequential functional
composition (calling a sequence of functions, much like function composition
in mathematics) could replace the design utility of MI. This only confused the
intellectual landscape of the design medium by removing 'fans' and replacing
them with 'call sequences'.

In real world programming call sequences ARE more verbose (from an information
theory point of view), complex and confusing. Why? Well, it's simple - no
CENTRAL SPECIFICATION MECHANISM exists to define a sequence of calls as a
functional unit (an interesting thought). Such a mechanism would logically cast
the 'process' as an apparently dynamic phenomenon without the static
costs/problems of state space.

A class is a functional unit (structural aggregation). MI defined a structural
aggregation of function to create functional units without code replication.

A call sequence is free form (compositional effects). A method in one class
does not define a functional unit the way a process definition does: a
sequence of steps whose design result is a callable process as an
architectural expression (a purely dynamic functional unit, by the way).

The functional linkage of call structures is unique to each code base. It has
no ability to force the structural design of another code base. To state that
the use of a call sequence at runtime (as a compositional device in time) is
suddenly a structural design phenomenon in space is a circular argument that
misses the point. A sequence of component activity is a valid, complementary,
AND FUNDAMENTALLY DIFFERENT phenomenon from structural aggregation. Whether
such a sequence can be architecturally defined and reused is orthogonal and
immaterial to the complementary nature of composition/aggregation.

Replacing structural aggregation with compositional effects is like saying
a business needs 'three people': an engineer, an accountant and a salesman.
Our company (i.e. my design) will use two engineers and an accountant and we'll
be good to go!!! (no cash flow today :-)

During the C++/MI to C#/SI-Interface period the 'sequence in time' vs
'aggregation in space' approaches were explored, understood and formally
defined (as on Wikipedia).

During that time NO LANDMARK logical/metric analysis was performed in the
programming community to define functional normalization, articulate
structural aggregation and compare the complexities of sequential composition
in time to structural aggregation in space (landmark being - oh yeah, now we
ALL understand MI).

The result being that a generation of programmers was indoctrinated into a
design approach that lacked a clear understanding of functional normalization
and the concept of functional orthogonality within a code base.

The inherent balance of composition/aggregation has also proven elusive, since
many 'so-called experts' cannot see beyond the diamond problem to a computing
universe wherein functional normalization is the 'stable state' of
architectural expressions.

MI proponents believe that 'forced code replication' is an inherent result of
not being able to fully normalize a code base.

At present, and within the cultural context of the programming community,
the degradation of the structural computing fabric due to lack of MI
is a 'de-evolution' towards the problems in the C language without the
necessary freedom/power (due to Java/C# type restrictions) to overcome
those problems.

Fast forward to today.

[3] Summary of a very opinionated dissertation

Nobody has time anymore....

Since we all have schedules, deadlines and prior commitments, many talented
people have been unable (practically) to settle the MI/SI-Interface debate
from a scientific-inquiry/terminology-metric point of view. If this were done,
Jon, Radek, I and others would have enumerated the top three metrics (numerics)
proving our logical positions.

Fortunately forums such as this can funnel ideas to Wikipedia, where
the foundations for such inquiries can have a solid consensus that
leverages the best of the community's efforts in advancing the state
of our art (computing).

If you understand database normalization and data replication you
will have a fundamental understanding of functional normalization
and code replication.

This community dialog on SFM (Strategic Functional Migration) has been quite
helpful in getting excellent input (Radek Cerny - functional normalization) to
better articulate the differences between composition and aggregation.

I hope the above is helpful to you (and hopefully others) in coming to a
better understanding of the utility of MI and its use as a structural
aggregant for the functional normalization of an architectural expression.

Thank you so much for your thoughtful questions and comments.

Shawnk

PS. The 'strategic' in SFM emphasizes the refactoring of a code base to remove
all diamond structures, or to change an existing normalized code base and still
retain a normalized state.

PPS. Please do not interpret any of this (the history) as a pejorative polemic
to pound MI into the brains of SI-Interface proponents. This is an attempt to
refine the articulation of the MI/SI-Interface debate. The hope being to move
our various opinions towards scientific inquiry and logical/metric analysis.

PPPS. The greatest cost/work of SFM in terms of intellectual energy is the
refactoring of any 'diamonds' into a set of 'fan downs' and 'fan ups'. In 14
years of C++ coding I only ever had a 'diamond' once, and I just factored it
out. I did have the luxury of doing my own architecture and design, however.

To replicate the SFM process consistently in a corporate code base
with millions of lines of code from thousands of programmers is another
matter. Thus the need for a more stringent approach via a formal
analysis.

PPPPS. Also, I have no problem with allowing diamonds to exist since (IMHO)
the permutations of state space collisions are well known. I would
allow all permutations to exist (architecturally) for the programmer,
choose a syntax spec default and allow a syntax mechanism to change
the default both globally and specifically.

This allows the freedom of non-normalized functionality while retaining
the precise articulation necessary to reflect the intentions of the designer
in the expression medium.
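
(A small C++ sketch of the two state-space-collision permutations mentioned above - duplicated versus shared base state - using the syntax C++ already provides for choosing between them per derivation; the names are hypothetical.)

#include <iostream>

struct State { int value = 0; };

// Permutation 1: duplicated state - each line of inheritance keeps its own copy.
struct DupA : State {};
struct DupB : State {};
struct Duplicated : DupA, DupB {};

// Permutation 2: shared state - both lines of inheritance see one copy.
struct ShaA : virtual State {};
struct ShaB : virtual State {};
struct Shared : ShaA, ShaB {};

int main() {
    Duplicated d;
    d.DupA::value = 1;          // the designer must say which copy is meant
    d.DupB::value = 2;
    std::cout << d.DupA::value << ' ' << d.DupB::value << '\n';  // prints 1 2

    Shared s;
    s.value = 3;                // unambiguous: one shared State subobject
    std::cout << s.value << '\n';                                // prints 3
}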
 
Mark said:
Do you think C# is simpler than C? Have you ever used assembly
language? Geez -- kids these days.

*sigh* ... old people ... :)
It can be both, of course. I admit to a strong prejudice against
duplication. Do you agree that what I call duplication is
indeed duplication?

Then I guess they were idiots for using delegation instead of
inheritance in their book. Since they're not idiots (I corresponded
with Vlissides before he died, and he was definitely not an idiot), I
doubt they made such a sweeping statement. Or perhaps they meant that while
delegation (I think you mean composition, btw) should be preferred,
there are still occasions where both kinds of inheritance are useful.

Okay, now you're just getting pissy. Anyway, since this isn't comp.object,
I'll try to be civil :)

page 20:
"This leads to our second principle of object-oriented design:
Favor object composition over class inheritance."

While you're at it, you might want to re-read the part where they suggest
inheriting from only abstract classes (i.e., Interfaces?) since they provide
little or no implementation.
Since you clearly don't have extensive MI experience, I wonder on what
basis you make this statement?

I prefaced it with 'even in an SI language' since I obviously don't have
experience with an MI language ... read what I write! :)
I don't know if it's the only benefit, but the ability to
automatically compose mix-ins with other functionality is indeed its
major benefit.

By the way, have you considered AOP? I have pretty extensive experience with
that and personally found it quite lacking. Given your MI background, you
might find it more useful.
I'm not arguing in favor of MI in C#. What gave you a different
impression?

Errr ... this is a C# newsgroup? ;)
 
