Strategic Functional Migration and Multiple Inheritance

  • Thread starter: Guest
As someone who used MI (a mistake) in an MFC project, I can assure you there are
no benefits.

E.g. if you have class C derived from A and B, but then you want to add a new
class A1 derived from A that adds extra functionality, and a new class B1, then
class C is screwed up and it all gets very messy.
 
Ian Semmel said:
As someone who used MI (a mistake) in an MFC project, I can assure you
there are no benefits.

E.g. if you have class C derived from A and B, but then you want to add a
new class A1 derived from A that adds extra functionality, and a new class
B1, then class C is screwed up and it all gets very messy.

In that scenario, C would not be affected, so there must have been more to
it than that.

I must say that I've found MI most useful in mixin situations, where the
classes I'm deriving from are orthogonal. The "SeaPlane derives from both
Plane and Boat" idea is not a good use of MI.
 
Everything can be implemented through ones and zeros.

I think you're on the wrong list ... ;)
There is nothing
-- pattern or application -- that requires multiple inheritance,
single inheritance, or even a programming language. People with
experience in MI, however, feel they are more productive with its
judicious use.

Observer is one pattern that the GoF implement using inheritance, so
obviously if you also needed to inherit from something else, you'd
want multiple inheritance.

I looked up the observer pattern in GoF. The two implementations I'm referencing
are the one that uses MI in GoF (DigitalClock) and the SI version at http://www.dofactory.com/Patterns/PatternObserver.aspx

The MI version (I think I'm about to diss GoF, kill me now! ;]) seems to
lack division of responsibility. The DigitalClock doesn't really specialize
the behavior of the Widget it's inheriting from; what it's really doing is
extracting data from the subject prior to calling Draw(). In ASP.NET terms,
this seems like more of a PreRender() task than part of the Render(). In
more generic terms, it seems like a data-binding operation is being performed
inside the specialized Draw() to compensate for the widget not allowing us
a way to plug in a different datasource.

If we were to take that away, we'd be left with a pluggable subject and pluggable
datasource, which brings me to the SI version.

Assuming the widget doesn't allow us to plug in a datasource, the SI version
would end up with extra code to adapt the Observer to set the state on various
types of widgets.
But there's nothing that can be done with inheritance (forget multiple
inheritance) that can't be done with composition and delegation. Or
ones and zeros.

Does that answer your question? Frankly, I think you've already
answered it to your own satisfaction.

If this is how multiple inheritance is sold to non-believers, I can see why
it hasn't gained any traction :)

Feel free to remind me how lost I am :)
 
Ian said:
As someone who used MI (a mistake) in an MFC project, I can assure you
there are no benefits.

E.g. if you have class C derived from A and B, but then you want to add
a new class A1 derived from A that adds extra functionality, and a new
class B1, then class C is screwed up and it all gets very messy.

Are you talking about the Diamond problem (http://en.wikipedia.org/wiki/Diamond_problem)...?
 
Saad Rehmani said:
If we were to take that away, we'd be left with a pluggable subject and
pluggable datasource, which brings me to the SI version.

The single inheritance version (as do all single-inheritance replacements of
multiple inheritance implementations I've seen) replaces inheritance with
delegation. I don't like that approach because it means duplication, and I
hate duplication. If you change the delegated-to object's interface, you
have to do the same to the delegator's interface (usually). It's not a big
deal, and lots of VB, C# and Java programmers are perfectly comfortable with
it, but I prefer to let MI do the work for me.

MI has given lots of programmers problems (there's another poster on this
thread who doesn't even seem to be quite sure what his problems were), and
simplicity is good, too.

MI has a serious problem in the realm of Java and C# in that everything
descends from a single base class. Does a multiply-derived object have two
Object bases or one? C++ solved that, but still some folks had trouble with
it.

My point, however, is that four of the most influential software thinkers
ever were quite happy with MI. Five, if you count me. :) That should mean
something.
If this is how multiple inheritance is sold to non-believers, I can see why
it hasn't gained any traction :)

I never try to sell things to non-believers.

///ark
 
Saad Rehmani said:
Are you talking about the Diamond problem
(http://en.wikipedia.org/wiki/Diamond_problem)

He might be, but his description doesn't indicate as much. The "deadly
diamond of death" would still be a problem with C (unless he used virtual
inheritance) long before introducing A1 and B1. The hierarchy described is

A --|
    |------ C
B --|

Changing that to

A --|------ A1
    |------ C
B --|------ B1

clearly doesn't affect C.

(BTW, I think that article misses the main problem with the diamond, which I
described in another message.)

///ark
 
Mark said:
He might be, but his description doesn't indicate as much. The "deadly
diamond of death" would still be a problem with C (unless he used virtual
inheritance) long before introducing A1 and B1. The hierarchy described is

A --|
    |------ C
B --|

Changing that to

A --|------ A1
    |------ C
B --|------ B1

clearly doesn't affect C.

(BTW, I think that article misses the main problem with the diamond, which I
described in another message.)

///ark

Unless I need a C1 which inherits from A1, B1 and C
 
Saad Rehmani wrote:

Assuming that widget doesn't allow us to plugin a datasource, the SI
version would end up with extra code to adapt the Observer to set the
state on various types of widgets.

Assuming it's not sealed, one could just subclass it to introduce delegation
and get back to the cleaner version.

I wake up slowly on Mondays :)

<snip>
 
Ian Semmel said:
Unless I need a C1 which inherits from A1, B1 and C

Sure, but that wasn't mentioned as the problem under discussion.

Your scenario could still be handled with virtual inheritance, which would
leave C1 with just one each of the A, B, A1, B1, and C base class objects. The
problem, of course, is if the A1 derivation isn't under your control, and it
wasn't made virtual.
 
Jon,

Sorry for the slow response. Took off a few days (long weekend) to enjoy the
summer. Congratulations on the twins :-)

In answer to some of your excellent points..
I'm afraid I really don't see the relation between the two - it's like
claiming that object orientation is fundamentally linked to Turing
machines, IMO.

In a non-single-CPU design, an argument (against MI, etc.) can be
made by saying each class would have its own processor. The
intent (of the statement) was to focus the context (of MI use)
and reinforce the 'scientific metric' point about physical phenomena
in the context of a 'Turing machine'.
it's like claiming that object orientation is fundamentally linked to Turing
machines, IMO.

Exactly. I accept that you may not agree. No problem.
I found your analogy a bit arrogant -

Cringe. Those who don't know me tend to see that, while those that do
see me as having a casual 'matter of fact' attitude coupled with a
strong passion for truth, justice and the American way :-)
(... I just could not resist. PS. The American way is to question
your superiors .. in this case Anders' approach to language expression
in C#).
I rather suspect that people such as Anders *have* done their homework...
room for useful dialog.

God forbid! I've learned a lot from this forum, and our dialog should
(hopefully) be refined and focused to core points on fundamental expression
constructs in C# and ultimately all computer languages. Our mutual desire for
truth and understanding is intrinsic to 'men of science' personalities
such as engineers, programmers, scientists etc.
For what it's worth, my main beef with MI is that however you choose to
solve the "diamond of death", you've added complexity.

I qualified the 'diamond of death' issue as an 'under the cover' issue to
decouple (for these posts and thread) what the compiler generates from what
the programmer writes.

Thus we come to THE point of this thread, which is the 'complexity' of the
language as it is written.

Complexity being measured (a la information theory) as a simple character
count of the MI/SI expression alternatives. Again -- the focus on language
expression is vital (to this discussion) because this thread would be too big
if we included under-the-cover issues such as the diamond of death. My intent
is to get information from this thread/series of posts to estimate/calculate
these comparative metrics of language expression - not metric comparisons of
compiler generation.
Whenever you make code harder to read, the upside has to be big. Considering
two or more base classes *is* a more complex situation, naturally making the
thought processes harder, IMO. ....
And I would normally favour composition/aggregation in such cases,
rather than repeated code.

This is exactly what I was searching for in this thread. A clear
concise and focused articulation of the conceptual comparison of
MI and proposed alternatives. (From someone who I respect :-)

[1] Composition
[2] Aggregation

In [2], functional aggregation can be accomplished via MI or by embedding a
class within a class. Both (MI, embedding) are aggregation mechanisms. To
clarify your meaning - did you mean, for example:

In creating a functional unit (class) Fu (in C#) that needs functionality from
classes Fx, Fy, Fz (three classes), you would inherit from either Fx, Fy or Fz
and embed the other two classes?

Note that Fx, Fy, and Fz are inherent (complete, closed, concise) and
functionally orthogonal in nature. Thus (for this example) you do not want to
re-compose Fx, Fy, Fz (sine, cosine and other trig functions would be an
example, but it applies to all computing functionality).

Please let me know if I understand your design approach (regarding [2]
aggregation) correctly.

Also, given the 'embed' approach, how would [1] composition be used to add,
take away or bring out the target functionality of Fx, Fy, Fz? (Note the
example criterion of non-restructuring of the Fx, Fy and Fz functionality.)

I want to thank you, again, for your response. Getting a clearly articulated
response from someone of your caliber will help (hopefully) many of us MI
proponents to better understand your points regarding the MI alternatives of
[1] and [2].

My hope is that much of the work done by those responding to these posts (on
MI in C#) will find its way to Wikipedia in the form of a definitive set of
scientific metrics, terminology and concepts regarding functional aggregation
mechanisms in von Neumann machines. Hopefully this will also help the
C#/.NET community toward a better understanding of expression alternatives
for functional aggregation.

Shawnk

PS. I'll have to look into the Groovy language you mentioned.
 
Radek,

Your comment...
My architecture leads to a complete normalisation of one's code base.
Everyone normalises their database design, but repeats code without
flinching.

... makes an excellent point. 'Normalisation' is a key term I was hoping to
find to better express the aggregation and composition effects of MI.

In the future I'll try to develop the term 'functional normalisation' to refer
to aggregating a compositional result (as in Fx, Fy, Fz being three
functionally orthogonal classes in OOP) into one functional unit (Fu created
by MI from Fx, Fy, and Fz).

Thanks for articulating the normalized state of function (in the final
design) that many of us MI proponents unconsciously infer.

Shawnk
 
Jon,

As always, your articulate and thoughtful points are always appreciated.
And the flaw in that logic is the assumption that MI is a requirement for not
repeating code.

That is exactly my point (and hopefully the point of most MI proponents).

Some question as to (1) requirement vs. (2) best solution could be made,
and we should clarify this (1) vs (2) issue before continuing.

Radek's excellent articulation of 'functional normalization' is perhaps the
most succinct term to embody the architectural intention of MI. Which is, a
minimal expression to aggregate class functionality while still keeping the
pre-existing (to the aggregation) intrinsic functional composition (that
classes provide) intact.

Herein 'functional composition' is intended to mean the ability to enforce
'orthogonality' via placing some function in a given class. Each class
expressing a 'dimension' of functionality (as it were).

The resulting structural (not behavioral) separation allows the functionality
(classes) to be combined, via MI, in what I (and hopefully other MI
proponents) know to be a metrically provable minimal expression.

So, your point of...
And the flaw in that logic is the assumption that MI is a requirement for not
repeating code.

... is in fact our mutual question: Is MI an intrinsic (1) requirement because
it provides the metrically proven (2) best solution?

If (2) then (1) would be the synopsis. If (2) MI is the best solution
for both (A) functional normalization (excellent term from Radek) and
(B) functional aggregation, then (1) MI is a requirement for not
repeating code.

Note that the functional normalization comes as an inherent effect
of classes (OOP). Given that any example design is functionally
orthogonal and normalized in the class architecture we then
need a mechanism to aggregate them.

MI provides the only mechanism to functionally aggregate classes
AND pass that functionality through to the surface API of the aggregate
functional unit.

Shawnk

PS. I note in passing that I have issues with the term 'composition' but do
not want to digress from a focus on 'functional normalization'. Basically, in
runtime 'behaviour', composition refers to the effect (in time) of several
functions 'run' to produce a particular effect (result). MI is a structural
aggregation mechanism that is semantically orthogonal to the runtime effect
of composition. The structural vs behavioral aspect is clearly demonstrated
by limiting discussion to the structural aggregation effect of an MI vs an
SI/Interface approach to expression mechanisms.
 
Saad,

Sorry for my slow response. I had a long weekend :-)
Is there a simple example that can underline the realized benefits of using
multiple inheritance?

There are some examples, but I have tried to focus this thread by ignoring
the following -

[1] Syntax
[2] Composition (calling a sequence of functions to produce an 'aggregate
effect' of functionality).
[3] Under-the-cover 'implementation issues' dealing with what the compiler
generates (diamond problem, etc).

The intent of the focus in this thread is to discuss SFM (Strategic
Functional Migration) as a means to find some better terminology related to
the MI/SI-Interface debate.

A key contribution (in this thread) from Radek Cerny is his concept of
'functional normalization', which is a core intent of MI.

Any presentation of syntax should be preceded and followed by an articulation
of the architectural features of interest (the key intellectual points of the
design).

Jon Skeet's excellent comments regarding the use of [1] Composition and [2]
Aggregation are an excellent example of the ineffectiveness of our current
terminology (in OOP) to distinguish the phenomena of 'functional
normalization' from a structural aggregation perspective.

The commutative, associative and distributive aspects of mathematical
effects are the basis for the commutative aspect of [1] Composition. The use
of calling several methods/functions to produce the same computing result as
a single method/function is an approach to replicate/replace/mimic MI.

Rather than presenting a syntactic example of these commutative effects of
composition, I would prefer (being very busy, etc) to provide a link
to Wikipedia that contains a definitive syntactic example -

http://en.wikipedia.org/wiki/Composition
http://en.wikipedia.org/wiki/Composition_(computer_science)

In a similar manner, my presentation of a 'hip shot' example
regarding -

Example of 'Strategic Functional Migration' : SFM

did not present syntax, to focus on the terminology and concepts
at a more abstract (and fundamental) level.

Please excuse me and bear with us in this thread, as my key concern
(besides a very informal feel for SFM understanding) is to articulate
the design features, relative to MI/SI-Interface options, of various
aggregation mechanisms in C# (and ultimately all computer languages).

Having read your discussion with Mark Wilden, he did make a key point
that is very relevant to SFM/MI/SI-Interface and this thread -
My point, however, is that four of the most influential software thinkers
ever were quite happy with MI. Five, if you count me. :) That should mean
something.

I would have been more specific with the point to say ...

The ability of MI (Multiple Inheritance) to provide 'structural aggregation'
in a 'functionally normalized' code base is a requirement for non-repeated
code fragments.

Another way of saying the same point ---

[4] The C# language and .NET cannot support a 'functionally normalized'
code base.

This is a serious and fundamental flaw with .NET, but not a fatal one (long
term).

If [4] is objectively true and can be metrically proven (within the context
of [1], [2], [3] above), then .NET becomes another 'sandbox', similar to the
'sandbox' concept introduced in the Java/Web client architectures.

To articulate the issue of SFM and a functionally normalized code base is
the core issue of this thread.

Radek Cerny's contribution of the idea of 'functional normalization' is a
concept that could be put on Wikipedia (in time) and thus contain the
syntactic examples you (and many of us) desire.

Thank you for your input, and I hope this articulation of 'MI/structural
aggregation/functional normalization' has been helpful to you in better
understanding the key concern of MI proponents with C# (and ultimately .NET).

Please feel free to comment on any concept herein that you feel is 'off the
mark' of the MI/SI-Interface debate.

Shawnk
 
Mark,

Just a note in passing.

Radek's excellent comment on [0] 'functional normalization' is a concise term
to articulate the 'realized benefit' of MI relative to providing a structural
aggregation mechanism while still retaining the functional normalization of
the original code base.

Ian Semmel's problems (valid or not) are indicative of the use of the
'strategic' connotation in the term 'Strategic Functional Migration'.

If we are given a non-functionally-normalized code base there is a cost
associated with refactoring/restructuring it using MI.

If we ignore the obvious port issue of C# to C++ ...

and focus on Ian Semmel's example ...

The use of [1] Stratification in the [2] Inheritance Hierarchy, coupled with
[3] MI, would solve all the problems (messy - meaning, I guess,
non-normalized) presented to Ian (with all due respect, Ian :-).

The [4] inability to normalize the code coupled with [5] an inability to
aggregate the code are two separate problems which are not articulated
(enumerated) in Ian's discussion.

His example is quite helpful, however, in showing the utility of
stratification in providing 'inheritance threads' that would result in the
exact functionality desired (assuming the functionality is well understood
and can be normalized).

Your excellent comments on the use of [1] Stratification to create [5] the
target functionality while maintaining [6] a functionally normalized code
structure enumerate the design approach quite well.

I note that concepts [0] and [1] help to distinguish concepts [4] and [5].

I propose that it may be possible to articulate a [8] formalized stepwise
design approach to [0] functional normalization via [1] Stratification and
[3] MI.

The intent is to express the [0] 'functional normalization' as [9]
'functionally orthogonal threads' of inheritance based on a [3] MI expression
mechanism.

Just a passing thought :-)

Shawnk

PS. Thanks so much for your focused comment on the functional effects
of inheritance in your response to Ian Semmel's design problem.
 
Shawnk said:
Saad,

Sorry for my slow response. I had a long weekend :-)

Congratulations on the weekend. I didn't really have one, hehe :)

I've tried to respond to some of your questions. Please bear with me as I
am treating this as a learning experience.
Is there a simple example that can underline the realized benefits of
using multiple inheritance?
There are some examples but I have tried to focus this thread by
ignoring the following -

[1] Syntax
[2] Composition (calling a sequence of functions to produce a
'aggregate
effect' of functionality).

In general, composition allows more separation of concerns. Without
componentizing, we'd all be duplicating code. Over the last five or so years,
I've become a fan of lightweight component-based models. The granularity of
the calls (SOA is an example) is really a design choice. Different strokes
for different folks ... err, contexts :)
[3] Under the cover 'implementation issues' dealing with what the
compiler
generates (diamond problem, etc).

Concentrating only on the strengths of an MI implementation would lead people
to make ill-informed decisions. I did some reading up on how C++ deals with
the diamond problem. If C# were to actually implement MI, would that approach
work for you? I haven't really worked with MI, so I don't really know if that
solution has any issues or not.
The intent of the focus in this thread is to discuss SFM (Strategic
Functional Migration) as a means
to find some better terminology related to the MI/SI-Interface debate.
A key contribution (in this thread) from Redek Cerny in his concept of
'functional normalization' which
is a core intent of MI.
Any presentation of syntax should be preceded and followed by a
articulation
of the architectural
features of interest (the key intellectual points of the design).
Jon Skeets excellent comments
regarding the use of [1] Composition and [2] Agregation are an
excellent
example of the ineffectiveness
of our current terminology (in OOP) to distinguish the phenomena of
'functional normalization' from
a structural agregation perspective.
The communitive, associative and distributive aspects of mathematical
effects is the basis for
the communitive aspect of [1] Composition. The use of calling several
methods/functions to produce
the same computing result as a single method/function is an approach
to
replicate/replace/mimic MI.
Rather than presenting a syntactic example of these communitive
effects of
composition
I would prefer (being very busy, etc) to provide a link
to WikiPedia that contains a definitive syntactic example -
http://en.wikipedia.org/wiki/Composition
http://en.wikipedia.org/wiki/Composition_(computer_science)
In a similar manner my presentation of a 'hip shot' example regarding
-

Example of 'Strategic Functional Migration' : SFM

did not present syntax to focus on the terminology and concepts at a
more abstract (and fundamental) level.

Please excuse me and bear with us in this thread as my key concern
(besides a very informal feel for SFM understanding) is to articulate
the design features, relative to MI/SI-Interaface options, of various
agregation mechanisms in C# (and ultimately all computer languages).

Having read your discussion with Mark WIlden he did make a key point
that is very relevant to SFM/MI/SI-Interface and this thread -
My point, however, is that four of the most influential software
thinkers ever were quite happy with MI. Five, if you count me. :)
That should mean something.
I would have been more specific with the point to say ...

The ability of MI (Multiple Inheritance) to provide 'structural
agregation' in a 'functionally normalized' code base is a requirement
for non-repeated code fragments.

Based on earlier posts in this thread (and I believe Mark agreed, he just
found the MI version more elegant in the end), it seems that most of the
code can be normalized if a delegation-of-concerns approach is used.

If I'm using the wrong word, please look at the code posted earlier. The only
part that couldn't be normalized was any adaptation, and that would only have
to happen once.
Another way of saying the same point ---

[4] The C# language and .NET can not support a 'functionally
normalized' code base.

This is a serious and fundamental flaw with .NET but not a fatal one
(long term).

If [4] is objectively true and can be metrically proven (within the
context
of [1],[2], [3] above)
then .NET becomes another 'sandbox' similar to 'sandbox' concept
introduced
in the Java/Web client
architectures.
To articulate the issue of SFM and a functionally normalized code base
is the core issue of this thread.

Radek Cerny's contribution of the idea of 'functional normalization'
is an
concept that could be put
on WikiPedia (in time) and thus contain the syntactic examples you
(and many
of us) desire.
Thank you for your input and I hope this articulation of
'MI/Structural
agregation/functional normalization'
has been helpful to you to better understand the key concern of MI
proponents with C# (and ultimately .NET).
Please feel free to comment on any concept herein that you feel is
'off the mark' of the MI/SI-Interface debate.

Shawnk
 
[3] Under the cover 'implementation issues' dealing with what the compiler
generates (diamond problem, etc).

<snip>

Just one thing: how you deal with the diamond of death is *not* an
implementation issue - it's a semantics issue. You need to define what
the semantics should be. I don't care how the compiler then implements
the spec, but I would care very much what the spec is.
 
Shawnk said:
That is exactly my point (and hopefully the point of most MI proponents).

It's being treated as a given, however - which I don't grant, as I
don't repeat code despite not using MI.
Some question as to (1) requirement vs. (2) best solution could be made,
and we should clarify this (1) vs (2) issue before continuing.

Even if MI were the best solution, it is not required while it is not
the *only* solution.
Radek's excellent articulation of 'functional normalization' is
perhaps the most succinct term to embody the architectural intention
of MI. Which is, a minimal expression to aggregate class
functionality while still keeping the pre-existing (to the
aggregation) intrinsic functional composition (that classes provide)
intact.

Except that you get not just the aggregate but potentially the
interference. I'm already not a fan of overuse of inheritance - see

http://msmvps.com/jon.skeet/archive/2006/03/04/inheritancetax.aspx

for more on this. I *generally* prefer composition to inheritance, even
without considering MI. MI just makes the same problem more complex,
IMO.
So, your point of...

... is in fact our mutual question: Is MI an intrinsic (1) requirement because
it provides the metrically proven (2) best solution?

If (2) then (1) would be the synopsis. If (2) MI is the best solution
for both (A) functional normalization (excellent term from Radek) and
(B) functional agregation then (1) MI is a requirement for not
repeating code.

No, that doesn't prove that it's a requirement, even if it's the best
solution. Something being the best solution doesn't mean that it's the
*only* solution. If another solution meets the requirement of not
repeating code, then MI itself is *not* a requirement for not repeating
code.
Note that the functional normalization comes as an inherent effect
of classes (OOP). Given that any example design is functionally
orthogonal and normalized in the class architecture we then
need a mechanism to aggregate them.

MI provides the only mechanism to functionally aggregate classes
AND pass that functionality through to the surface API of the aggregate
functional unit.

I prefer implementing multiple interfaces, and delegating the
implementation of each interface member to a separate implementation,
perhaps changing some of the implementations. Are there issues with
that approach? Yes, it's not always perfect. Does it avoid some of the
problems with inheritance? Absolutely. Could it be better supported in
languages? Yup. Does it provide a way of avoiding repeating code but
allowing an API which supports multiple interfaces? Absolutely.
 
Shawnk said:
In answer to some of your excellent points..

I'll only answer a few of your answers, again due to time I'm afraid.
In a non-single CPU design an argument (against MI, etc) can be
made by saying each class would have its own processor. The
intent (of the statement) was to focus the context (of MI use)
and re-enforce the 'scientific metric' point about physical phenomena
in the context of a 'Turing machine'.

That seems like a very odd analogy, I'm afraid - I can't see how it
provides any benefit.
Cringe. Those who don't know me tend to see that while those that do
see me as having a casual 'matter of fact' attitude coupled with a
strong passion for truth, justice and the American way :-)
(... I just could not resist. PS. American way is to question
your superiors .. in this case Anders' approach to language expression
in C#).

A matter of fact attitude is fine when we're talking about matters of
fact. I'm often viewed as arrogant on those issues as well - issues
where one can point to a spec and *prove* correctness. This is a matter
of *opinion* however, which is very different.
I qualified the 'diamond of death' issue as an 'under the cover' issue to
decouple (for these posts and thread) what the compiler generates from what
the programmer writes.

As I've said elsewhere, it's *not* an implementation issue. The effect
would have to be covered by the language specification.
Complexity being measured (a la information theory) as a simple character
count of the MI/SI expression alternatives.

I have to say, I think that's a very silly way of measuring complexity.
Would C# be a "simpler" language by making all the keywords single
letters? I don't believe so. It *certainly* wouldn't be a more readable
language.

The complexity I'm talking about is how hard it is to think about the
object model, and I believe that MI makes the model more potentially
complicated.
Whenever you make code harder to read, the upside has to be big. Considering
two or more base classes *is* a more complex situation, naturally making the
thought processes harder, IMO. ...
And I would normally favour composition/aggregation in such cases,
rather than repeated code.

This is exactly what I was searching for in this thread. A clear
concise and focused articulation of the conceptual comparison of
MI and proposed alternatives. (From someone who I respect :-)

[1] Composition
[2] Aggregation

In [2], functional agregation can be accomplished via MI or by embedding a
class
within a class. Both (MI, embedding) are agregation mechanisms. To clarify
your meaning - Did you mean - for example ...-

In creating a functional unit (class) Fu (in C#) that needs functionality from
classes Fx, Fy, Fz (three classes) you would inherit from either Fx, Fy or Fz
and embed the other two classes?

No. I would implement Ix, Iy and Iz by delegating each of the
implementations to an instance of each of Fx, Fy and Fz. Those
instances may be created by the class or specified externally (giving
flexibility). When writing Java in Eclipse, this is even supported very
simply by the IDE itself. It would be nice if there was a way of
expressing it in the language itself, but that wouldn't be equivalent
to MI.
 
Saad,
Based on earlier posts in this thread (and I believe Mark agreed, he just
found the MI version more elegant in the end), it seems that most of the
code can be normalized if a delegation of concerns approach is used.

The phrase 'more elegant' I interpret to mean -

[1] Least character count in the aggregation expression.
[2] Maximum usable function within the body of the target class (created via
MI).
[3] Maximum usable function exposed to client users of the target class.

Jon Skeet's excellent point (about using the composition/aggregation mechanism
of Interface Inheritance) does not address the lack of [3] client exposure to
functionality provided by [4] embedded classes within the target class.

Hopefully Jon will provide us with a metric/construct (any metric/construct)
that can quantify/qualify the 'elegant' semantic inferred by Mark and
brilliantly articulated by Redek's qualification of 'functional normalization'.

In Jon's proposed design approach, the increased character count needed to
'bring out' the embedded class functionality to the surface API of a C#-based
embedded aggregation approach is indicative of an 'inelegant' expression (IMHO).

That's (inelegant) an understatement actually - C#'s structural aggregation
mechanisms are laughable (presenting Multiple Interface Inheritance (MII) as a
structural aggregation mechanism) - except for the very real under-the-cover
issues that must be addressed in a multi-language sandbox like .NET.
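The 'character count' cost of bringing embedded functionality out to the
surface API can be made concrete with a small sketch (Java here rather than
C#, and all names are hypothetical):

```java
// A hypothetical embedded class whose API we want clients to reach.
class Engine {
    int rpm() { return 900; }
    void start() {}
    void stop()  {}
}

// (a) Per-method forwarding: this is the character-count cost the post
// refers to - every surfaced member is re-declared - but the client
// never sees the embedded type.
class CarForwarding {
    private final Engine engine = new Engine();
    int rpm()    { return engine.rpm(); }
    void start() { engine.start(); }
    void stop()  { engine.stop(); }
}

// (b) Exposing an accessor: cheap to write, but it leaks the embedded
// type into the public API (clients call car.engine().start()), so the
// outer class no longer controls its own surface.
class CarAccessor {
    private final Engine engine = new Engine();
    Engine engine() { return engine; }
}
```

Under MI the forwarding members in (a) would come for free, which is the
expressive gap being argued about; whether that gain outweighs the added
model complexity is the thread's open question.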

Anders is justifiably concerned with these very important under-the-cover
issues.

Also, we are fortunate that Jon has taken the time to give us some excellent
points to discuss (composition/aggregation) regarding the MI/SI-Interface
issues.

I do want to note that, now that I think about it, there is an uncanny
similarity between [5] replicated data in a non-normalized database and
[6] replicated code in a non-normalized code base (functionally normalized,
that is). The concept of [7] normalization is a very fundamental concept that
could be formally related to metrics that can prove the quality of normalized
vs non-normalized code text (the language expression - not the
compiler-generated implementation).

If this potential is realized, a metric comparison between
expression/implementation features of Unix vs .NET could be made.

Hopefully the articulation of 'functional normalization' and the fact that
.NET may (jury still out on this) not support a 'functionally normalized'
code base will be helpful to better understanding SFM (Strategic Functional
Migration) in the context of a .NET vs a non-.NET implementation (for
large-scale code bases with thousands/millions of lines of code).

Thank you for pointing out the 'elegant' focus of Mark's MI comment in an
'above cover' context that focuses purely on the expressive capabilities and
features of a language (as opposed to the quality/lack-of-quality of a
particular implementation generated by a particular compiler).

Shawnk
 
Jon,
...it's a semantics issue...

I think there are both semantic and implementation aspects to MI.

That would be a fourth issue to ignore, namely the semantic meaning of MI in
the context of several potential inclusion candidates...

Just kidding (could not resist).

I do think the selection of multiple candidates in an MI context is a solvable
problem (in theory) that does not present a significant problem to MI.

The semantics of candidate selection (in practice) has a significant political
engineering aspect (language standard) however.

Since we are not hindered by these politics we should be able to solve the
semantic issue. I will review the semantic solutions to the diamond problem
and try to articulate the solution in a separate post dealing with the
semantic meaning of MI in the context of multiple candidates (for target
inclusion).
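As a concrete reference point for 'candidate selection', Java 8 default
methods model a small version of the diamond problem: when two inherited
candidates collide, the language refuses to pick one silently and requires
an explicit choice. The class names below are hypothetical, echoing the
Plane/Boat example from earlier in the thread:

```java
// Two bases each supply a candidate implementation of move().
interface Plane { default String move() { return "fly"; } }
interface Boat  { default String move() { return "sail"; } }

class SeaPlane implements Plane, Boat {
    // Without this override the class does not compile: the compiler
    // reports the conflict instead of choosing a candidate itself.
    @Override
    public String move() {
        return Plane.super.move();  // explicit candidate selection
    }
}
```

This is one possible semantic policy (forced explicit resolution); other
languages take different positions, such as C++'s virtual inheritance or
linearization orders, which is exactly the design space the post is asking
us to enumerate.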

As a precursor to that post, it would behove us to identify the problem
(within this thread) before introducing a solution (in another thread).

Do you have any take on the various approaches to the semantics of candidate
selection? (in the context of MI). Or do you think that only one simple
problem/context exists?

Better yet, do you have an enumerated set of categories that clearly
distinguishes the different contexts of meaning that give rise to multiple
candidates in the first place? (problem definition)

The context categories would be inclusive (I think) of both under-cover
selection and above-cover semantics (from my point of view). But once noted,
the above-cover semantics should be fairly easy to enumerate and solve.

The under-cover implementation aspects would be a little more involved and
best looked at after a clearly articulated problem/solution discussion.

Thanks as always for your input and thoughtful comments.

Shawnk

PS. Redek Cerny's contribution regarding functional normalization could be a
significant architectural impetus to guide the breakdown of the semantic
context into a good set of categories (for problem articulation).

PPS. Perhaps Redek has a comment regarding the semantic context of multiple
candidate selection?

PPPS. 'Multiple candidate' implies one or more potential classes to select
from (what I call above the cover). 'Which state space' to select is the
phrase I use to enumerate the problem of how state space is distributed (or
not) in the final functional unit. 'State space conflict' is another term I
use for what I consider to be an implementation choice. Your inference that
multiple state space selection/distribution/choices are essentially semantic
in nature is noted (and deeply appreciated) as a valid point and argument.
 
