multiple inheritance

G

Guest

Joanna,

", but then I am more interested in real
Just so I'm not misinterpreted, the newsgroup/engineering community can be
partitioned as;

1. Pragmatic (real world only thank you very much :)
2. Envelope (real world and theory)
3. Applied research (theory - experimental market solutions)
4. Pure research (University - military contractor - etc)

No group is 'better' than the other. Most everyone has a preference.

I'm group two and (I take it) your center point is group one.

I'm always thankful to have group one and three types around me on a team.
Group ones are quicker on the draw than I am, and group threes are great for
code generators and metric analyzers (DALs, etc).

I just wanted to add that the point of my previous post was my appreciation
to the Microsoft Newsgroup team (and to MS as a company) for providing a forum
(this newsgroup) where we can occasionally meet and get/give ideas. The range
of contributions to any given post is what makes this venue a real
world life saver (in terms of time and money).

I have had my rear end (pardon the expression) saved multiple times by the
more pragmatic members of your community (such as yourself) and I just wanted
to clearly note my thanks to you personally and, more importantly, to others
for tons of answers to 'newbie questions' from 'senior engineers'.

Thanks again for your input.

Shawnk
 
G

Guest

I think it's so silly!

I have to apologize for not answering your original post with a more
direct point on MI vs Interface discussions (and 'why not in C#').

I'm a MI AND Interface fan.

IMHO - The functional forte of each (MI and Interface) is;

1. MI - Functional aggregation
2. Interface - Functional decomposition

Point 1 : Aggregation

To 'bring together' a set of functions (F1->Fx - base classes) at a focal
point (the derived class), I prefer MI first, followed up by Interface
implementation if I need polymorphic specialization.

So for aggregation I actually use both. For code reuse, however, you just
can't beat MI. This is where the resource-based
issues/concerns/arguments come in.

To aggregate pre-existing computing functions, MI can be PROVED to be the
best.
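Since C# has no MI, the aggregation described above is usually approximated with interfaces plus hand-written delegation; a minimal sketch (all interface and class names here are invented for illustration):

```csharp
// Hypothetical functional units F1 and F2, each behind its own interface.
public interface IF1 { int F1(int x); }
public interface IF2 { int F2(int x); }

public class F1Impl : IF1 { public int F1(int x) { return x + 1; } }
public class F2Impl : IF2 { public int F2(int x) { return x * 2; } }

// The "focal point": aggregates F1 and F2 by delegation rather than MI.
public class Focal : IF1, IF2
{
    private readonly IF1 f1 = new F1Impl();
    private readonly IF2 f2 = new F2Impl();

    public int F1(int x) { return f1.F1(x); }   // hand-written pass-through
    public int F2(int x) { return f2.F2(x); }   // hand-written pass-through
}
```

Every pass-through member must be written by hand, which is exactly the code-reuse cost that MI would avoid.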

With Microsoft's budget and cultural drive, Anders' team MUST already have
a theoretical analyst who codes metric analyzers to quantify and verify
the dev team's suspicions. I'm sure they already ran the numbers to have
proved point 1.

Point 2 : Decomposition

Once we've aggregated on a focal point we need to 'Cube' the
functionality.

Passing Note : 'Cubing the problem space' is engineering slang for the
following paradigm. Suspend the focal point in a hypothetical cube of six
facets, each facet an 'interface' to present 6 functionally orthogonal
computing units. Of course '6' is a contrived number used for the sake of
a clear visual diagram. The idea is you can 'decompose' the focal point
functions as a pre-existing Fn (functional pass through via MI) or create
a new Fn as an adaptation to a pre-existing functional interface spec.

Perfect focal points are all pass-through (via MI). A theoretical dream
and a pragmatic nightmare (with today's technology). (Hats off to
Joanna.)

So with interfaces we can enforce functional orthogonality into the 'real
world architecture' and produce an 'ahead of schedule' deliverable that
people find 'looks pretty good'.

The nice thing about 'cubing' and interfaces is the ability to
form a 'functional channel' that provides a backbone for a succession
of computing units (classes grouped together in some sort of a general
pattern to process some type of data).
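One way to read the 'functional channel' idea in C# terms: a succession of computing units that each implement one orthogonal facet, chained behind a single interface. A hedged sketch; `IStage` and `Channel` are invented names, not anything from the thread:

```csharp
using System.Collections.Generic;

// One orthogonal "facet" of the cube: a processing-stage contract.
public interface IStage
{
    string Process(string data);
}

public class Trim : IStage { public string Process(string data) { return data.Trim(); } }
public class Upper : IStage { public string Process(string data) { return data.ToUpper(); } }

// The "functional channel": a backbone for a succession of computing units.
public class Channel
{
    private readonly List<IStage> stages = new List<IStage>();

    public void Add(IStage stage) { stages.Add(stage); }

    public string Run(string data)
    {
        foreach (IStage stage in stages)
            data = stage.Process(data);     // each unit hands off to the next
        return data;
    }
}
```

The interface enforces the functional orthogonality the post describes: each unit knows only the `IStage` contract, not its neighbors.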

Summary Visual : Of Point 1 and 2

MI and Interfaces are like an hourglass with the derived class
as the center point (focal point of computing function).

Both can be used for aggregation and decomposition but their inherent
architecture belies their functional forte.

They complement each other when the inherent functional forte of each is
clearly understood. This is conjecture of course (with no formal or
informal proofs).

Why no MI in C#?

I don't know (I love saying that).

What I suspect: from what little I've seen of
Anders in video (MSDN TV, Channel 9, etc.), he;

Guess 1 : Anders - Is not opposed to MI above the covers
Guess 2 : Anders - Has what appears to be an intractable problem below the
covers.

Fear 1 : Anders - Does not have a theoretical analyst to run the numbers and
prove (to him and the team) the premise to back Guess 1 above.

Parting note : The monthly MI discussion in this forum.

Someone wondered if the same people keep posting about MI.

I don't know about others but I never have time for anything and I certainly
should not be doing this now :) This is my first and last post on MI because
it sums up everything I know (or conceivably will know) about MI vs Interface.

My motivation for responding to your post was 'it seems silly'.

An excellent observation if I may add.

Executives (in high pressure start ups) invariably recognize the following
personality types (they exist in all professional communities).

1. Motor mouth
2. Always has to be right
3. Always has to have the last word
4. Etc. (these are not pejorative but 'how to handle' labels)

I suspect that as Human Beings it takes us time (as a community) to
figure something out because we all (at one time or another) become less
than our ideals would have us be.

I personally don't care about MI being in C# Vx since I'm too busy with my
own forte/talent set.

I am (however) passionate about programming and the current 'state of the
art' discussed in this forum.

Your comment on 'silly' translates to the operational/managerial issues
which I trust are minimized by our conduct in this venue.

I never post on forums but I'm so thankful for Skeet, Joanna and many
others. In thanks I thought I'd put in my two cents (just this once) on
MI vs Interface.

I hope the above points on the complementary nature of MI and Interface
provide you with a different perspective of thought that helps you (and
the community) better understand the utility of MI and Interfaces.

Shawnk

PS. I always look forward to those who shoot holes in all the above ideas
as it always helps me better understand stuff. I NEVER take such input
personally (so fire away ;-).
 
M

Mike

Sorry not to insert these points in all the right places in your posts and
replies...

Some observations, from the hip:

(1) Classic MI (like SI) is an "is-a" relationship, i.e., implementation
inheritance. I think this is almost always wrong. An object is almost never
so many things at once -- it may look like different objects from various
angles, or behave like different objects in different contexts, but that
does not mean that its *identity* and *implementation* should be conflated
with these concepts -- at least not by mandate. Classic "merged object" MI
is simply one cheap (conceptually, anyway) way of achieving these goals.
Delegation to other objects can achieve the same results, and I think having
better support for explicit and implicit delegation is better than "object
merging" in most cases.

(2) I think MI is very powerful, but it can have large costs if not done
carefully. This, of course, is true for any language feature -- but there
are some things that are more easily abused than others. If you want "a
slice" of functionality, and use MI to get at it from a class that has this
slice, you end up with all the other baggage of that class. Not just
theoretical baggage, but depending on call-tree snipping by the compiler,
inlining, "v-table" design, late vs. early allocation, etc., real measurable
baggage. A really clever compiler could probably omit both members and code
not used by actual clients, assuming you never want to get at them via
reflection. I agree that better ways to slice-and-dice (whether it's AOP or
whatever) could be useful. This is mitigated by a framework where the
classes were designed from the beginning to be used as "mix-ins", but use by
C# now could result in some truly questionable objects -- as I think ends
up being the case with MI in practice. Some baggage is worse in languages
where all fn's are "virtual" of course, but a lot of powerful MI I've seen
stems from the ability for an object to take on the behavior of a mix-in
class at *run-time* by adding a superclass (consider adding a DB-persistence
manager to objects you don't have the source code for), and this
isn't likely to happen in C#, as a core feature anyway.

(3) I think the most elegant MI designs I've seen are probably in the
Lisp/CLOS and similar areas. This is probably because (a) people there tend
to think much more about the design at this level than most programmers
(not a slight against most programmers -- but most programmers simply have
to get stuff done, and usually in some prescribed way), and (b) because
these languages actually let you control (via multi-method dispatch, macros,
meta-class programming, etc.) exactly how you want your system to work. Most
"mortal" programmers simply don't have the luxury of working on the
meta-meta level, the meta-level, and the problem level at the same time -
especially when the problem level may consist of multiple tiers itself. The
danger is that subsequent programmers may not be so clever that they can
design within this system and maintain it.

I'm not against MI (or for that matter meta-programming of some kind -- I
posted a few weeks ago about wishing the event-on-top-of-delegates mechanism
were done in a more general fashion) but such generalizations need to be
done carefully. I admit, it's pretty bad right now when an O/R mapper forces
you to make its class the "root" of all your objects, forcing you to
"insert" this layer into your hierarchy, because inhertance is a natural way
to approach this in lieu of any other language support -- an alternative way
is to "register" all the objects that require persistance with a manager
object, causing what is, in-effect, a "linked bi-object" (or "co-objects")
as far as the memory manager is concerned, so it would be more efficient to
just glue them togther as classic MI does - so you could conceivably have
delegation with MI efficiency. (The compiler, with or without user hints via
keywords or attributes could figure these mostly-overlapping "co-lifetimes"
out -- let's just say this memory-manager/GC feature is patent-pending by me
on this date:)) This "systems space" problem is one of those cases that may
not fall into the "almost always wrong" category. However, I think in
"problem space" this 80-wrong/20-right rule probably applies. I think MI
just needs to be taken to the next level before it goes mainstream again. I
agree that, even without extra support for "smart" delegation or some other
MI-like abstraction, MI would be useful in the toolbox, as long as it was
approached with caution.

thanks,
m
 
L

Lebesgue

Dear Mr. Shawnk,

I really enjoyed your posts on MI over the last few days. They were really
insightful and showed that you are a very knowledgeable person, with much
insight and experience. I would like to ask you about your point of view on
the following scenario:

Having a class with implicit conversion to another type, along with
delegation enabling some sort of MI in C#.

class A : C
{
    // set of methods

    public static implicit operator B(A a)
    {
        return new B(a);
    }
}

class B
{
    private A a;

    // set of methods

    public B(A a)
    {
        this.a = a;
    }
}

I know this does not solve the issue, but along with delegation enabled by
maintaining a reference to A in B (and possibly including and delegating
methods from A to B), this gives, in some scenarios, a "feeling" that one has
multiple inheritance (e.g. A is C and "kind of is" B). It has helped me a
few times while I thought I needed MI.
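To make the pattern concrete, here is a self-contained variant of the classes above with a hypothetical call site (the member names `FromB` and `Name` are invented for illustration):

```csharp
public class C { }

public class B
{
    private A a;
    public B(A a) { this.a = a; }
    public string FromB() { return "B wrapping " + a.Name; }  // delegates to the wrapped A
}

public class A : C
{
    public string Name = "a1";
    public static implicit operator B(A a) { return new B(a); }
}

public static class Demo
{
    public static string Use()
    {
        A a = new A();
        B b = a;            // compiler inserts the implicit operator: new B(a)
        return b.FromB();   // B's behavior, backed by the wrapped A
    }
}
```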
What's your opinion on this?
 
A

Adam Benson

Not worth much, I know, but I cast my vote with the MI-clan.

I spent several years programming in C++ and found MI in *some few* cases to
be the only thing for the job. You could get the same effect as an interface
with abstract base classes, so it wasn't as though that option weren't open
to you - but I found that in some cases MI was the only thing for the job.

Not having it forces you to cut and paste code - not good for reliability.

The other thing I desperately missed when I switched to the beta release of
C# (I've been developing in it since before the first release) was
templates. I know that's here now in V2, but I will mourn MI.

Cheers,

Adam.
========
 
J

Jon Skeet [C# MVP]

I still believe the demise of C# is inevitable for the above reasons and
will unfold in the future market space history for the above reasons.

The demise of C# is inevitable because of the lack of MI? I think
that's a real stretch. C doesn't have MI either - is the demise of C
inevitable too?

Arguably the demise of any particular language is at least very likely
- what are the chances that we'll be using Ruby, Java, Python or even C
in a thousand years' time? Now, if you want to talk about an *imminent*
demise, that's a different matter - care to put a timescale on when you
think C# will stop being used?
 
G

Guest

Arguably the demise of any particular language is at least very likely
- what are the chances that we'll be using Ruby, Java, Python or even C
in a thousand years' time? Now, if you want to talk about an *imminent*
demise, that's a different matter - care to put a timescale on when you
think C# will stop being used?

Jon,

Always a pleasure to hear from you!

I should do a partial retraction and say 'demotion' instead of demise.

Although some languages like APL (A Programming Language) will exist as long
as our culture does, their utility in present society is gauged as 'value to
software production' and 'market share' per year.

The social utility of most interest to me (and perhaps most engineers)
is the yearly contribution to 'new and improved' (love that phrase)
ways of computing expression (see metric 1 below).

To gauge demotion - a metric profile would be;

1. Current use as a percentage of languages in community conversations.
2. Current yearly contributions as a vehicle for computing expressions.
3. Historical contribution to computing
4. Current per year market share (in its native market) in sales
5. Current use as a percentage of all programmers (on earth)
6. Current use as a percentage of all software produced.

The 'demotion' (no longer demise) I unconsciously implied (via the term
demise) is points 1 and 2 above. The other points follow from the first
two. Demise would be a position existing totally in history (as with, say,
APL - I may be wrong here but you get the point - a very small production
community, market-wise).

Timescale:

If you'll indulge me :)

I speak (inherently) from the perspective of 'Thoughtscale'. That is to
say, the exponential (currently, unless some Atlantean event occurs) rise in
the thinking behind our current social technology. I can backtrack this
to answer 'timescale'.

The 'time of demotion' would span two seminal events in the way
we UNDERSTAND computing. Instead of 'years' I look at 'how we think
as a community'. This we can map to timescale since the timescale is
an effect of the thoughtscape (for lack of a better term).

So for computing expression technologies...

Two seminal events occur in the (A) near term and in the (B) long term.

Near term is a 'market challenger' we shall call 'Language X'. Long term
is the ability of machines to think like men - real intelligence - not
artificial.

The long term perspective is given only to 'frame' the time period of C#'s
non-historical existence.

A. Social inception date : MI + Interface inclusive RTM release of Language
'X'.

In passing the market context for the market entry point of X would be;

-- Assumptive criterion : Language X has comparable core functionality
-- Other language offerings non-C# and non-X

B. Social inception date : RTM release of a 'compiler' driven by human
speech input.

The market context for this means that von Neumann systems are a completely
embedded commodity in a higher level of technology (thinking
machines).

To focus on 'what and when' for the C# demotion I'll include the scenario
where my premise 'inevitable demotion' is wrong.

Scenario 1 : No demotion

C# Vx (in future) includes MI with Interfaces AFTER INFORMAL AND FORMAL
PROOFS SHOW the value of MI inclusion.

In passing, all C# competitors say 'D'oh!'.

Scenario 2 : Demotion across technology line Point A->B above in 'thoughtscape'

Demotion is clear (to most everyone) 5 years after point A, the RTM
release of X.

ANSWER: (To your timescale request)...

Because of the impact of this forum (C# as primary vehicle for computing
expression evolution) on the computing thoughtscape I would say point A
happens in 3 to 7 years from today (a hipshot).

The 'impact' effects funnel out via market competitors reading this forum.

Hypothetical Scenario: (For seminal market event A)

Because the Chinese have cuneiform, a FOUNDER finds a semantic
expressive style (for computing) that is contained in the core X artifacts
(used by role classes coded by the user community of language X). The social
context is too off subject (Chinese entrepreneurs and startups) but you get
the idea of the true 'different path branch' in the computing thoughtscape
brought about by language X.

Summary : (Tongue in cheek :)

I admit language X's ascendance depends more on its fundamental
semantic utility than on MI (its programming 'style'). The 'rate of X
ascendance' hinges on;

1. True semantic utility
2. Incorporation of MI and Interface
3. C# never incorporating MI

I think 'language X' is inevitable because of what I find (in MY coding
style) based on points 1->3 (directly above) occurring.

The second part of your post (again thank you) concerns *imminent* demise,
which is point B. That is to say, point B is the point of 'imminent' (love
your term there) demise even if C# is still the market leader.

I conjecture at best 2012 (five years from 2007) for point B but it may
take as long as 3 more 'career time' generations (3*12) or 36 years from
today.

[Career generations the 12 years where workers are really hot and
productive]

That would put the horizon at (today plus 36) 2042.

Thanks for your input.

Shawnk
 
G

Guest

One addendum:

Language X is envisioned to be the last (I know you're laughing - it's OK) or
close to the last 'human programming language'.

It would (ideally) be a complete, closed and correct summation (and
reduction) of computing expression in von Neumann systems.

It would lead to pattern based role/artifact oriented virtual reality
that would provide a logical domain for thinking machines to operate on
(Machine generated code).

Post-Language X offerings would be stylistic expressions but would not have
the inherent impact of Language X. Language X is the last hurrah.

Machine generated code would subsume the programming market space with a
more powerful tool set and expressive paradigm (logical thought).

So when I said demise/demotion I really meant the above as a context for
the historical closure of the MI vs Interface debate.

Shawnk
 
G

Guest

Mike,

It was a pleasure reading your response.

I agree with your thinking.
To summarize the train of thought.
---------------------------

Point (1) : Above the covers : Explicit/Implicit Delegation
Delegation to other objects can achieve the same results, and I think having
better support for explicit and implicit delegation is better than "object
merging" in most cases.

Point (2) : Below the covers : Real Baggage

A> ... you end up with all the other baggage ...
B> ... Not just theoretical baggage .... real measurable baggage
C> ... better ways to slice-and-dice ....
D> ... could result in some truly questionable objects ....
E> ... powerful MI .. stems from ... behavior of a mix-in class ...

Point (3) : Use case context : Below the covers : Observation

A> ... MI .. design at this level ..[vs].. have to get stuff done ..
B> ... control ... how ... system to work ..
C> ... luxury ..

Point (4) : Use case context : Luxury of a 'clean slate design' (paraphrase)

A> Most "mortal" programmers simply don't have the luxury of working on the
A> meta-meta level, the meta-level, and the problem level at the same time -

Point (5) : Quality of result : Danger

A> The danger is that subsequent programmers may not be so clever that they
can
A> design within this system and maintain it.

Point (6) : Under the covers : Functional state scope partitioning

A> An alternative way is to "register" all the objects

B> This "systems space" problem is one of those cases that may
B> not fall into the "almost always wrong" category
---------------------------

As you, Adam, myself and others have mentioned, the 'use case context'
and 'quality of use' determine MI usefulness (disregarding below-cover
issues).

As you may have guessed;

(1) My only concern is with ABOVE THE COVER USAGE (Point 1->3, and 6.A,B)
(2) I take contracts with a CLEAN SLATE and no legacy (Point 4.A)
(3) I have (the user needs) a natural TALENT FOR PARTITIONING systems (Point 5.A)

Within the use case domain of the three points above MI is just 'peachy'
(works well).

Anders seems to have serious problems with the Jitter target design along
the lines of the Point 6 state space problem you mentioned (just a passing
hip shot).
Your articulate assessment summarized the market case for MI in C# quite well.

I want to thank you for helping me understand my position (basically the
same as yours I think) in the usage context of the three points just
above.

You really, however, hit the nail on the head when you said...

Point 7 : Use case definition : Next level of MI

A> just needs to be taken to the next level before it goes mainstream again.

C# V.X still has the opportunity to take MI to the 'next level'.

I think many of the C# user community (such as I) would love to see this
happen along the lines you mentioned (...smart..with caution).

Solving the under cover baggage and state space issues (Point 2, Point 6)
are what we pay Microsoft for (read that Anders and team :) in MSDN
licenses.

Being adaptive I'll (most likely) pick up a better language when (and if)
it comes along. I still consider C++ viable (probably next version) for
leading edge work (just beyond the technology envelope).

However I would prefer a 'smart, and cautious' MI to be incorporated after
the contributions of LINQ (what a real Godsend that is) are absorbed into
the community.

Thank you so much for your excellent input.
I hope someone in the C# dev team reads it!

Shawnk
 
G

Guest

Lebesgue,

This is so good I want to 'play' with code for a few days and sleep on it.

I'll post next week in light of 'one last' issue not covered in this thread.

In the derived class (focal point of the MI expression) we need to create
new composite functionality. Case in point: function Fcomp.

To simplify: Fcomp needs F1 and F2 (incoming to the focal point via MI). Fcomp
needs some instance space.

I just want to think over some issues I have with Fcomp situations using
your recommendation.
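For reference, the Fcomp situation sketched under the delegation workaround rather than MI (F1, F2, and Fcomp are the hypothetical functions from the post; the class names and bodies are invented):

```csharp
public class F1Source { public int F1(int x) { return x + 1; } }
public class F2Source { public int F2(int x) { return x * 2; } }

// Focal point: composes the incoming F1 and F2 into new functionality,
// with its own instance space (here, a call counter).
public class Focal
{
    private readonly F1Source f1 = new F1Source();
    private readonly F2Source f2 = new F2Source();
    private int calls;                      // Fcomp's "instance space"

    public int Fcomp(int x)
    {
        calls++;
        return f2.F2(f1.F1(x));             // composite function: F2(F1(x))
    }

    public int Calls { get { return calls; } }
}
```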

Will post next week.

Shawnk
 
H

Helge Jensen

Lebesgue said:
Having a class with implicit conversion to another type, along with
delegation enabling some sort of MI in C#.

One drawback of this workaround is that it breaks
runtime type dispatch and the usual type inspections.
I know this does not solve the issue, but along with delegation enabled by
maintaining a reference to A in B (and possibly including and delegating
methods from A to B), this gives, in some scenarios, a "feeling" that one has
multiple inheritance (e.g. A is C and "kind of is" B). It has helped me a
few times while I thought I needed MI.

Personally I think of this as "implicitly adapting by conversion", which
may be good enough for some cases, but it doesn't buy
implementation inheritance, so it's no good for cherry-picking
implementation.
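Helge's point can be seen directly: the conversion is applied at compile time, so runtime type tests never see B (A and B here are minimal stand-ins for the classes in the earlier post):

```csharp
public class B
{
    public B(object inner) { }
}

public class A
{
    public static implicit operator B(A a) { return new B(a); }
}

public static class Demo
{
    public static bool IsB()
    {
        A a = new A();
        return a is B;       // false: 'is' ignores user-defined conversions
    }

    public static B AsB()
    {
        object boxed = new A();
        return boxed as B;   // null: runtime casts ignore them too
    }
}
```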
 
G

Guest

Excellent article. Loved the 'keep it working' pattern.

Reminds me of the executive summary approach for;

1. Quality (good)
2. Cost (cheap)
3. Time (fast)

The hip shot response is 'you can choose just one'.
The reality is you have to prioritize all three in a shifting field
of tactical and strategic forces.

Thank you again for the article link.

Shawnk
 
G

Guest

Reminds me of the executive summary approach for;
1. Quality (good)
2. Cost (cheap)
3. Time (fast)

Passing note:

Of course, on a project-by-project basis, the budget/schedule freezes the
force field (somewhat, and just enough) to allow a reasonable priority
of QCT to be established.

The executive summary being the project will be 'done right' or
'done cheap' or 'done fast'. Tactical project tendency is 'done cheap/fast' and
strategic project tendency is 'done right'.

Shawnk
 
B

Bruce Wood

What I've always loved about that paper was that Foote and Yoder were
the only academics I had seen who examined the Big Ball of Mud
architecture from the point of view that business programmers aren't
uninformed louts, so there must be good reasons why people build these
systems. As a result they came up with some telling insights.

The original version of the paper was unrepentantly sympathetic to the
business programmer's plight. The current version has been altered
somewhat to play more to the academic audience, so now the original
lives on only in my file drawer. Too bad: it was better before the
revisions. :)

Anyway, it's one of my favourite research papers because it's so true
and so real. Glad you enjoyed it.
 
J

Jon Skeet [C# MVP]

I conjecture at best 2012 (five years from 2007) for point B but it may
take as long as 3 more 'career time' generations (3*12) or 36 years from
today.

[Career generations the 12 years where workers are really hot and
productive]

That would put the horizon at (today plus 36) 2042.

So your conjecture is that C# won't be the dominant language in 36
years. I don't think that's particularly surprising, and I don't think
it's got anything to do with MI. *No* programming language has been
dominant for 36 years (C itself is only 34 years old). It's a bit like
saying, "I don't think that person will live to be 200 years old
because they're left-handed."

Indeed, I would be surprised if object-orientation as we think of it
now is the dominant paradigm in 36 years (or earlier than that).
 
J

Jon Skeet [C# MVP]

Shawnk said:
One addendum:

Language X is envisioned to be the last (I know you're laughing - it's OK) or
close to the last 'human programming language'.

That suggests that there's the possibility for one language to be the
most suitable one for all situations. I don't think that's feasible,
myself - there will always be a call for low level languages, and a
separate need for high level languages. Whether there will always be
scripting languages separate from statically compiled languages, I
don't know but I wouldn't be surprised. I suspect there are likely to
be further divisions we haven't even considered now.
 
I

Ian Semmel

Jon said:
So your conjecture is that C# won't be the dominant language in 36
years. I don't think that's particularly surprising, and I don't think
it's got anything to do with MI. *No* programming language has been
dominant for 36 years (C itself is only 34 years old). It's a bit like
saying, "I don't think that person will live to be 200 years old
because they're left-handed."

Indeed, I would be surprised if object-orientation as we think of it
now is the dominant paradigm in 36 years (or earlier than that).

Computer languages and programming paradigms are to a large extent driven by
technology.

The implementation of C# is really only possible because of the massive amounts
of memory and computing power that we now have compared to 40 years ago.

In 36 years time - who knows ? When we have the equivalent of several
tera-zillion bytes of molecular memory, perhaps we won't need computer languages.

I'm pretty sure that programmers won't be sitting at terminals tapping out code.
 
M

Mike

Ian Semmel said:
I'm pretty sure that programmers won't be sitting at terminals tapping out
code.

I hope not, at least not like today. Ignoring hard AI, natural language and
molecular memory, the basic methods of programming really haven't changed
since its conception. Semi-smart contextual help (Intellisense, etc.),
"visual tools", OOP, etc., are small refinements to the basic model.
("Managed memory" systems are probably the biggest leap in productivity,
IMHO.)
It's amazing really -- almost all of programming (more today than ever) is
just plumbing -- or connecting pieces and fixing "impedance mismatches". But
still, we start from scratch almost every time - even code generators get
you only to step 1 and are usually one-way affairs. For a long long time
we'll probably need a certain amount of low-level coders to bootstrap new
systems and make the new breed of systems, but I'd think more of the
"plumbing" code will probably be taken over by dynamical systems with dials
that let you tweak mappings and implementations, both before and during the
lifetime of the entities involved.

m
 
G

Guest

Jon,

I strongly agree with you. There will always be many languages. Language 'X'
would be a dominant (if not the dominant) market leader, with the metrics of
leadership being;

1. Market size
2. General utility (most application areas)
3. Most contributions to leading the computing envelope.

For example, C++, Java, C#, Basic, Cobol, Fortran, XML, HTML are all
important languages with contributions in their respective areas of focus.

But, IMHO, C# is my favorite, with the exception of the MI feature of C++
(that's a personal preference). So I still think a 'Language X' market shift
(as per this thread) is quite feasible.

So, I agree

1) MOST suitable for ALL occasions - No.
2) MOST suitable for MOST occasions - Yes.

(comparing Language X to other contenders for the MOST/MOST metric).

As always Jon, a true pleasure to hear your thoughts.

Shawnk
 
