C# Plugin system - same interface in two different assemblies...


WTH

I am now aware (I am primarily a C++ developer) that in C#, if you compile
the same interface from the same file into two different projects, the
resulting types are actually incompatible.

I found this out because I have written a generic plugin system for my
current and future C# needs. I defined a base plugin-system interface named
IPlugin (original, I know...) which contains some basic plugin information
all plugins of this system must expose (name, friendly name, author,
version, GUID, et cetera). This interface is defined in the plugin
system's C# file and in its namespace (Common.Utilities.PluginSystem). Then
I have an application, let's call it host.exe, which references the plugin
system's namespace and (of course) the 'host' project references the plugin
system's file. The other project is called 'MyPlugin'; it also
references the plugin system's namespace, and its project also references
the plugin system's file.
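For concreteness, the setup might look like the sketch below. The member names are my guesses from the description; the post only lists the kinds of metadata (name, friendly name, author, version, GUID), not exact signatures:

```csharp
using System;

namespace Common.Utilities.PluginSystem
{
    // Base contract every plugin must expose. Member names are illustrative;
    // the post only says "name, friendly name, author, version, GUID, et cetera".
    public interface IPlugin
    {
        string Name { get; }
        string FriendlyName { get; }
        string Author { get; }
        Version Version { get; }
        Guid Id { get; }
    }

    // A trivial implementation, as a plugin project might provide it.
    public sealed class SamplePlugin : IPlugin
    {
        public string Name => "sample";
        public string FriendlyName => "Sample Plugin";
        public string Author => "WTH";
        public Version Version => new Version(1, 0);
        public Guid Id { get; } = Guid.NewGuid();
    }
}
```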

When you compile and build both projects (we haven't even defined an
interface which inherits from IPlugin yet, lol) and run the host.exe
application, the host creates an instance of the plugin manager and tells it
to load the dll produced by the MyPlugin project. No problems. The plugin
manager loads the assembly, checks to see if there are any types of the
interface type requested (host asks the plugin manager to find the interface
type 'IPlugin') and it does. It then makes sure there's a class type which
implements the same interface, and it finds that as well. So far,
everything looks good. The plugin system has found a class in the newly
loaded assembly that implements the 'correct' interface, so it uses the
activator to create an instance of the type that matched the criteria we
specified ('a class implementing an interface called IPlugin').
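A minimal sketch of that discovery step, assuming (consistent with the behavior described) that the manager matches the interface by name rather than by Type identity:

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class PluginLoader
{
    // Find a concrete class in 'asm' that implements an interface with the
    // given simple name. Matching by *name* (not by Type identity) is why
    // the lookup succeeds here even though the cast later fails in the host.
    public static Type FindImplementation(Assembly asm, string interfaceName) =>
        asm.GetTypes().FirstOrDefault(t =>
            t.IsClass && !t.IsAbstract &&
            t.GetInterfaces().Any(i => i.Name == interfaceName));

    public static object LoadFirstPlugin(string dllPath, string interfaceName)
    {
        Assembly asm = Assembly.LoadFrom(dllPath);
        Type impl = FindImplementation(asm, interfaceName)
            ?? throw new InvalidOperationException("no plugin type found");
        // Activator.CreateInstance succeeds; it's the cast in the host that throws.
        return Activator.CreateInstance(impl);
    }
}
```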

Here's where the problem begins. The activator succeeds and returns an
object implementing 'IPlugin', but the returned object handle cannot be
cast to IPlugin in host.exe; the cast throws an exception stating that this
is an invalid cast. Inspecting the returned object in the debugger shows
everything I would expect: an object that implements IPlugin, just as I had
hoped.

After looking into this a bit I've discovered that the runtime does more
than it appears to when validating types. The assembly name is taken into
account. This means that the runtime considers the interface IPlugin, which
is of course defined in both assemblies so that each assembly can use it, to
be different in each assembly.
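This is visible directly: a type's runtime identity is its assembly-qualified name, so the 'same' interface compiled into two assemblies yields two distinct identities. A small illustration:

```csharp
using System;

class Program
{
    static void Main()
    {
        // The assembly-qualified name is the identity the runtime actually
        // uses; for string it looks like
        // "System.String, System.Private.CoreLib, Version=..., PublicKeyToken=...".
        Console.WriteLine(typeof(string).AssemblyQualifiedName);

        // An IPlugin compiled into Host.exe and another compiled into
        // MyPlugin.dll get different assembly-qualified names, so to the
        // runtime they are unrelated types, and the cast throws
        // InvalidCastException despite identical source code.
    }
}
```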

Apparently the 'solution' (it really seems like a hack, honestly) is to put
IPlugin (and presumably the plugin manager as well, while we're at it) in
its own assembly, ensure that the product/tool/whatever remembers to include
it during deployment, and ensure that every developer who wishes to write a
plugin has a copy of that assembly as well.

That's bad enough, but because this plugin system should be generic, a
developer would want to be able to define a new interface in their
tool/product/whatever that derived from IPlugin, for instance
ISuperCoolProductPlugin, and then allow some other developers (potentially
3rd party developers) to be able to create plugins of that type.

Sadly, this means YET ANOTHER assembly must be created, versioned,
supported, and deployed with the first developer's application and given to
the 3rd party to use.

Developer #1 can't put the new interface in the plugin manager's assembly
(which holds IPlugin) because it isn't his to modify, ergo, you have at
least 3 binaries to distribute to run SuperCoolProduct.exe one of which only
holds the interface ISuperCoolProductPlugin.

That seems ridiculous and very short-sighted. Given the enormous
capabilities of reflection and the runtime, how come it can't map the
interfaces properly given that they have THE SAME NAMESPACE... lol. You
would think they would provide an attribute for this (or have they already?)

I'm relatively new to C# so I'm wondering if my approach to plugins is
outdated (pre 2.0) or I've missed something like generics and interface
covariance...?

This issue with types in assemblies (especially interfaces) always being
incompatible seems to be a very big issue in regards to code re-usability.

Anytime two assemblies plan to work together and reference the 'same type'
they can't.

Am I using the wrong approach (vanilla interfaces)?

WTH

P.S. BTW, please don't see this as my disrespecting C# because I'm not, it's
fantastic, but so is C++ and that hairy mother has some serious warts - C#
has a few bumps as well, lol. I really just want to write a generic,
re-usable plugin system that has as few deployment dependencies as possible.
 

Lasse Vågsæther Karlsen

WTH said:
I am now aware (I am primarily a C++ developer) that in C# if you
reference the same interface from the same file in two different
projects the types are actually incompatible.
<snip lengthy explanation>

You've got one option: make a separate assembly declaring the interfaces
you want to be common.

You say it looks like a hack, but .NET is about security and guarantees
in this regard, and the only way to ensure that two interfaces are the
same is to enforce that they are indeed the same. Not just that they
look the same.

So you need the following assemblies (at least):

1. Application host
2. Plugin types
3. Plugin

The host and the plugin both need to reference the common type assembly,
where your interfaces are declared.
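Collapsed into one file for illustration (in reality these would be three separate projects; the names are made up):

```csharp
using System;

// --- PluginContracts.dll: the one shared assembly; the only place IPlugin lives ---
namespace PluginContracts
{
    public interface IPlugin
    {
        string Name { get; }
    }
}

// --- MyPlugin.dll: references PluginContracts.dll ---
namespace MyPlugin
{
    public sealed class GreetingPlugin : PluginContracts.IPlugin
    {
        public string Name => "greeting";
    }
}

// --- Host.exe: also references PluginContracts.dll ---
namespace Host
{
    public static class Demo
    {
        public static string Run()
        {
            object raw = Activator.CreateInstance(typeof(MyPlugin.GreetingPlugin));
            // Both sides now agree on a single type identity, so this cast works.
            var plugin = (PluginContracts.IPlugin)raw;
            return plugin.Name;
        }
    }
}
```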

Unless the plugin needs to talk to the plugin manager in any way (i.e.
use its types), you can put the manager in the host application. I'd
probably put that in the external assembly as well but at least here you
can choose.

Note that in .NET 3.5 there is a new plugin system that might define
these common interfaces for you. I have not looked into it yet though so
I can't really say what it would solve for you, but since Microsoft took
the effort to build a plugin system that supports discoverability,
extensibility, and err... unloadability..., then I think it might do
what you need.

If you use .NET 3.5 and Visual Studio 2008 that is.
 

Jeroen Mostert

WTH said:
I am now aware (I am primarily a C++ developer) that in C# if you
reference the same interface from the same file in two different
projects the types are actually incompatible.
Yes. Type compatibility in .NET is binary. If structurally identical types
reside in different assemblies, they are different types. This is by design.

If you want to share a type, share its assembly.
After looking into this a bit I've discovered that the runtime does more
than it appears to when validating types. The assembly name is taken
into account. This means that the runtime considers the interface
IPlugin, which is of course defined in both assemblies so that each
assembly can use it, to be different in each assembly.
Exactly.

Apparently the 'solution' (it really seems like a hack honestly) is to
put IPlugin (and presumably the plugin manager as well while we're at
it) in its own assembly and ensure that the product/tool/whatever
remembers to include it during deployment, and ensure that every
developer that wishes to write a plugin has a copy of that assembly as
well.
This is not a hack. This is the way things are supposed to work. It's the
only way you could guarantee that the types are compatible, without
requiring fully dynamic types from the runtime. However, you're programming
against an interface and you don't need the actual type definition, so all
is not lost. Read on.
That seems ridiculous and very short sighted. Given the enormous
capabilities of reflection and the runtime, how come it can't map the
interfaces properly given that they have THE SAME NAMESPACE... lol.
You would think they would attribute this (or have they already?)
This isn't so ridiculous when you start to think what the runtime would have
to do to ensure that what you want actually *works*. What if I define a
completely different IPlugin in my assembly that occupies the same
namespace? The runtime cannot just assume that everything is probably fine
and then have the application crash on an incompatible object layout.
This issue with types in assemblies (especially interfaces) always being
incompatible seems to be a very big issue in regards to code re-usability.
Well, it's not. The only issue here is that you always need to supply the
bridging components in a shared assembly. Typically, that would be a
GAC-installed assembly.
Anytime two assemblies plan to work together and reference the 'same
type' they can't.

Am I using the wrong approach (vanilla interfaces)?
You can get around this by using COM. Make IPlugin a COM interface and put
it in a type library. Although all developers will need that type library
to develop against, you won't need to distribute it along with the
executable.

However, you will need to write the interop wrapper yourself. The
automatically generated interop wrappers are put in a separate assembly,
which is just what you don't want.

The .NET documentation has plenty of information on COM interoperability. It
works between .NET applications as well as it does between others.
P.S. BTW, please don't see this as my disrespecting C# because I'm not,
it's fantastic, but so is C++ and that hairy mother has some serious
warts - C# has a few bumps as well, lol. I really just want to write a
generic, re-usable plugin system that has as few deployment dependencies
as possible.

What you want would work in C++, because C++ doesn't do runtime checks of
any kind. Even then, it would only work if the compilers produced compatible
object files. If not, then prepare for crashes.
 

WTH

Lasse Vågsæther Karlsen said:
<snip lengthy explanation>

You got one option: Make a separate assembly declaring the interfaces you
want to be common.

My post mentions this method.
You say it looks like a hack, but .NET is about security and guarantees in
this regard, and the only way to ensure that two interfaces are the same
is to enforce that they are indeed the same. Not just that they look the
same.

That's not even true for classes, much less interfaces. You can match types
up, byte for byte, in IL. C# just doesn't appear to let you do this. This
is why I'm looking for alternatives (especially given that I don't know as
much about C# as I should.)
So you need the following assemblies (at least):

1. Application host
2. Plugin types
3. Plugin

The host and the plugin both need to reference the common type assembly,
where your interfaces are declared.

You didn't bother to read my post obviously. I have already explained that
this is the problem, having to do it this way. It works programmatically,
but it results in a poor software engineering implementation. I am
currently using this method.
Unless the plugin needs to talk to the plugin manager in any way (ie. use
its types), then you can put that in the host application. I'd probably
put that in the external assembly as well but at least here you can
choose.

I've written quite a few plugin systems, and it will be of great value going
forward for the plugin system to reside in the assembly that 3rd party devs
will use (especially given that quite often 3rd party developers want their
plugins to be able to communicate to each other via the 'framework' of the
plugin system.)
Note that in .NET 3.5 there is a new plugin system that might define these
common interface for you. I have not looked into it yet though so I can't
really say what it would solve for you, but since Microsoft took the
effort to build a plugin system that supports discoverability,
extensibility, and err... unloadability..., then I think it might do what
you need.

If you use .NET 3.5 and Visual Studio 2008 that is.

A new plugin system? Do you mean a new assembly loading/unloading
mechanism? Interesting. Thanks for the tip.

WTH
 

WTH

Apparently the 'solution' (it really seems like a hack honestly) is to
This is not a hack. This is the way things are supposed to work. It's the
only way you could guarantee that the types are compatible, without
requiring fully dynamic types from the runtime. However, you're
programming against an interface and you don't need the actual type
definition, so all is not lost. Read on.

"This is the way things are supposed to work" - That's very apologetic.
It's supposed to work that way because it does? No offense, but types are
already treated as if they were fully dynamic, and assemblies contain entire
type definitions including inheritance; ergo, it is EASY for the runtime to
recursively evaluate the types. I could even write it myself in C#,
matching up interfaces and then matching up methods from those interfaces on
the classes which implement those interfaces. I am even considering doing
so, when I've got the time (which I currently do not.)
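Such a structural check is indeed writable in user code. A sketch, comparing members by name and by type *full name* rather than by Type identity (since differing identities are the whole problem); note this verifies shape only, and you would still need a forwarding wrapper or proxy to make calls through the "wrong" interface legal:

```csharp
using System;
using System.Linq;

// Does 'candidate' expose a public method matching every method of 'iface'
// by name, return type full name, and parameter type full names? Comparing
// FullName strings deliberately sidesteps assembly-based Type identity.
public static class StructuralMatch
{
    public static bool Implements(Type candidate, Type iface) =>
        iface.GetMethods().All(im =>
            candidate.GetMethods().Any(cm =>
                cm.Name == im.Name &&
                cm.ReturnType.FullName == im.ReturnType.FullName &&
                cm.GetParameters().Select(p => p.ParameterType.FullName)
                  .SequenceEqual(
                      im.GetParameters().Select(p => p.ParameterType.FullName))));
}
```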
This isn't so ridiculous when you start to think what the runtime would
have to do to ensure that what you want actually *works*. What if I define
a completely different IPlugin in my assembly that occupies the same
namespace? The runtime cannot just assume that everything is probably fine
and then have the application crash on an incompatible object layout.

I don't care if the runtime thinks they're different; that's simply the
progenitor of the problem. I care that the runtime won't let me cast between
two completely identical interfaces that only differ by assembly - they have
the same methods, the same signatures, they take the same simple types - and
even if they used complex types the runtime could match them up, et cetera,
ad nauseam, ad infinitum. Again, a user can write C# code to do much the
same thing, but the runtime can't? I think it should. Apparently in 3.5
there are changes to alleviate this shortcoming.
Well, it's not. The only issue here is that you always need to supply the
bridging components in a shared assembly. Typically, that would be a
GAC-installed assembly.

It certainly is. Software Engineering is a discipline of success through
simplicity.

Is it simpler for developers to (a) not use a shared assembly or (b) use a
shared assembly? a.
Is it simpler for developers using an assembly that defines an interface to
(a) create another shared assembly in order to derive from the interface in
the first assembly or (b) not have to do that? b.
Is it (a) more or (b) less complicated for system deployment, configuration,
support, and migration to have more binary dependencies? a.
Is it (a) more or (b) less complicated for a build manager to support
building/versioning a product suite that has more projects rather than
fewer? a.

It's pretty simple really. The issue is just with the 'degree' of
inconvenience/complication involved. For a small team, it's not a big deal,
for a large team it rapidly can become a big deal.
You can get around this by using COM. Make IPlugin a COM interface and put
it a type library. Although all developers will need that type library to
develop against, you won't need to distribute it along with the
executable.

Ugh... I do appreciate the alternative angle (seriously :)), but that
introduces new complexities (not for me, as I've actually helped implement
the COM/DCOM subsystem on Irix [don't ask].) At least, that's my initial
reaction. I'll think on it more. VERY much appreciate the suggestion
though.
However, you will need to write the interop wrapper yourself. The
automatically generated interop wrappers are put in a separate assembly,
which is just what you don't want.

The .NET documentation has plenty of information on COM interoperability.
It works between .NET applications as well as it does between others.


What you want would work in C++, because C++ doesn't do runtime checks of
any kind. Even then, it would only work if the compilers produced
compatible object files. If not, then prepare for crashes.

? I'm not sure what you mean, as GCC, Intel, and Microsoft's compilers all
produce compatible plugins for several products in our product suite. They
work just fine.

WTH
 

Jon Skeet [C# MVP]

Is it simpler for developers to (a)not use a shared assembly or (b)use a
shared assembly? a.

Look at it a different way:

Is it simpler for developers to (a) duplicate code or (b) not
duplicate code? b.

Using a common assembly avoids code duplication.

Jon
 

Jon Skeet [C# MVP]

So you need the following assemblies (at least):

1. Application host
2. Plugin types
3. Plugin

Not necessarily. The plugin types can be within the application host
assembly, which the plugin then references. This works fine, and I've
done it many times.
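That is, the contract can live in the host executable itself, with the plugin project referencing Host.exe (assembly references to an .exe are allowed). Sketched in one file, with made-up names:

```csharp
// --- defined in Host.exe ---
namespace Host
{
    public interface IPlugin
    {
        void Run();
    }
}

// --- defined in MyPlugin.dll, which references Host.exe ---
namespace MyPlugin
{
    public sealed class EchoPlugin : Host.IPlugin
    {
        public bool Ran { get; private set; }

        public void Run() => Ran = true;
    }
}
```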

Jon
 

Jeroen Mostert

WTH said:
"This is the way things are supposed to work" - That's very apologistic.

I'm sorry. Ha!
It's supposed to work that way because it does? No offense but types
are already treated as if they were fully dynamic and assemblies contain
entire type definitions including inheritance, ergo, it is EASY for the
runtime to recursively evaluate the types.

Sure. It's also costly. If we go for consistency, those typing rules would
need to apply to every single type, presumably down to the primitive types
that you can't define. We're talking a completely different approach to
typing here. I find it easy to see why the .NET designers went with the
tried and true approach of just checking a unique identifier for each type
(that is, full type name, including assembly version). More intricate type
systems that fully accommodate "duck typing" are certainly possible, but
TANSTAAFL.
I don't care if the runtime thinks they're different, that's simply the
progenitor of the problem, I care that the runtime won't let me cast two
completely identical interfaces that only differ by assembly - they have
the same methods, the same signatures, they take the same simple types -
even if they used complex types the runtime could match the types, et
cetera ad nauseam ad infinitum. Again, a user can write the C# code to
do much the same thing, but the runtime can't? I think it should.
Apparently in 3.5 there are changes to alleviate this shortcoming.
This discussion is extremely old. What's the identity of a type? Is it its
name, its structure, a combination of the two? Every language needs to pick.
You disagree with the choice the designers made. Which is fine, but they
didn't make the choices they did just to spite you. They went for "easy to
understand" and "will never go wrong".
It certainly is. Software Engineering is a discipline of success
through simplicity.
Simplicity is in the eye of the beholder.

I find the rule that a type's identity is fully determined by the identity
of the assembly it's contained in plus the type's name very simple.

That it happens to be a rule that, for your purposes, turns out to be *too*
simple is another matter.
You can get around this by using COM. Make IPlugin a COM interface and
put it in a type library. Although all developers will need that type
library to develop against, you won't need to distribute it along with
the executable.

Ugh... I do appreciate the alternative angle (seriously :)), that
introduces new complexities (not for me as I've actually helped implement
the COM/DCOM subsystem on Irix [don't ask].) At least, that's my
initial reaction. I'll think on it more. VERY much appreciate the
suggestion though.
You needn't fret much. .NET has very good support for COM and it almost
completely handles all the awful COM stuff you have to swallow in "less
refined" environments like C++ (or, God forbid, C). It's *nearly*
transparent, apart from the need to hang a GUID on your interface and tag it
as COM-visible. The QueryInterface stuff will all take place in the background.
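The two attributes in question, sketched on the IPlugin example (the GUID below is made up; you would generate a fresh one):

```csharp
using System;
using System.Runtime.InteropServices;

// Mark the interface COM-visible and pin its identity to a fixed GUID.
// COM then identifies the interface by IID rather than by .NET assembly
// identity, which is what sidesteps the shared-assembly requirement.
[ComVisible(true)]
[Guid("9E3F2C61-5A0B-4D7E-8C11-2F4B6A9D0E33")]
public interface IPlugin
{
    string Name { get; }
}
```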

I know what I'm talking about, since I've glued up managed and unmanaged
code through COM and this is really easy (compared to having to marshal
everything explicitly). In fact, I've done something very similar to
IPlugin, except that I am using a separate interop assembly because I don't
care about the extra deployment.

If I did care, I could have eliminated it by supplying the managed
definition as a single source file and demanding that everyone copy that,
but, guess what, we think just reusing the same assembly is *simpler*. :)
? I'm not sure what you mean, as GCC, Intel, and Microsoft's compilers
all produce compatible plugins for several products in our product
suite. They work just fine.
That's because they have had to converge to make it work. As long as you
don't get subtle with multiple inheritance, method pointers or exceptions,
those compilers will all produce the same low-level binary stuff to make
sure it'll work with the other compilers. This is also the foundation of
COM. Any option that influences the way the compiler lays out vtables will
break it, though. They work "just fine" because people before you suffered.

More to the point, in C++ you've got no way of ensuring that it'll actually
work. You're counting on the compilers to produce binary-compatible object
files. If for any reason they don't, you won't get a ClassCastException but
an access violation (or data corruption). The high priority the .NET
designers put on avoiding this sort of thing was one impetus for a simple,
easily verified type system.
 

WTH

Jon Skeet said:
Look at it a different way:

Is it simpler for developers to (a) duplicate code or (b) not
duplicate code? b.

Using a common assembly avoids code duplication.

What duplication are you referring to? A developer cutting and pasting an
interface from your SDK documentation? A developer using a skeleton plugin
project you've made available? Worst case, a user typing out the interface
definition?

It is always important to evaluate engineering choices against simple,
moderate, and complex usage, not just the simple.

Imagine that you've got a team of 20 developers developing a large networked
product suite (as I recently did) and they are broken up into basically 3
groups, each group with many subcomponents (services, client applications,
tools, open integration points - i.e. plugin options, for each group), each
of these groups wants to use a common plugin system for all the basic
engineering reasons (testing/debugging/extending all simpler.) Suddenly you
find yourself with DOZENS of assemblies that simply exist to hold interface
definitions so that people can write plugins for the different parts of the
product. You can't put them all in one big interface assembly because then
whenever you add a new interface your QA team must run regression tests on
the entire product suite and their plugins. You also can't do it because if
you update a deployment or migration in the field you're potentially
affecting the entire product suite instead of just the component which has
changed. There are a bunch of reasons why funneling everything shared
between assemblies into a single assembly is a poor idea, absent some more
even-handed mechanism.

I personally find it much more attractive to simply tell somebody working at
corporate research in Princeton "The interface is published in the SDK
documentation you can reach online" rather than "you need to download this
assembly, reference it, and make sure it's in your path, oh, and if you want
to use that other interface, download this other assembly, reference it, and
make sure it's in your path... Et cetera."

Now, the 20 developer example I give is a complex usage example, but it is a
very real example for me.

Again, I'm not bagging on C#, this is just something I have strong
engineering reasons to see as a 'wart.' All languages have them, especially
C++ (which is my primary background.)

WTH
 

WTH

Jeroen Mostert said:
I'm sorry. Ha!

LOL :)
Sure. It's also costly. If we go for consistency, those typing rules would
need to apply to every single type, presumably down to the primitive types
that you can't define.

It doesn't have to be by default. There's no reason you couldn't have an
attribute on a type that specified that the type in question should be
eligible for this kind of treatment.
We're talking a completely different approach to typing here. I find it
easy to see why the .NET designers went with the tried and true approach
of just checking a unique identifier for each type (that is, full type
name, including assembly version). More intricate type systems that fully
accommodate "duck typing" are certainly possible, but TANSTAAFL.

I can understand that, but then why does reflection exist at all, except to
allow, when necessary, the slow and methodical evaluation of types for the
purpose of understanding their usages and capabilities...? That's all I'm
saying. They wouldn't have to stop using the UID that currently exists;
again, I don't care if C# considers them different IF it allows one to be
treated like another due to total compatibility. Whether it assures total
compatibility or not is immaterial, because it doesn't have to be the
default course of action for the runtime.
This discussion is extremely old. What's the identity of a type? Is it its
name, its structure, a combination of the two? Every language needs to
pick. You disagree with the choice the designers made. Which is fine, but
they didn't make the choices they did just to spite you. They went for
"easy to understand" and "will never go wrong".

Again, I don't care if they are identified as different. I care that the
runtime cannot recognize that they are totally compatible types, that they
can be cast to one another or used identically. That's what I care about.
Now, the compiler identifying them as the same is not possible (for
security reasons), so I'm not interested in them being considered the same
definition; I'm interested in them being considered two different
definitions of the exact same thing, and therefore interchangeable.
Simplicity is in the eye of the beholder.

I find the rule that a type's identity is fully determined by the identity
of the assembly it's contained in plus the type's name very simple.

I don't have a problem with that, I have a problem with the runtime not
being able to recognize that I should be able to cast
MyPluginAssembly.IPlugin to MyHostExe.IPlugin without throwing an exception.
That it happens to be a rule that, for your purposes, turns out to be
*too* simple is another matter.

? The rule you wrote above is fine, it's the absence of something so simple
as matching types that I find problematic.
Anytime two assemblies plan to work together and reference the 'same
type' they can't.

Am I using the wrong approach (vanilla interfaces)?

You can get around this by using COM. Make IPlugin a COM interface and
put it in a type library. Although all developers will need that type
library to develop against, you won't need to distribute it along with
the executable.

Ugh... I do appreciate the alternative angle (seriously :)), that
introduces new complexities (not for me as I've actually helped implement
the COM/DCOM subsystem on Irix [don't ask].) At least, that's my initial
reaction. I'll think on it more. VERY much appreciate the suggestion
though.
You needn't fret much. .NET has very good support for COM and it almost
completely handles all the awful COM stuff you have to swallow in "less
refined" environments like C++ (or, God forbid, C).

Hehe, does this send shivers down your spine? lpVtbl->
It's *nearly* transparent, apart from the need to hang a GUID on your
interface and tag it as COM-visible. The QueryInterface stuff will all
take place in the background.

I know what I'm talking about, since I've glued up managed and unmanaged
code through COM and this is really easy (compared to having to marshal
everything explicitly). In fact, I've done something very similar to
IPlugin, except that I am using a separate interop assembly because I
don't care about the extra deployment.

If I did care, I could have eliminated it by supplying the managed
definition as a single source file and demanding that everyone copy that,
but, guess what, we think just reusing the same assembly is *simpler*. :)

It still has problems in a product suite environment, due to forced
aggregation of the interfaces, but there may be ways around that.
That's because they have had to converge to make it work. As long as you
don't get subtle with multiple inheritance, method pointers or exceptions,
those compilers will all produce the same low-level binary stuff to make
sure it'll work with the other compilers. This is also the foundation of
COM. Any option that influences the way the compiler lays out vtables will
break it, though. They work "just fine" because people before you
suffered.

Your argument doesn't make sense to me. You're trying to say it's a bad
approach because it has forced compiler writers to agree on calling
conventions and function table layouts? No offense, but there are a myriad
of other reasons why these things have occurred and why it is important that
compilers produce compatible code, and none of them have to do with plugins -
primarily the compiler's ability to produce code that can be used in a DLL.
Nothing to do with type safety.
More to the point, in C++ you've got no way of ensuring that it'll
actually work. You're counting on the compilers to produce
binary-compatible object files. If for any reason they don't, you won't
get a ClassCastException but an access violation (or data corruption). The
high priority the .NET designers put on avoiding this sort of thing was
one impetus for a simple, easily verified type system.

Again, you're creating what, as far as I can tell, is a specious argument.
I don't have to ensure it will work, I don't have to count on the compiler
to produce binary compatible code (even though they have since the
introduction of DLLs many many years ago - and somewhat before that relating
to calling conventions and linkers.) The only difference between the C++
exception and the C# exception is that .NET defines the exception for me,
doing it in C++ to handle this case is trivial, but I certainly don't have a
problem telling plugin developers - "uh, please actually run your code once
in order to see if it works."

Personally, I think the .NET designers simply avoided the issue out of the
idea that they had more important concerns to address first. I doubt it was
a philosophical decision to make two items which are identical in memory
incompatible just because the definitions of those types are contained in
different places (leading to different fully qualified names) - unlike the
decision to avoid multiple inheritance (except via interface support...)

WTH
 

Jon Skeet [C# MVP]

WTH said:
What duplication are you referring to? A developer cutting and pasting an
interface from your SDK documentation? A developer using a skeleton plugin
project you've made available? Worst case, a user typing out the interface
definition?

All of those situations mean you've got the code in two places. That
doesn't sound like good practice to me.
It is always important to think of engineering choices in the simple,
moderate, and complex usage choices, not just the simple.

Imagine that you've got a team of 20 developers developing a large networked
product suite (as I recently did) and they are broken up into basically 3
groups, each group with many subcomponents (services, client applications,
tools, open integration points - i.e. plugin options, for each group), each
of these groups wants to use a common plugin system for all the basic
engineering reasons (testing/debugging/extending all simpler.) Suddenly you
find yourself with DOZENS of assemblies that simply exist to hold interface
definitions so that people can write plugins for the different parts of the
product. You can't put them all in one big interface assembly because then
whenever you add a new interface your QA team must run regression tests on
the entire product suite and their plugins.

No need. Just put each plugin's interfaces in its host's assembly, and
reference that. If you have many plugins, you have complexity whatever you
do. Duplicating source code just makes that worse, rather than better.
You also can't do it because if
you update a deployment or migration in the field you're potentially
affecting the entire product suite instead of just the component which has
changed.  There are a bunch of reasons why putting everything shared between
assemblies into a single assembly is a poor idea when there's no mechanism
for treating equivalent definitions as interchangeable.

But code duplication is A-OK by you is it? Sorry, I really don't buy
it.
I personally find it much more attractive to simply tell somebody working at
corporate research in Princeton "The interface is published in the SDK
documentation you can reach online" rather than "you need to download this
assembly, reference it, and make sure it's in your path, oh, and if you want
to use that other interface, download this other assembly, reference it, and
make sure it's in your path... Et cetera."

The latter is *much* better IMO.
Now, the 20 developer example I give is a complex usage example, but it is a
very real example for me.

Again, I'm not bagging on C#, this is just something I have strong
engineering reasons to see as a 'wart.' All languages have them, especially
C++ (which is my primary background.)

I'm afraid I see having to duplicate source code as a much bigger wart
than compiling against a fixed binary.
 
J

Jeroen Mostert

WTH said:
It doesn't have to be by default.  There's no reason you couldn't have
an attribute on a type that specified that the type in question should
be eligible for this kind of treatment.
OK, that's an interesting solution. Yes, they could have included something
like that. They still can if we ever get around to CLR 3.0.
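(As an illustration of the idea being discussed: the attribute below is purely hypothetical; nothing like it exists in any shipped CLR. It just shows what an opt-in marker for structural interface compatibility might look like.)

```csharp
// Purely hypothetical sketch - no such attribute exists in the CLR.
// An opt-in marker for structural interface compatibility might look
// like this if the runtime ever supported the idea.
using System;

[AttributeUsage(AttributeTargets.Interface)]
public sealed class StructurallyCompatibleAttribute : Attribute { }

// Host and plugin could each compile their own copy of this definition;
// a hypothetical runtime service would then be allowed to treat the two
// types as interchangeable because both explicitly opted in.
[StructurallyCompatible]
public interface IPlugin
{
    string Name { get; }
    Guid Id { get; }
}
```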
I can understand that, but then why does reflection exist at all except
to allow, when necessary, the slow and methodical evaluation of types
for the purpose of understanding their usages and capabilities...?

Reflection is just dandy, but it's no basis for a type system of a
statically typed language. It's a "hey, if you can't do it that way, there's
always reflection, I mean, if you have to" solution. And for sure, you can
solve the problem you've got with reflection too. It would be horrible, but
you could do it.
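(For concreteness, here is a minimal sketch of that "horrible but possible" reflection route: the host never casts the plugin object to its own interface type at all, and just looks members up by name on whatever object the Activator returned. All names are illustrative.)

```csharp
// Late-bound access: no shared interface type is needed, because members
// are resolved by name at runtime on whatever object the plugin assembly
// produced. Every call pays reflection overhead and loses compile-time
// checking - which is why it's "horrible".
using System;
using System.Reflection;

public static class LateBound
{
    public static object GetProperty(object plugin, string propertyName)
    {
        PropertyInfo p = plugin.GetType().GetProperty(propertyName);
        if (p == null)
            throw new MissingMemberException(plugin.GetType().FullName, propertyName);
        // Works regardless of which assembly declared the "interface".
        return p.GetValue(plugin, null);
    }
}
```

Usage would be something like `string name = (string)LateBound.GetProperty(pluginInstance, "Name");` for every single member access.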
Again, I don't care if they are identified as different. I care that
the runtime cannot recognize that they are totally compatible types,
that they can be cast to one another or used identically. That's what I
care about. Now, the compiler identifying them as the same, is not
possible (for security reasons), so I'm not interested in them being
considered the same definition, I'm interested in them being considered
two different definitions of the exact same thing and therefore
interchangeable.
Ah, I see what you're getting at now. You're not interested in unifying the
types, you want the runtime to transparently marshal between these nominally
different types, after recognizing that this is possible. This is
essentially what you achieve with COM by saying "yes, I implement this
interface with this UID too", but then you want it by structural
compatibility, not an agreed-upon UID.
Hehe, does this send shivers down your spine? lpVtbl->
"Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" sounds less scary.
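(The COM mechanism described above is still visible from C# interop: any definition carrying the same IID is "the same" interface to COM, no matter which assembly compiled it. A small sketch; the GUID below is made up for illustration.)

```csharp
// COM-style identity by agreed-upon UID: two independently compiled
// definitions of this interface are treated as the same contract because
// they declare the same Guid. (The IID here is illustrative, not real.)
using System;
using System.Runtime.InteropServices;

[ComImport]
[Guid("B2F0F7A1-0000-4000-8000-000000000000")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IMyComPlugin
{
    void DoWork();
}
```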
Your argument doesn't make sense to me.  You're trying to say it's a bad
approach because it has forced compiler writers to agree on calling
conventions and function table layouts?

No, I'm just saying it can't be compared to the .NET approach because nobody
in the C++ world is required to play by any rules. If it happens to work, it
happens to work, and if it doesn't happen to work it's the programmer's task
to clean up. .NET just gives you things that will work, and it will protect
you from the things that wouldn't work. The designers will naturally be
inclined to be conservative with such an outlook, instead of giving you
everything that could have been useful.
Personally I think the .NET designers simply avoided the issue out of
the idea that they had more important concerns to address first. I
doubt it was a philosophical decision to make two items which are
identical in memory incompatible just because the definitions of those
types are contained in different places (leading to different fully
qualified names), unlike the decision to avoid multiple inheritance
(except that it is support for interfaces...)
Well, sure, they didn't *make* them incompatible (except indirectly), but
the critical difference with C++ is that it would have required actual
*work* (instead of just aligning things a bit and having it fall into place
by virtue of the object layout, like the C++ developers could do -- things
are more flexible there).

The objects aren't "identical in memory". The *interfaces* are identical,
the objects implementing them presumably are not. The interfaces don't have
an in-memory representation, though, except by virtue of the vtable layout
(or the CLR equivalent, really). Unlike C++, you'll need to write down when
two types are and are not compatible, and the runtime will need to check
things before you ever make a call. And yes, they didn't actually build a
system that included marshalling by signature.

I'd say it's not so much avoiding the issue as it is thinking that the system
they came up with sufficed.  You're right inasmuch as the words "plugin
system" probably didn't come up during the design phase, though, or they
would have made some accommodations (matching up types is just one of the
problems; unloading plugins is another one of those things).
 
W

WTH

Jon Skeet said:
All of those situations mean you've got the code in two places. That
doesn't sound like good practice to me.

?  Do you mean it's defined twice?  But that's the whole point.  Define it
where and when you need it; don't force people to have it defined all the
time, as you would if they had to include an assembly to get one interface
and that assembly contained 30 other interface definitions.  The code is
still there.  If you simply put one interface per assembly, you're only
including it in the developer's code once, but then they have to have all the
assemblies which they 'may' use...
No need. Just put each plugin in its host's assembly, and reference
that. If you have many plugins, you have complexity whatever you do.
Duplicating source code just makes that worse, rather than better.

What on earth are you talking about?  Put each plugin in its host's assembly?
Did you mean interface?  If you meant interface, that's a very strange way
to deploy an SDK.  What if your application/host requires licensing?
What if it has a complicated configuration?  Serious deployment
dependencies?  Et cetera.  You're now forcing a plugin developer to install
each application component (a service, for example) which has a plugin
capability on his development box.  Simultaneously, you have then forced
each application component to have its own access to the base plugin
interface assembly; ergo, if you don't plan to force the plugin developer to
install absolutely every component on his/her box in order to simply build a
plugin, you have to include the binary as a dependency with each component
you offer as available for individual installation.

It's totally unwieldy in the case above, and an all or nothing approach. I
can see how in some cases that can be fine, and there are cases where I
wouldn't care about this, but there are lots of cases where it makes things
worse than if you didn't have to do it.
But code duplication is A-OK by you is it? Sorry, I really don't buy
it.

You mean multiple definitions of the interface? Do you not realize that
this happens no matter what method you choose? Are you worried about typos
or something? lol.
When somebody re-uses a class that someone else wrote, do you consider that
code duplication and you don't "buy it"?
The latter is *much* better IMO.

Well, in my experience in supporting the exact scenarios I have described,
it is far easier to develop, support, and extend via the former.
I'm afraid I see having to duplicate source code as a much bigger wart
than compiling against a fixed binary.

You seem to have latched onto this as your only reasoning as to why C#
cannot discern that two interfaces are identical. I'd hate to think of how
you share code with others, presumably you just send them an assembly and if
there's a problem with it, they have to ask you to fix it and ship them a
new assembly. Hopefully you're sitting around with nothing to do when this
happens. Here we can't do that. If there's a bug in an SDK client helper
class, the sdk developer can modify it because he has the file, not the
assembly. He doesn't have to wait for a new version of the SDK to come out
in 6 months to a year.

WTH
 
W

WTH

Reflection is just dandy, but it's no basis for a type system of a
statically typed language.

Why?  Because C# sits on IL, and due to the nature of boxing and the manner
in which reference types are handled in C#, 'static typing' is technically
correct but a bit of a misnomer, in that ultimately the types are only
strongly typed after compilation (interfaces being the prime example -
they're identical interfaces until compiled into what I would call 'overly'
strongly typed types ;)...)
It's a "hey, if you can't do it that way, there's always reflection, I
mean, if you have to" solution. And for sure, you can solve the problem
you've got with reflection too. It would be horrible, but you could do it.

I know, that's why I would love to not have to do it ;).
Ah, I see what you're getting at now. You're not interested in unifying
the types, you want the runtime to transparently marshal between these
nominally different types, after recognizing that this is possible. This
is essentially what you achieve with COM by saying "yes, I implement this
interface with this UID too", but then you want it by structural
compatibility, not an agreed-upon UID.

That's what I think after thinking about this for a few hours, yes :).
"Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn" sounds less scary.

lol... I would agree. It has been a long time since I had to use DirectX 3
from C, unnnnghhghgh....
No, I'm just saying it can't be compared to the .NET approach because
nobody in the C++ world is required to play by any rules.

Not required by the language specification you mean. There's plenty of
other rules people have to abide by in order to write a compiler for a given
OS if they want it to be accepted.
If it happens to work, it happens to work, and if it doesn't happen to
work it's the programmer's task to clean up. .NET just gives you things
that will work, and it will protect you from the things that wouldn't
work. The designers will naturally be inclined to be conservative with
such an outlook, instead of giving you everything that could have been
useful.

Well, sure, they didn't *make* them incompatible (except indirectly), but
the critical difference with C++ is that it would have required actual
*work* (instead of just aligning things a bit and having it fall into
place by virtue of the object layout, like the C++ developers could do --
things are more flexible there).

The objects aren't "identical in memory". The *interfaces* are identical,
the objects implementing them presumably are not. The interfaces don't
have an in-memory representation, though, except by virtue of the vtable
layout (or the CLR equivalent, really). Unlike C++, you'll need to write
down when two types are and are not compatible, and the runtime will need
to check things before you ever make a call. And yes, they didn't actually
build a system that included marshalling by signature.

The interface is represented in memory and certainly should be identical
except for its metadata.  It isn't instanced.  That makes me wonder if
instead of an interface I should define a class containing delegates
representing what would normally be an interface, and then have the
plugin system match up the delegates.  Sort of 'roll your own' interface
methods.  I would much rather use interfaces, but if it saves me from
deploying more binaries, I'm probably all for it.  I don't usually like to
vary too far from the accepted approach though, which is why I'm asking about
this stuff here...  I'll just have to ultimately weigh all the factors and
choose either to do it the 100% normal C# way, a hybrid COM approach, or do
it myself...  If it was just me using it and one product using it, I wouldn't
blink at the assembly-based method, but it's not.
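(A minimal sketch of that "class containing delegates" idea: host and plugin share only a naming convention, and the loader wires delegates up by reflection. All member names here are illustrative, not part of any real SDK.)

```csharp
// "Roll your own interface": the host defines a holder class of delegates
// and binds them, by name, to public methods on the plugin object. No
// interface type is shared between the two assemblies.
using System;
using System.Reflection;

public sealed class PluginHandle   // defined in the host only
{
    public Func<string> GetName;   // bound to a public plugin method "GetName"
    public Func<Guid> GetId;       // bound to a public plugin method "GetId"

    public static PluginHandle Bind(object pluginInstance)
    {
        Type t = pluginInstance.GetType();
        return new PluginHandle
        {
            GetName = (Func<string>)Delegate.CreateDelegate(
                typeof(Func<string>), pluginInstance, t.GetMethod("GetName")),
            GetId = (Func<Guid>)Delegate.CreateDelegate(
                typeof(Func<Guid>), pluginInstance, t.GetMethod("GetId")),
        };
    }
}
```

After binding, calls go through the delegates at normal delegate-invocation speed; the cost is that a mismatch surfaces at bind time rather than at compile time.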
I'd say it's no so much avoiding the issue as it is thinking that the
system they came up with sufficed. You're right inasmuch that the words
"plugin system" probably didn't come up during the design phase, though,
or they would have made some accommodations (matching up types is just one
of the problems; unloading plugins is another one of those things).

I hope Micro$oft is in this one for the long run. I'm hoping that all
client side work will be in C# at our company going forward (except for the
fact that they've detached DirectX 10 from Managed code, I don't understand
that one...)

What I wouldn't give to be able to run on *nix with .NET w/o all the
restrictions found in MONO today...

Thanks for your points and suggestions by the way.

WTH :)
 
J

Jon Skeet [C# MVP]

WTH said:
? Do you mean it's defined twice?

Yes - the source code for the interface is present twice.
But that's the whole point.

That's duplication, which it's good practice to avoid.
Define it
where and when you need it, don't force people to have it defined all the
time like you would if they have to include an assembly to get one interface
and that assembly contains 30 other interface definitions. The code is
still there. If you simply put one interface per assembly you're only
including in the developers code once but then they have to have all the
assemblies which they 'may' use...

No, they only have to reference the assemblies they *do* use.
What on earth are you talking about? Put each plugin in its host's assembly?
Did you mean interface?

Yes, I meant the interface.
If you meant interface, that's a very strange way
to deploy an SDK. What if your application/host requires licensing?
What if it has a complicated configuration? Serious deployment
dependencies?

Who said anything about configuring it or installing it? You don't need
to do either of those things just to *reference* it.

Of course, if it becomes a problem for any reason, you can separate out
the interface into its own assembly - no problem.

You mean multiple definitions of the interface? Do you not realize that
this happens no matter what method you choose? Are you worried about typos
or something? lol.
When somebody re-uses a class that someone else wrote, do you consider that
code duplication and you don't "buy it"?

No, I don't consider that code duplication. The source code only lives
in one place. Changes to it only have to occur in one place, then get
deployed everywhere it's used. I consider that better than having the
source code everywhere that the interface would be used. I'm rather
glad I don't have to include the interface code for IDisposable,
Well, in my experience in supporting the exact scenarios I have described,
it is far easier to develop, support, and extend via the former.

Your experience which is limited in C#, by your own admission.
You seem to have latched onto this as your only reasoning as to why C#
cannot discern that two interfaces are identical.

Nope - it's just *one* good reason. It's a good enough one for me.
Various other reasons are also available, but I only need the one.
I'd hate to think of how
you share code with others, presumably you just send them an assembly and if
there's a problem with it, they have to ask you to fix it and ship them a
new assembly.

I suspect that most people who use my open source library only want or
need the DLL. Those who want to change the source can do so - but I'd
expect they'd usually still build to a DLL and reference that.
Assemblies are the units of code reference in .NET - not source code.
Hopefully you're sitting around with nothing to do when this
happens. Here we can't do that. If there's a bug in an SDK client helper
class, the sdk developer can modify it because he has the file, not the
assembly. He doesn't have to wait for a new version of the SDK to come out
in 6 months to a year.

No, but he then has to patch it every time there's a new release of the
SDK. Meanwhile, he may have introduced other bugs. The whole thing's a
versioning nightmare.

I don't expect to convince you, to be honest.
 
J

Jesse McGrew

What on earth are you talking about? Put each plugin in its host's assembly?
Did you mean interface? If you meant interface, that's a very strange way
to deploy an SDK. What if your application/host requires licensing?
What if it has a complicated configuration? Serious deployment
dependencies? Et cetera.

Er... how is the plugin developer going to *test* his plugin if he
doesn't have the host application installed? I would think that
licensing, configuration, and deployment are issues that you need to
solve anyway, if you want to actually write a plugin that works.

Jesse
 
W

WTH

Jon Skeet said:
Yes - the source code for the interface is present twice.


That's duplication, which it's good practice to avoid.

Ridiculous. How come you aren't arguing the very same point about having to
"add a reference to the appropriate assemblies to each and every project"?
That's duplication too. That is just as open to being forgotten or done
incorrectly, et cetera. You're creating straw man arguments. At least by
letting the developer define the interface you can accomplish things you
can't by forcing people to include them in assemblies as I've explained.
No, they only have to reference the assemblies they *do* use.

So now you're discussing the 'many assemblies' option. Ok, so instead of a
3rd party developer simply opening up the SDK documentation and adding the
interface definitions they need directly to their already existing code
base, you think it's wiser to have them obtain all the assemblies they need,
place them somewhere, and then add references for each one to their project?
I can assure you that our developers here would prefer simply adding the
interface themselves. Heck, unless each of the potentially many
assemblies was deployed on every single developer's box, you'd have to make
them available by network share (which has issues itself - not related to
assemblies but to code sharing.) There's all kinds of potentially messy
issues regarding versioning and code sharing amongst developers on a team as
well. If one developer hasn't gotten the very latest assembly being used by
a common utility class, their code breaks.  Now you can get around things
like this by enforcing a check-in process that is strict (and we do this
already), but the question isn't if it's possible to do this, the question
is why go through all this extra crap? There's no logical reason (that I've
come up with or heard) that explains why you can't marshal one interface
into another. It seems like it would be pretty easy to do, just make it an
attribute on your interface something like 'Can be marshalled'...
Yes, I meant the interface.


Who said anything about configuring it or installing it? You don't need
to do either of those things just to *reference* it.

You don't need to instance the application, but you can't just drop the
application or application components of a real product wherever you want.
You planning on writing/debugging/supporting another installer that involves
the host and/or sub-component necessary in order to reference? You going to
ensure that the new installer doesn't include dependencies? Updating an
interface now means someone must perform an installation/migration?

Listen, I can understand your reticence to find fault with something you've
obviously accepted as 'the way', but let me describe a real software suite
that has real complexity and real issues that you seem to be ignoring.

Imagine a security product that has a thick display client. The product has
a middleware layer. The product has a centralized storage back end. The
product has sensors and devices (logical software constructs that represent
real sensors and devices.) The product has a large number of potential 3rd
party integration points. There's a team of 12 people working on it broken
up into 3 primary groups - Client, Engine, Sensors. There are teams of
developers in several places inside and outside the US that write code to
these integration points (some of which are TCP/IP based, some HTTP based,
some are plugins.) This product is released approximately every 6 months
for a minor revision and every 18 months for a major revision. There are
many large corporations, military installations, and research groups using
this product. The client consists of ~13 projects that aggregate into a
small set of applications. The middleware consists of ~47 projects that
aggregate into sets of services and aggregatable components. The
centralized storage system has ~3 projects that represent major components
plus a primary project. The sensors system has ~27 in house projects for
components, 5 of which represent services or major components. The client
system has 3 integration points which support plugins. The middleware has
dozens of integration points that are plugins, tied to services and the
occasional host application; there are probably only 3 different interfaces
used across the middleware components, but the middleware can be deployed in
many different fashions depending upon what parts of the product you pay
for. The backend and sensors have 2 and 1 plugin integration points
respectively. The system is designed so that each aspect,
client/middleware/sensors (but not storage) can migrate independently.

Now, given a rough scope of this 'product suite', can you understand how I
find it vastly simpler and less complex if a developer who wants to write a
plugin for our system simply defines the interface, rather than the
rather complicated scenario of managing all the possible permutations of
deploying components, hosts, and assemblies, organizing the assemblies,
versioning them, et cetera...
Of course, if it becomes a problem for any reason, you can separate out
the interface into its own assembly - no problem.



No, I don't consider that code duplication.

I'm sure you don't but it is the same thing.
The source code only lives
in one place

You're making assumptions about how it would be used. In a software company
people tend to check out source from a repository and they have a local
copy. It is archived in one place, but it's duplicated across the network.
If you find that to be splitting hairs, I remind you that the way I would
like to do interfaces is exactly the same. You just seem to consider 'that'
code duplication.
. Changes to it only have to occur in one place, then get
deployed everywhere it's used. I consider that better than having the
source code everywhere that the interface would be used. I'm rather
glad I don't have to include the interface code for IDisposable,


Your experience which is limited in C#, by your own admission.

My experience certainly covers everything that we've been talking about,
lol.
Nope - it's just *one* good reason. It's a good enough one for me.
Various other reasons are also available, but I only need the one.

How many people work on the same product where you are? You sound like you
don't have to worry about too much complexity (that's not a slam, that's a
"I wish that were me" :))
I suspect that most people who use my open source library only want or
need the DLL. Those who want to change the source can do so - but I'd
expect they'd usually still build to a DLL and reference that.
Assemblies are the units of code reference in .NET - not source code.

I would suggest that it is the way for relatively small projects. There are
other aspects of C# and VS 2005 (haven't tried 2008 yet) that indicate a
small team focus (i.e. adding a file to your project copies it by default
instead of linking it...)
No, but he then has to patch it every time there's a new release of the
SDK. Meanwhile, he may have introduced other bugs. The whole thing's a
versioning nightmare.

Patch what? He can simply grab the updated class or interface. It's not
all globbed together in an assembly.
I don't expect to convince you, to be honest.

I do appreciate the discussion though :). I hear .NET 3.5 will make changes
to the way in which plugin systems can work.

WTH
 
W

WTH

No need. Just put each plugin in its host's assembly, and reference
Er... how is the plugin developer going to *test* his plugin if he
doesn't have the host application installed? I would think that
licensing, configuration, and deployment are issues that you need to
solve anyway, if you want to actually write a plugin that works.

Generally speaking you have a test harness. You write one to help offload
work from the always understaffed and overwrought QA team to developers who
can use it to run regression tests. Now, you don't need a test harness, I'm
just prone to making QA happy, and most enterprise products are distributed.
For example, I wouldn't install the middleware layer on my development box
so I could build a plugin for it. I'd build a plugin for it, and install it
on one of the development installations (we have several and QA has dozens.)
Personally, I do the test harness method when I can.

WTH
 
J

Jon Skeet [C# MVP]

WTH said:
Ridiculous. How come you aren't arguing the very same point about having to
"add a reference to the appropriate assemblies to each and every project"?

Well, where possible they should refer to either a prebuilt version, or
the latest version built from source control, which other projects can
also reference. You still have a single point of truth, within the
confines of branches. You don't have each plugin project deciding what
its idea of the interface is.
That's duplication too. That is just as open to being forgotten or done
incorrectly, et cetera. You're creating straw man arguments. At least by
letting the developer define the interface you can accomplish things you
can't by forcing people to include them in assemblies as I've explained.

I believe your solution introduces extra complexity, complexity which
is worth avoiding.
So now you're discussing the 'many assemblies' option.

Um, yes, because that's the one you were writing about. When you start
by saying "If you simply put one interface per assembly" then that's
what I'll address. It's not like I've changed the topic.
Ok, so instead of a
3rd party developer simply opening up the SDK documentation and adding the
interface definitions they need directly to their already existing code
base, you think it's wiser to have them obtain all the assemblies they need,
place them somewhere, and then add references for each one to their project?

For each one they need, yes.
I can assure you that our developers here would prefer simply adding the
interface themselves.

Maybe that's just because that's what they're used to - it doesn't mean
it's a good preference.
Heck, unless each of the potentially many
assemblies was deployed on every single developer's box, you'd have to make
them available by network share (which has issues itself - not related to
assemblies but to code sharing.)

So sharing via cut and paste is okay, but sharing by copying files has
issues? Doesn't sound like a terribly productive environment.
There's all kinds of potentially messy
issues regarding versioning and code sharing amongst developers on a team as
well.

Which are avoided by each developer deciding to have their own
potentially different version of an interface? That sounds like it's
*introducing* messy issues rather than avoiding them.
If one developer hasn't gotten the very latest assembly being used by
a common utility class, their code breaks.

If one developer hasn't gotten the very latest copy of the SDK
interface, their code would break, wouldn't it?
Now you can get around things
like this by enforcing a check-in process that is strict (and we do this
already), but the question isn't if it's possible to do this, the question
is why go through all this extra crap? There's no logical reason (that I've
come up with or heard) that explains why you can't marshal one interface
into another. It seems like it would be pretty easy to do, just make it an
attribute on your interface something like 'Can be marshalled'...

I just don't see the benefit, I really don't. It sounds like it's
encouraging what I still regard as bad practice.

You don't need to instance the application, but you can't just drop the
application or application components of a real product wherever you want.

You can reference assemblies without installing them, if you want to
build against them but not actually run the code.

But hey, if you don't want to use that technique, don't. There are
numerous ways to go:

o All plugin interfaces in a single assembly
o Plugin interfaces in the same assemblies as their hosts
o One assembly per plugin interface
o A hybrid, e.g. 5 assemblies with 4 interfaces each for 20 plugin
interfaces - group appropriately.
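(For concreteness, all of the options above boil down to the same pattern Jon is advocating: the interface is compiled exactly once, both sides reference that binary, and the cast succeeds because the runtime sees a single type identity. A minimal sketch, with all names assumed:)

```csharp
// Standard shared-assembly plugin pattern: IPlugin is compiled ONCE and
// referenced by host and plugins alike, so the runtime sees one identity
// and the cast just works.
using System;
using System.Reflection;

public interface IPlugin            // the one shared contract
{
    string Name { get; }
}

public static class PluginLoader
{
    // Scan an already-loaded assembly for the first concrete IPlugin type.
    public static IPlugin FindPlugin(Assembly asm)
    {
        foreach (Type t in asm.GetTypes())
        {
            if (t.IsClass && !t.IsAbstract && typeof(IPlugin).IsAssignableFrom(t))
                return (IPlugin)Activator.CreateInstance(t);  // single identity: cast succeeds
        }
        throw new InvalidOperationException("No IPlugin implementation in " + asm.FullName);
    }
}
```

In a real host the assembly would come from `Assembly.LoadFrom(dllPath)`; the cast fails with an InvalidCastException precisely when the plugin compiled its own copy of IPlugin, which is the situation this whole thread started from.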
You planning on writing/debugging/supporting another installer that involves
the host and/or sub-component necessary in order to reference? You going to
ensure that the new installer doesn't include dependencies? Updating an
interface now means someone must perform an installation/migration?

If you update the interface, that means code changes however you're
referencing the code. With appropriate versioning, using assembly
referencing is more robust than just deciding that an interface may be
suitable to be marshalled. If the appropriate versions aren't present,
there'll be a clear type load error, rather than things going wrong in
a possibly mysterious and hard to diagnose way.
Listen, I can understand your reticence to find fault with something you've
obviously accepted as 'the way'

Try reading the same paragraph back to yourself. I've used both ways in
the past, and found duplicating source code a far inferior solution to
referring to built binaries.
but let me describe a real software suite
that has real complexity and real issues that you seem to be ignoring.

I believe you're ignoring the complexity and issues which are involved
in *your* way of doing them. Either this is because you've learned to
work your way round them, or they happen not to have bitten you - but
that doesn't mean they aren't there.
Imagine a security product that has a thick display client. The product has
a middleware layer. The product has a centralized storage back end. The
product has sensors and devices (logical software constructs that represent
real sensors and devices.) The product has a large number of potential 3rd
party integration points. There's a team of 12 people working on it broken
up into 3 primary groups - Client, Engine, Sensors. There are teams of
developers in several places inside and outside the US that write code to
these integration points (some of which are TCP/IP based, some HTTP based,
some are plugins.) This product is released approximately every 6 months
for a minor revision and every 18 months for a major revision. There are
many large corporations, military installations, and research groups using
this product. The client consists of ~13 projects that aggregate into a
small set of applications. The middleware consists of ~47 projects that
aggregate into sets of services and aggregatable components. The
centralized storage system has ~3 projects that represent major components
plus a primary project. The sensors system has ~27 in house projects for
components, 5 of which represent services or major components. The client
system has 3 integration points which support plugins. The middleware has
dozens of integration points that are plugins, tied to services and the
occasional host application; there are probably only 3 different interfaces
used across the middleware components, but the middleware can be deployed in
many different fashions depending upon what parts of the product you pay
for. The backend and sensors have 2 and 1 plugin integration points
respectively. The system is designed so that each aspect,
client/middleware/sensors (but not storage) can migrate independently.

And you want each plugin to have its own possibly changed and possibly
out of date copy of its plugin's interface? That sounds like madness to
me.
Now, given the rough scope of this 'product suite', can you understand how I
find it vastly simpler and less complex if a developer who wants to write a
plugin for our system simply defines the interface, rather than having to
manage all the possible scenarios for deploying components, hosts, and
assemblies, organizing the assemblies, versioning them, et cetera...

No, I don't understand that - because there's more room for the
developers to get things out of sync, or foul them up themselves.

If you want to get as close as possible to a single interface in a single
source file, just use a single interface in a single assembly. Yes, you get
quite a few small assemblies - but there's no significant harm in that,
and you get the benefit of knowing exactly which version of the plugin
interface you're using. If it's compiled into the plugin itself, it
gets a lot more complicated in my view.
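One way to sketch the single-interface-per-assembly arrangement (names are illustrative, following the thread's IPlugin example): the interface lives in its own small contracts project, and both the host and every plugin reference the built PluginContracts.dll instead of copying the source file:

```csharp
using System;
using System.Reflection;

// PluginContracts.dll - the only place IPlugin is ever compiled.
namespace Common.Utilities.PluginSystem
{
    public interface IPlugin
    {
        string Name { get; }
        Guid Id { get; }
    }
}

// Host.exe - references PluginContracts.dll, as does MyPlugin.dll,
// so both sides share exactly one IPlugin type identity.
Assembly asm = Assembly.LoadFrom("MyPlugin.dll");
foreach (Type t in asm.GetTypes())
{
    if (!t.IsAbstract && typeof(IPlugin).IsAssignableFrom(t))
    {
        // The cast is now safe: there is only one IPlugin.
        IPlugin plugin = (IPlugin)Activator.CreateInstance(t);
        Console.WriteLine("Loaded plugin: " + plugin.Name);
    }
}
```

The price is one extra small assembly to deploy alongside the host, but in exchange every participant knows exactly which version of the contract it was built against.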
I'm sure you don't, but it is the same thing.

Sounds like we'll have to agree to differ.
You're making assumptions about how it would be used. In a software company
people tend to check out source from a repository and they have a local
copy. It is archived in one place, but it's duplicated across the network.
If you find that to be splitting hairs, I remind you that the way I would
like to do interfaces is exactly the same. You just seem to consider 'that'
code duplication.

That's more than splitting hairs, it's coming up with a straw man
scenario. There's still a single point of truth - the source control
system. (Or rather, there's a single point of truth per branch.) As far
as each developer is concerned, the working copies of other developers
don't exist until the code is checked in. That's very different from
each project having a checked in copy of code coming from elsewhere.
My experience certainly covers everything that we've been talking about,
lol.

Sounds to me like you're just so used to one way of working that you've
been blinded to its downsides, and you're reluctant to change.
How many people work on the same product where you are? You sound like you
don't have to worry about too much complexity (that's not a slam, that's a
"I wish that were me" :))

Considering the relatively small size of the company, there's more
complexity than I'd like - managing several branches with various
dependent projects. Not the sort of scale you mention above, but enough
to know that it would be worse if we had independent copies of bits of
code floating around.
I would suggest that it is the way for relatively small projects. There are
other aspects of C# and VS 2005 (haven't tried 2008 yet) that indicate a
small team focus (i.e. adding a file to your project copies it by default
instead of linking it...)

That's not a small team focus - that's a focus of "code should only
live in a single project".

For some file types, it's appropriate to share - keys for signing
assemblies spring to mind. For actual code, I'd rather make sure I only
have a single copy anywhere *and* that it's only built into a single
project.
Patch what? He can simply grab the updated class or interface. It's not
all globbed together in an assembly.

The developer has made a local modification to his copy of the file.
When a new release comes out, he has to see whether that local
modification is still appropriate, and merge it in with the new
version.
I do appreciate the discussion though :). I hear .NET 3.5 will make changes
to the way in which plugin systems can work.

Well, it has a plugin infrastructure - but I very, very seriously doubt
that it will change this aspect of it. Good job too, IMO.