Zach
I was reading about partial methods in the upcoming C# 3.0 with a
friend, and we were having a discussion about their usefulness.
On one hand, they seem to provide a reasonable way for a class to be
extended at compile time by users of the class, and as pointed out in
some articles and papers they can act as lightweight event handlers.
On the other hand, they are in my eyes almost exactly a strict subset
of the delegate/event model already provided by C# and the .NET
Framework.
The idea is that you can do something like this:
partial class Foo
{
    partial void SomeEventHandler(int i);

    int VeryExpensiveMethod() { ... }

    public void Method()
    {
        SomeEventHandler(VeryExpensiveMethod());
    }
}
Now, this code would exist in a library somewhere and a user of the
library would simply type the following code:
partial class Foo
{
    partial void SomeEventHandler(int i) { /* Do something with i */ }
}
If that code is not present, then Foo.Method() would essentially
compile down to a completely empty method. The advantage over using
the standard Strategy design pattern (e.g. virtual methods) is that
the call to VeryExpensiveMethod() is compiled out entirely, since all
this information is known at compile time.
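To make that concrete, here is a sketch (based on my reading of the
proposed semantics, using the names from the example above) of what
should happen when no implementing declaration exists anywhere in the
class:

```csharp
// Sketch: with no implementing declaration of SomeEventHandler in any
// part of the class, the compiler removes the defining declaration,
// the call site, AND the argument expression. VeryExpensiveMethod()
// is therefore never invoked, not even for its side effects.
partial class Foo
{
    partial void SomeEventHandler(int i); // defining declaration only

    int VeryExpensiveMethod()
    {
        System.Console.WriteLine("expensive work"); // never runs in this case
        return 42;
    }

    public void Method()
    {
        // Erased at compile time, argument and all:
        SomeEventHandler(VeryExpensiveMethod());
    }
}
```

So Method() ends up with an empty body, and even the side effects of
evaluating the argument disappear.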
But the advantage over using the delegate/event model is minimal,
almost non-existent. Consider if the class had been written like
this:
class Foo
{
    public delegate void SomeEventHandler(int i);
    public event SomeEventHandler OnSomeEvent;

    int VeryExpensiveMethod() { ... }

    public void Method()
    {
        if (OnSomeEvent != null)
            OnSomeEvent(VeryExpensiveMethod());
    }
}
This is almost equivalent, the only obvious differences being that in
the delegate/event case, OnSomeEvent will always be checked for null,
and the method will never compile down to an empty method.
But... have they really designed an entire language feature and
keyword usage around one micro-optimization? The latter version is
actually more maintainable and clearer IMO, and definitely more
flexible. The only price is a single null reference comparison, plus
whatever overhead the Delegate or MulticastDelegate classes incur for
the actual dispatch.
Am I missing something here? Is there some elegant design pattern
that this can be used with? Partial -classes- actually solved a real
problem, namely that generated classes could not be customized
without re-generating the code clobbering the customizations.
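For reference, the generated-code scenario looks something like this
sketch (all names here are hypothetical, not any real generator's
output), with a partial method serving as the hook the generator
exposes:

```csharp
// Generated file: the tool can regenerate this freely. The hook costs
// nothing unless the user chooses to implement it.
partial class Customer
{
    partial void OnNameChanging(string value); // extension hook

    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            OnNameChanging(value); // erased entirely if unimplemented
            _name = value;
        }
    }
}

// User file: survives regeneration of the file above.
partial class Customer
{
    partial void OnNameChanging(string value)
    {
        if (string.IsNullOrEmpty(value))
            throw new System.ArgumentException("Name is required");
    }
}
```

The same shape could of course be expressed with an event on the
generated class instead of a partial method.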
But partial methods seem to solve nothing unless you're writing
extremely performance-intensive code, in which case I would argue
that maybe you shouldn't be using C# in the first place.
Thoughts?