changing access modifier of base method

jonpb

Hi,
I have implemented an InputBox dialogue. I would like to change the
access modifier of the Form.Show method from public to private, but the
following code does not do that. What else do I need to do? Thanks.


public partial class InputBox : Form
{
    private new void Show()
    {
    }

    private new void Show(IWin32Window owner)
    {
    }
}
 
Jon Skeet [C# MVP]

jonpb said:
I have implemented an InputBox dialogue. I would like to change the
access modifier of the Form.Show method from public to private, but the
following code does not do that. What else do I need to do? Thanks.

There's nothing you can do. The way that inheritance works, a caller
should be able to treat an instance of your class as an instance of
Form - including calling the Show method. If you don't want to expose a
Show method, you shouldn't derive from Form.
 
Nicholas Paldino [.NET/C# MVP]

You might be able to get away with hiding it (using the "new" keyword)
but that won't prevent someone from casting your instance to a Form and then
calling the Show method on that.

There is no really good way to hide the method other than encapsulation
(have an instance of the Form class as a member of your class, and only
expose the members that you want).
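
For illustration, a minimal sketch of that encapsulation approach (the
InputBox name and the title/prompt parameters here are hypothetical):

    using System;
    using System.Windows.Forms;

    // Composition instead of inheritance: Form.Show never appears on InputBox.
    public class InputBox : IDisposable
    {
        private readonly Form form = new Form();

        public DialogResult ShowDialog(string title, string prompt)
        {
            form.Text = title;
            // ...lay out a prompt Label, a TextBox and OK/Cancel buttons here...
            return form.ShowDialog();
        }

        public void Dispose()
        {
            form.Dispose();
        }
    }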
 
Fredo

You can't change the scope of an existing method.

There's really no way to do it at all with a control-based class. With most
classes, you can create a proxy and omit the methods you wish to hide, but
with a Control-derived class, you'll lose vital functionality by not
inheriting from something in the Control hierarchy.

Why are you trying to hide the methods? Maybe there's another way to handle
your problem.
 
jonpb

Fredo said:
Why are you trying to hide the methods? Maybe there's another way to handle
your problem.
The object is a dialogue, so ShowDialog should be used at all times; it
makes no sense to call Show. On top of that, I overloaded ShowDialog to
initialize the title and prompt in the InputBox, so in fact I would
like to hide the base ShowDialog as well.
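
(For illustration, a sketch of such an overload - promptLabel and the
parameter names are hypothetical, not the poster's actual code:)

    using System.Windows.Forms;

    public partial class InputBox : Form
    {
        private Label promptLabel; // assumed to be created in InitializeComponent

        // Hypothetical overload: set up the dialogue, then show it modally.
        public DialogResult ShowDialog(string title, string prompt)
        {
            Text = title;
            promptLabel.Text = prompt;
            return ShowDialog(); // the inherited parameterless ShowDialog
        }
    }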

Once again, I disagree with the hard-line definitions of OOP. I see no
philosophical reason to disallow changing access modifiers on base
methods, especially in this case where the base is a general purpose
class and the derived is a specialization of the base. But I guess
that's just me. The argument that there is nothing stopping the user
from casting to Form and calling Show is, in this case, invalid, kind of
like saying, in C++, that you can change the value of a reference: sure
you can, but who in their right mind ever would?

Anyways, as for this class, the purpose of hiding the methods is for
completeness; no one will ever use them, and it's kind of an annoyance to
see them in IntelliSense.
 
Fredo

jonpb said:
The object is a dialogue, so ShowDialog should be used at all times; it
makes no sense to call Show. On top of that, I overloaded ShowDialog to
initialize the title and prompt in the InputBox, so in fact I would like
to hide the base ShowDialog as well.

Once again, I disagree with the hard-line definitions of OOP. I see no
philosophical reason to disallow changing access modifiers on base
methods, especially in this case where the base is a general purpose class
and the derived is a specialization of the base. But I guess that's just
me. The argument that there is nothing stopping the user from casting to
Form and calling Show is, in this case, invalid, kind of like saying, in
C++, that you can change the value of a reference: sure you can, but who
in their right mind ever would?

I don't know that I disagree with you, but then I'm the kind of person who
won't hesitate to use reflection to call stuff "I shouldn't" if it gets me
where I need to go.

But there's just no way to do what you want and even if there was, there's
always reflection...

Pete
 
Scott Roberts

jonpb said:
The object is a dialogue, so ShowDialog should be used at all times; it
makes no sense to call Show. On top of that, I overloaded ShowDialog to
initialize the title and prompt in the InputBox, so in fact I would like
to hide the base ShowDialog as well.

Once again, I disagree with the hard-line definitions of OOP. I see no
philosophical reason to disallow changing access modifiers on base
methods, especially in this case where the base is a general purpose class
and the derived is a specialization of the base. But I guess that's just
me. The argument that there is nothing stopping the user from casting to
Form and calling Show is, in this case, invalid, kind of like saying, in
C++, that you can change the value of a reference: sure you can, but who
in their right mind ever would?

Anyways, as for this class, the purpose of hiding the methods is for
completeness; no one will ever use them, and it's kind of an annoyance to
see them in IntelliSense.

I disagree with the notion that a derived class need not be a super-set of
the base class functionality. A derived class *must* implement the
functionality of the base class, and *may* provide additional functionality
as well. This is simply a definition - call it "hard-line" if you want.

That said, there are a few things you can do (short of creating a
wrapper class, which is technically what you should do) to *try* to prevent
other developers from "misusing" your class.

A) Throw an exception inside any methods which should not be used. Assuming
the developer at least runs the code one time, s/he will find the error.

B) Use the "obsolete" attribute to mark the method as, well, obsolete.
Adding a "true" parameter will cause the compiler to emit an error if/when
the method is used. (This will not work if, say, someone casts a reference
to Form and calls the method, but "why would anyone do that", right?)

C) Add XML comments to the methods to let the developer(s) know (in
IntelliSense) that the method should not be used.
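
A minimal sketch combining (A) and (B), assuming an InputBox derived from
Form:

    using System;
    using System.Windows.Forms;

    public partial class InputBox : Form
    {
        // (B) error = true makes any call through an InputBox-typed
        //     reference a compile-time error.
        [Obsolete("InputBox must be shown modally; use ShowDialog instead.", true)]
        public new void Show()
        {
            // (A) Fail at runtime if the method is reached anyway. Note that
            //     a caller holding a Form reference bypasses this entirely
            //     and gets Form.Show instead.
            throw new InvalidOperationException(
                "InputBox must be shown with ShowDialog.");
        }
    }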
 
Peter Duniho

The object is a dialogue, so ShowDialog should be used at all times; it
makes no sense to call Show. On top of that, I overloaded ShowDialog to
initialize the title and prompt in the InputBox, so in fact I would
like to hide the base ShowDialog as well.

The suggestion for using composition is likely to be a reasonable one
here. It's true that for Control-derived classes this often is
impractical, but for a dialog form class, there's so little likelihood that
the code creating your instance will need to treat the instance as
anything but the class you've defined that you may not run into any
problems at all with composition. And in fact, given the apparent express
desire on your part that the instance _never_ be treated as anything other
than your defined class, this may in fact be the _most_ desirable approach.

If you really want to inherit a Form class rather than compositing it, you
could override the OnShown() method and do your initialization there,
based on some properties you expose in your derived class. You could even
allow those properties to affect the state of the form once it's been
shown (depending on how you implement the properties, it could even be
easier to allow that than to not allow it).

At the same time in the OnShown() method, you can check to see if the form
was shown modally, and throw an exception if it was not.

Those techniques address both issues without requiring you to be able to
hide the original Show() or ShowDialog() methods.
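
A sketch of that idea (the Title and Prompt property names are
illustrative):

    using System;
    using System.Windows.Forms;

    public partial class InputBox : Form
    {
        // Properties consumed when the form is shown.
        public string Title { get; set; }
        public string Prompt { get; set; }

        protected override void OnShown(EventArgs e)
        {
            base.OnShown(e);

            // Form.Modal is true only when the form was shown via ShowDialog.
            if (!Modal)
                throw new InvalidOperationException(
                    "InputBox must be shown with ShowDialog.");

            Text = Title;
            // e.g. promptLabel.Text = Prompt; for an assumed Label on the form
        }
    }
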
Once again, I disagree with the hard-line definitions of OOP. I see no
philosophical reason to disallow changing access modifiers on base
methods,

It's been explained to you. It's unfortunate that you disagree with the
explanation, but it's a valid, correct explanation nonetheless.
especially in this case where the base is a general purpose class and
the derived is a specialization of the base.

Actually, the fact that the base _is_ a general purpose class is what
makes it so important that the original method can't be changed.
But I guess that's just me. The argument that there is nothing stopping
the user from casting to Form and calling Show is, in this case,
invalid, kind of like saying, in C++, that you can change the value of a
reference: sure you can, but who in their right mind ever would?

I don't understand the comparison to the C++ situation of changing the
value of a reference. It's nothing like this situation. It's not clear
you understand this, but keep in mind that a reference in C# is not the
same as a reference in C++. It's more like a pointer in C++.

The basic issue, as was explained before, is that once you've created an
instance of your class that inherits Form, you can pass the reference to
that instance to any other code that accepts a type compatible with your
class. In particular, you can pass your instance to anything that accepts
a Form, and that's perfectly legal.

More importantly, it's not just legal, it's extremely common in an OOP
environment and is in fact a key feature around which OOP is built. One
of the key elements of OOP is being able to not worry about the actual
type of the instance of an object, but rather to treat it as whatever
minimally-derived type is appropriate in the given context. It should not
be necessary, and is often impossible, for code using an instance via a
less-derived type to know anything about the more-derived types of the
class.

So, say you create your class and then pass it to something that only
knows about the base class. Say then that code calls the base class
method that you tried to hide or overload. Now all that hard work you
went to in order to try to hide the methods was for naught. Because your
class _is_ still a Form, anything that treats it as a Form will always
have access to the things in the Form class, no matter what you do. Not
only is this a useful thing, a lot of what makes OOP work just wouldn't
work right without it. It would break the whole idea of abstraction if
you could hide methods inherited in derived classes from code that knows
only about the base classes.

Now, for what it's worth, there are OOP environments in which you have a
little more control. You still can't hide a method, but you can ensure
that your code always gets called, because the language has _only_ virtual
methods. For example, Java. This way, you could implement your own
ShowDialog() overload, and throw exceptions from the Show() and original
ShowDialog() methods. You still can't stop someone from writing code that
calls those methods, but you can make it impossible for them to use your
class when they do so.
Anyways, as for this class, the purpose of hiding the methods is for
completeness; no one will ever use them, and it's kind of an annoyance to
see them in IntelliSense.

Well, the compositing approach will solve the IntelliSense issue. I'm not
sure, but I suppose it's possible there's a code attribute you could apply
that will hide the methods in IntelliSense as well (but of course that
would only have an effect when you're dealing with an expression that is
your derived type).

Frankly, I think that concern about what is shown in IntelliSense is one
of the worst possible reasons to complain about some specific aspect of
OOP design. There are quite a lot of subtle disagreements regarding the
"best" or "appropriate" way to do things in OOP, but I don't think any
serious disagreement could be based on something so trivial. If you don't
like the way IntelliSense presents the information, then the answer is to
argue in favor of changing the way IntelliSense works, rather than
complaining about fundamental aspects of the language design.

Pete
 
jonpb

Peter said:
It's been explained to you. It's unfortunate that you disagree with the
explanation, but it's a valid, correct explanation nonetheless.

Look, I know all about the Liskov principle, and I don't disagree with
anything you say in your long justification for it, but your arguments
talk about the general. There are indeed specific scenarios, albeit
rare, in everyday programming where a derived class represents a
specialization of a base class in which some base class functions don't
make sense to expose. C++, and I think Java, allow you to override the
access modifiers of inherited methods and narrow the access. As far as
casting to Form goes, this is a freakin' InputBox, man, no one's going to
be casting it to Form and passing it around. When you need input from
the user, you instantiate an instance and then dispose of it; it's that simple.

I didn't write the Form class; Microsoft did. How many times have you
written a form that acts both like a modal and a modeless interface? I
don't think I ever have, so why does Form have both Show and ShowDialog?
Because it's a general purpose class.
I don't understand the comparison to the C++ situation of changing the
value of a reference. It's nothing like this situation. It's not clear
you understand this, but keep in mind that a reference in C# is not the
same as a reference in C++. It's more like a pointer in C++.
Come on, the only point I was making was that just because you can do
something doesn't mean you should do it.
Frankly, I think that concern about what is shown in IntelliSense is
one of the worst possible reasons to complain about some specific aspect
of OOP design.

Jeez, you really like to dumb people down, don't you?


And, btw, there's no way in hell I'm going to use composition just so I
can hide the "obsolete" methods and stay within the draconian confines
of one particular definition of OOP. The suggestion makes perfect
theoretical sense, but is so ridiculously impractical I wouldn't even
entertain the idea.
 
Peter Duniho

Look, I know all about the Liskov principle, and I don't disagree with
anything you say in your long justification for it, but your arguments
talk about the general.

As they must. The design of an OOP language must be general, since you
don't know when you're designing the language what it will be used for.

You can't design a language around one specific rare situation.
There are indeed specific scenarios, albeit rare, in everyday
programming where a derived class represents a specialization of a base
class in which some base class functions don't make sense to expose.

Well, I simply disagree. By definition, a derived class includes all of
the base class's behavior. Where a class has allowed you to _override_
the behavior, you have a bit of wiggle room, but otherwise if you don't
want the base class's behavior, don't inherit it.
C++, and I think Java, allow you to override the access modifiers of
inherited methods and narrow the access.

Whether they do or not isn't relevant. The point is that even if they do,
there's nothing to stop any code from using a reference to the derived
class as if it were a reference to the base class. At that point, the
base class access modifiers apply.

You can't do what you're asking for even in C++ or Java (assuming access
modifiers can be overridden...C++ is too long ago for me to remember, and
Java's too new to me for me to know).
As far as casting to Form goes, this is a freakin' InputBox, man,
no one's going to be casting it to Form and passing it around. When you
need input from the user, you instantiate an instance and then dispose of
it; it's that simple.

You're right. The usage of the instance _is_ simple. So just composite
it and be done with it.

Why should every other more complex scenario have to wind up broken just
to support a very easy simple scenario?
I didn't write the Form class; Microsoft did. How many times have you
written a form that acts both like a modal and a modeless interface? I
don't think I ever have, so why does Form have both Show and ShowDialog?
Because it's a general purpose class.

So much of the functionality is the same, it would not have made sense to
make it two completely different classes.

Now, they might have made a "ModalForm" class, for example, that inherits
the Form class (I don't think the other way around would have made
sense). But the ModalForm class would still wind up with a Show()
method. So I don't see what your objection to the single class being
allowed both possible uses is. How does this relate to your original
objection at all?
Come on, the only point I was making was that just because you can do
something doesn't mean you should do it.

The statement "just because you can do something doesn't mean you should
do it" is true. But how does it apply here? We're not talking about
something that exists solely "because you can do it". It exists for a
very real, important feature of an OOP language.

So, again...I don't see what changing a C++ reference has to do with this
issue.
Jeez, you really like to dumb people down, don't you?

I have no idea what you mean. I don't intentionally make people look
dumb, if that's what you mean. If it happens, it's because they did it
themselves.
And, btw, there's no way in hell I'm going to use composition just so I
can hide the "obsolete" methods and stay within the draconian confines
of one particular definition of OOP. The suggestion makes perfect
theoretical sense, but is so ridiculously impractical I wouldn't even
entertain the idea.

Ah, yes. Stubbornness. A wonderful way to design software. Good luck
with that.

Pete
 
Jon Skeet [C# MVP]

Now, for what it's worth, there are OOP environments in which you have a
little more control. You still can't hide a method, but you can ensure
that your code always gets called, because the language has _only_ virtual
methods. For example, Java.

No, Java has non-virtual methods too - you use the "final" modifier to
seal a method. It's just that unfortunately Java's methods are virtual
by default.

(I think C# classes ought to be sealed by default as well, but that's a
whole other discussion...)
 
Jon Skeet [C# MVP]

jonpb said:
Look, I know all about the Liskov principle, and I don't disagree with
anything you say in your long justification for it, but your arguments
talk about the general. There are indeed specific scenarios, albeit
rare, in everyday programming where a derived class represents a
specialization of a base class in which some base class functions don't
make sense to expose. C++, and I think Java, allow you to override the
access modifiers of inherited methods and narrow the access.

You certainly can't in Java. Java allows you to *widen* the access, but
not narrow it.

You have to ask yourself: what do you expect to happen if someone
writes the following?

    Form f = new YourForm();
    f.Show();

You can't possibly stop that from being valid code while YourForm
derives from Form.
As far as
casting to Form goes, this is a freakin' InputBox, man, no one's going to
be casting it to Form and passing it around. When you need input from
the user, you instantiate an instance and then dispose of it; it's that simple.

You don't have to cast it in order to use it as a Form - see the above
code, which doesn't have a cast in it.
And, btw, there's no way in hell I'm going to use composition just so I
can hide the "obsolete" methods and stay within the draconian confines
of one particular definition of OOP. The suggestion makes perfect
theoretical sense, but is so ridiculously impractical I wouldn't even
entertain the idea.

Why not? If all you expect people to do (as you've said) is create an
instance and then dispose of it, that's very easy to encapsulate
separately.

Look, if you want to go against well known principles and best
practices that's up to you - but you shouldn't expect the language to
help you do it.
 
Peter Duniho

No, Java has non-virtual methods too - you use the "final" modifier to
seal a method. It's just that unfortunately Java's methods are virtual
by default.

Ah, right.

I keep getting tripped up by "final" in Java. Maybe it's just me, but the
exact meaning of the keyword seems to vary quite a bit according to
context. It seems to be usable in places where I don't expect the thing
to even possibly be virtual, so I forget that it's used to "un-virtual"
something. :)

Anyway, sorry for the error. It is true that by default, methods are
virtual and so are overridable. I've seen very few instances of "final"
methods, though no doubt they exist in select places.

Pete
 
Jon Skeet [C# MVP]

Ah, right.

I keep getting tripped up by "final" in Java. Maybe it's just me, but the
exact meaning of the keyword seems to vary quite a bit according to
context. It seems to be usable in places where I don't expect the thing
to even possibly be virtual, so I forget that it's used to "un-virtual"
something. :)

It's got 3 meanings that I can think of, two of which are very
similar:

1) Equivalent to readonly for variables, including local variables
2) Equivalent to sealed on a class
3) Equivalent to sealed on a method
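
In C# terms, the rough counterparts would be something like the sketch
below (the mapping only; C# has no equivalent of final for local
variables, and sealed on a method applies only to an override):

    public sealed class Cache                  // 2) final class  -> sealed class
    {
        private readonly int capacity;         // 1) final field  -> readonly field

        public Cache(int capacity)
        {
            this.capacity = capacity;
        }

        public int Capacity { get { return capacity; } }
    }

    public class Animal
    {
        public virtual void Speak() { }
    }

    public class Dog : Animal
    {
        public sealed override void Speak() { } // 3) final method -> sealed override
    }
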
Anyway, sorry for the error. It is true that by default, methods are
virtual and so are overridable. I've seen very few instances of "final"
methods, though no doubt they exist in select places.

Yes - it shows the power of defaults. Many developers - including
myself - will often use the defaults even if we'd make a different
conscious decision. Indeed, it would be interesting to see a language
where you *had* to make a decision for these things - with every
method having to have sealed/virtual, every variable having to have
readonly/writable, every class having to have sealed/virtual etc,
every member having to have an explicit access modifier.

It would be a pain to write in such a language, but probably
instructive.

Jon
 
Peter Duniho

It's got 3 meanings that I can think of, two of which are very
similar:

1) Equivalent to readonly for variables, including local variables
2) Equivalent to sealed on a class
3) Equivalent to sealed on a method

I have seen it used as an equivalent to "const". That is, according to at
least some Java articles I've read, variables declared with a constant
assignment don't even wind up allocating space. The compiler just
hard-codes the value in the code, like a const declaration in other
languages. I'm not sure if this counts as a different meaning than your
#1... I'm not even sure it's an actual meaning (I've found varying
quality in the Java resources I've seen :) ).

I saw a page on the Java web site that had a long list of the different
contexts in which "final" could be used. It's possible (probable?) that
the number of different semantic meanings was much less, given the overlap
between inheritance behaviors in different members. But I think there
might be more than 3 different meanings. :)
Yes - it shows the power of defaults. Many developers - including
myself - will often use the defaults even if we'd make a different
conscious decision. Indeed, it would be interesting to see a language
where you *had* to make a decision for these things - with every
method having to have sealed/virtual, every variable having to have
readonly/writable, every class having to have sealed/virtual etc,
every member having to have an explicit access modifier.

It would be a pain to write in such a language, but probably
instructive.

Well, even though I know it doesn't really matter from a performance
perspective, I still prefer non-virtual methods over virtual because of
the slight overhead of virtual methods. It's hard to know for sure until
you're in that position, but I believe that I'd opt for making everything
sealed by default, and only virtual where I specifically want it.

In addition to the performance issue, I also think there's a legitimate
design reason for making sealed the default. Most of my classes contain
many more private members than public. Lots of little helper functions,
for example, that will never be overridden. Sure, I could explicitly make
those all "final" in Java, but what a pain that would be.

Likewise, even for most of the public methods, I don't intend for the
methods to be overridden. I suppose there's less reason to be picky
there, but I think it can still be dangerous. IMHO, allowing a
member to be overridden requires some forethought as to how that member
might be overridden.

So to some extent, on top of my performance bias, for me it comes down to
the frequency of one or the other. Since my virtual members are almost
always outnumbered by a large margin by the sealed members, making
virtual the default doesn't make sense. Either I wind up with stuff
that's virtual and really shouldn't be (which is what's happening to me in
Java right now), or I waste a lot of time explicitly making things sealed.

Even in my so-far-brief foray into Java, I can see some value in making
everything virtual by default. It leads to very flexible inheritance. On
the other hand, it leads to very flexible and sometimes unpredictable
inheritance too. :) I'm not sure that encouraging developers to make
everything virtual is really the way to go. It might be fine for people
who live and breathe OOP and love the infinite possibilities. But for
someone like me who basically uses OOP, or any language for that matter,
just as a means to some other end (like getting my program to work :) ),
it just seems to get in the way and make things riskier.

Pete
 
Jon Skeet [C# MVP]

I have seen it used as an equivalent to "const". That is, according to at
least some Java articles I've read, variables declared with a constant
assignment don't even wind up allocating space. The compiler just
hard-codes the value in the code, like a const declaration in other
languages. I'm not sure if this counts as a different meaning than your
#1... I'm not even sure it's an actual meaning (I've found varying
quality in the Java resources I've seen :) ).

That's a specialist meaning of #1 really. And the space is still there
at runtime, but any references to it are hardcoded. So if I have:

public class Foo
{
    public static final int SOME_SHOUTY_CONSTANT = 1;
}

and then:

public class Bar
{
    void X()
    {
        int x = Foo.SOME_SHOUTY_CONSTANT;
    }
}

then that's exactly equivalent (in Bar.X) to

    int x = 1;

The Foo class still has SOME_SHOUTY_CONSTANT available at execution
time, but references to it are resolved at compile-time. This is
exactly the same as "const" in C# - with the attendant issues, too.
I saw a page on the Java web site that had a long list of the different
contexts in which "final" could be used. It's possible (probable?) that
the number of different semantic meanings was much less, given the overlap
between inheritance behaviors in different members. But I think there
might be more than 3 different meanings. :)

Hmm... I *suspect* all of those extra meanings fit into my main 3,
just in different ways - but I'd have to see the list to check.

Well, even though I know it doesn't really matter from a performance
perspective, I still prefer non-virtual methods over virtual because of
the slight overhead of virtual methods.

It definitely matters from a performance perspective on .NET. It
doesn't matter in Java with HotSpot, because that is capable of
inlining a virtual method until it sees something overriding it, and
then undoing the optimisation.
It's hard to know for sure until
you're in that position, but I believe that I'd opt for making everything
sealed by default, and only virtual where I specifically want it.

I'd agree with that, but for design reasons rather than performance
reasons.
In addition to the performance issue, I also think there's a legitimate
design reason for making sealed the default. Most of my classes contain
many more private members than public. Lots of little helper functions,
for example, that will never be overridden. Sure, I could explicitly make
those all "final" in Java, but what a pain that would be.

C# did the right thing when it came to methods. Shame about classes :(

(I think private methods are final by default in Java though - you
certainly can't make private methods virtual in C#.)
Likewise, even for most of the public methods, I don't intend for the
methods to be overridden. I suppose there's less reason to be picky
there, but I think it can still be dangerous.

I'd say there's much *more* reason to be picky about public methods -
you're defining a contract. You can change your private stuff later
without fear of breaking things, which isn't true of public methods.
IMHO, allowing a member to be overridden requires some forethought
as to how that member might be overridden.

Exactly. See http://msmvps.com/blogs/jon.skeet/archive/2006/03/04/inheritancetax.aspx
for more on my view of this.
So to some extent, on top of my performance bias, for me it comes down to
the frequency of one or the other. Since my virtual members are almost
always outnumbered by a large margin by the sealed members, making
virtual the default doesn't make sense. Either I wind up with stuff
that's virtual and really shouldn't be (which is what's happening to me in
Java right now), or I waste a lot of time explicitly making things sealed.

For me, the most appropriate default is the one which can do the least
damage. Even if I were to want more public methods than private
methods, I like the default being private in C#. If you make a method
more private than it should be, you tend to find out about it quickly
because people can't access it. If you make a method more *public*
than it should be, you'll only find out about it when you want to
change it later and find that it's part of a contract.
Even in my so-far-brief foray into Java, I can see some value in making
everything virtual by default. It leads to very flexible inheritance. On
the other hand, it leads to very flexible and sometimes unpredictable
inheritance too. :) I'm not sure that encouraging developers to make
everything virtual is really the way to go. It might be fine for people
who live and breathe OOP and love the infinite possibilities. But for
someone like me who basically uses OOP, or any language for that matter,
just as a means to some other end (like getting my program to work :) ),
it just seems to get in the way and make things riskier.

When methods are virtual, you bleed implementation details. If I have
two virtual methods, X and Y, and X calls Y, I have to document that -
because otherwise someone might override Y to just call X. That may
look legitimate and *be* legitimate with an alternative
implementation, but in that particular implementation it would lead to
a stack overflow.
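
A tiny sketch of the hazard (names are illustrative):

    public class Base
    {
        // Undocumented implementation detail: X delegates to Y.
        public virtual void X() { Y(); }
        public virtual void Y() { /* real work */ }
    }

    public class Derived : Base
    {
        // Looks legitimate if you don't know X calls Y, but now
        // X -> Y -> X recurses until the stack overflows.
        public override void Y() { X(); }
    }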

I don't like bleeding implementation details, thus I don't like having
too many virtual methods.

The one benefit I can see of making everything virtual is that it
makes testing easier. I don't think the cost is worth the benefit
though. (Many TDDers see the benefit, but I don't believe they all
consider the cost.)

Jon
 
Scott Roberts

(I think private methods are final by default in Java though - you
certainly can't make private methods virtual in C#.)


I'd say there's much *more* reason to be picky about public methods -
you're defining a contract. You can change your private stuff later
without fear of breaking things, which isn't true of public methods.

Seems like it might be nice if the default "virtual-ness" of a method
depended on its accessibility. I would think that private methods would not
ever be virtual. Protected methods probably should be virtual by default,
otherwise why are they protected? Public may be debatable, but you guys make
good points.
 
Jon Skeet [C# MVP]

Seems like it might be nice if the default "virtual-ness" of a method
depended on its accessibility. I would think that private methods would not
ever be virtual. Protected methods probably should be virtual by default,
otherwise why are they protected? Public may be debatable, but you guys make
good points.

There's plenty of reason to have non-virtual protected methods - you
can still *call* them from derived classes, even if you can't override
them.

As for private methods being virtual - they can't be at the moment,
but there's *potentially* a use for them with nested classes, which
have access to private methods of their parent classes. If you had:

class Outer
{
    private virtual void Foo() { } // hypothetical: private virtual isn't legal C# today

    class Nested : Outer
    {
        private override void Foo() { ... }
    }
}

But that's certainly an exceptional case :)

Jon
 
