"All public methods should be virtual" - yes or no / pros & cons

  • Thread starter: Ken Brady
Implementation and interface though, be it for a single class or for [...]
It does. It makes clients independent of
the implementation. They only depend on
the interface.

It is not the physical separation of interface and implementation that makes
clients independent of a class's implementation, it is you not messing with
the interface that makes clients independent of the implementation.
I rarely ever change a header.
(I'd be dead within a week if I did change
interfaces half as often as I change their
implementations. People would be queueing
at the table tennis room while their
machines are locked with the compiler
running, and they'd have a _lot_ of time to consider
what bad things to do to me. :o> )

And your point is? Whether you change interfaces a lot or not highly depends
on the phase of the project and on your role in it. If your interfaces are
established, good for you.
Although I think it is possible, it is uncommon to lock
files using CVS. I have never done it and haven't heard
from anyone doing it here.

I am not sure what you mean with CVS. When I say "lock" I am referring to a
source control system like SourceSafe, MKS or PVCS.

You should lock both interface and implementation to make sure no one else
will change the interface while you are working on the implementation. They
are one. I know that it is unlikely someone will, I was just making the
point that interface and implementation are logically one (we are still
talking source code for a class).
You're kidding, aren't you?
Suppose I have a class 'X', used in just
about every part of a big project. Now I
found a bug in 'X::f()' and need to fix
that. Why would I need to lock/change the
interface?

Just lock, not change.
I don't know C# and I don't know its
namespace system. What I know is this: If
I cause everybody here to recompile just
because I fix some implementation, I'd
be in serious trouble.

We don't have a disagreement, you just misunderstood what I was saying.
Joining interface and implementation in one file does not break the
interface when you update the implementation.

Martin.
 
I like a separate file with an overview of what your interface for a class
is (speaking about implementation of classes). Now that I code Java
sometimes, I really miss the headers, and the feeling doesn't go away.
The problem is that such an overview cannot always be generated from just the
source file, because the source isn't stable and may contain errors that prevent
it from being generated. A sidebar with an alphabetically sorted list of members
and such is, to my feeling, not enough. A header file that groups functionality
and members gives me a nicer overview.

I haven't done any substantial C# project yet, so I haven't felt it yet, but I
may feel the same in the near future. I hope I will just get used to it; the
syntax has become clearer in the way that things are declared locally, with
fewer surprises or pitfalls. But I agree the overall code image seems to have
suffered. For one thing, I prefer this:

TMyThingy = class(TObject)
private
  Field1: integer;
  Field2: string;
  procedure method1();
  procedure method2();
protected
  procedure method3();
public
  constructor Create;
  destructor Destroy;
  procedure method4();
end;


over this:


TMyThingy = class(TObject)
private Field1: integer;
private Field2: string;
private procedure method1();
private procedure method2();

protected procedure method3();

public constructor Create;
public destructor Destroy;
public procedure method4();
end;


The sections are lost, it's not pretty! :-(

Martin.
 
Martin Maat said:
[...] It makes clients independent of
the implementation. They only depend on
the interface.

It is not the physical separation of interface and implementation that makes
clients independent of a class's implementation, it is you not messing with
the interface that makes clients independent of the implementation.

So when you check in a C# file that you
have changed without changing anything
of the interface, does everybody have
to recompile the dependent modules?
I rarely ever change a header.
[...]

And your point is? [...]

Why should I lock a header if I won't
change it???
I am not sure what you mean with CVS. When I say "lock" I am referring to a
source control system like SourceSafe, MKS or PVCS.

CVS, the version control system.
(www.cvshome.org)
After years of trouble with VSS, we
switched to CVS. I was not happy about
that decision back then (although I
wanted to get rid of VSS as soon as
possible). But I learned to like CVS.
You should lock both interface and implementation to make sure no one else
will change the interface while you are working on the implementation. They
are one. I know that it is unlikely someone will, I was just making the
point that interface and implementation are logically one (we are still
talking source code for a class).

As I said, I never lock anything. If
anyone changed anything I was working
on, I merge this into my local version
(and test it) before I check in the
file(s).
[...]
Suppose I have a class 'X', used in just
about every part of a big project. Now I
found a bug in 'X::f()' and need to fix
that. Why would I need to lock/change the
interface?

Just lock, not change.

Why would I need to lock the interface?
[...]
Joining interface and implementation in one file does not break the
interface when you update the implementation.

I don't know if C# prevents recompilation
of dependent modules if you change a module
without changing any declarations. Maybe it
does so.
However, with the interface/implementation
separated, you have a nice check to make
sure you didn't change anything accidentally,
as your implementation most often won't
compile if you did.
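For instance (a minimal sketch; the class, file and function names are made up):

// X.h -- the only thing clients ever include
class X
{
public:
    int f(int n);   // the interface, declared exactly once
};

// X.cpp -- the implementation, invisible to clients
#include "X.h"

int X::f(int n)
{
    return n * 2;   // fixing a bug here touches no interface at all
}

// If I accidentally changed the signature here, say to
//     int X::f(long n) { ... }
// it would no longer match any declaration in X.h and the file
// would not compile -- that's the check I mean.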


Schobi

--
(e-mail address removed) is never read
I'm Schobi at suespammers dot org

"Sometimes compilers are so much more reasonable than people."
Scott Meyers
 
It is not the physical separation of interface and implementation that makes [...]
So when you check in a C# file that you
have changed without changing anything
of the interface, does everybody have
to recompile the dependent modules?

If they care for the update they may want to recompile. If they don't, they
will not recompile. I'm not sure I understand what you are worried about.
Why should I lock a header if I won't change it???

To make sure that once you check in your updated implementation and unlock
your interface, the result will be consistent. You will be sure the
interface that matches the implementation has not been changed.
CVS, the version control system.
(www.cvshome.org)

Are you dyslexic by any chance? Ah, CVS stands for "Concurrent Versions
System". What was so bad about Visual SourceSafe?
As I said, I never lock anything. If anyone changed anything I was working
on, I merge this into my local version (and test it) before I check in
the file(s).

Oh, that's nice. Why not keep everything on your local machine and ship that
version once it is time to release?

You are proving my point. If you do lock the stuff that belongs together no
one can change anything while you're working on it.
Why would I need to lock the interface?

I give up, you win.
I don't know if C# prevents recompilation
of dependent modules if you change a module
without changing any declarations. Maybe it
does so.

You seem awfully afraid of recompilation. Could you be working on this one
big monolithic application in which everything depends on everything for
which any change will cause a chain reaction in the C++ compiler, rendering
your machine useless for 20 minutes? Could it be someone influential in your
company declared "Build All" the only way to compile and had the compile
option removed from all of your Visual Studio installations? I don't see the
problem, recompilation of one class should not be an issue.

Martin.
 
Martin Maat said:
[...]
So when you check in a C# file that you
have changed without changing anything
of the interface, does everybody have
to recompile the dependent modules?

If they care for the update they may want to recompile. If they don't, they
will not recompile. I'm not sure I understand what you are worried about.

I am worried about having to recompile
files that depend on the interface of a
module when only its implementation
changed.
To make sure that once you check in your updated implementation and unlock
your interface, the result will be consistent. You will be sure the
interface that matches the implementation has not been changed.

I am sure, since I merge any changes
into my code before I check it in.
Are you dyslexic by any chance?
???

Ah, CVS stands for "Concurrent Versions
System". What was so bad about Visual SourceSafe?

We needed to run analyze every night and
still lost changes since it didn't repair
everything the clients messed up. Branching
is a PITA in VSS. Remote access, too.
AFAIK, MS doesn't use VSS for their own
software. And I can understand this.
Oh, that's nice. Why not keep everything on your local machine and ship that
version once it is time to release?

Because that would hurt.
We have dedicated build machines for
this. If I need a release, a script
will do a clean check out on a label,
build this version, run some tests,
and build the installer.
The result will eventually be shipped.
You are proving my point. If you do lock the stuff that belongs together no
one can change anything while you're working on it.

Since you haven't even heard of CVS,
how do you think you can judge the way
people work with it? The way I described.
(Have you ever looked at SourceForge?
Can you imagine this being done using
VSS? With the developers spread all
over the world?)
I give up, you win.

No, it was a serious question.
You seem awfully afraid of recompilation. Could you be working on this one
big monolithic application in which everything depends on everything for
which any change will cause a chain reaction in the C++ compiler, rendering
your machine useless for 20 minutes? Could it be someone influential in your
company declared "Build All" the only way to compile and had the compile
option removed from all of your Visual Studio installations? I don't see the
problem, recompilation of one class should not be an issue.

Currently I work on a 700kLOC+ app. Of
this, 500kLOC+ were written in-house. And
that's not old, longish C code. All C++,
the oldest code ~5 years old, much of it
done one year ago; a lot of care went into
preventing redundancy and dependencies,
and into clean modularization. (We test
modules individually, after all). Yet, a
full rebuild of all the projects involved
takes ~60min. If I ever wanted to rebuild
all the test projects, too, it would take
another few hours.
This is one of those projects, where, if
you are going to do some major changes to
some part of it, you first write a test
program for this, then do your changes,
and test them in the main app only after
you are sure the (re-)design is done and
you found most of the mistakes you made.
It simply is faster this way.

Recompilation of a class' implementation
is no issue. OTOH, changing the interface
of a class is something not done without
consideration in such a project, as it
forces all dependent modules to be re-
compiled.
I definitely wouldn't want to do such
an app using a language where a change in
the implementation of one class forces a
recompilation of all modules depending
on its interface.

Schobi

--
(e-mail address removed) is never read
I'm Schobi at suespammers dot org

"Sometimes compilers are so much more reasonable than people."
Scott Meyers
 
First, virtual methods do not come free, they perform worse than non-virtual ones.

This is a generalized statement regarding an *IMPLEMENTATION* of the language,
not the language itself.
Another thing. If something is declared virtual, that is a statement on the
part of the designer. It implies some generic behavior that may need to be
altered somehow for any derived class in order to obtain the desired...

It doesn't/shouldn't imply anything. If the user relies on implications, they
are going to have problems with their code regardless of the construct at hand.

Remember, that virtual doesn't mean that the overriding method has to do
*anything*, it may be simply tracking, monitoring, or triggering a response.
These are all *VALID* uses of virtual, and have absolutely NOTHING to do w/
changing behavior.
Imagine every public method of a fairly complex class being virtual. Most of
them will implement fixed behavior that is not supposed to be overridden.

If they MUST NOT be overridden for **ANY** reason, then they shouldn't be
virtual. If they shouldn't be overridden in such a way as to change behavior,
they should be virtual and that fact should be in the documentation.
It would only invite developers to screw things up and they would not
understand what is expected of them.

Then document it better. Like I said, if they have to rely on implications or
assumptions, they are going to screw it up, regardless of virtuality.

** Don't protect me from myself **
 
Bret Pehrson said:
This is a generalized statement regarding an *IMPLEMENTATION* of the language,
not the language itself.

Can you tell us about any implementation
where this isn't true? Or even only
describe how such an implementation would
work?
It doesn't/shouldn't imply anything. If the user relies on implications, they
are going to have problems with their code regardless of the construct at hand.

Remember, that virtual doesn't mean that the overriding method has to do
*anything*, it may be simply tracking, monitoring, or triggering a response.
These are all *VALID* uses of virtual, and have absolutely NOTHING to do w/
changing behavior.

The only reason to make a function virtual
is to allow it to be overridden. Overriding
a function is changing behaviour.
If they MUST NOT be overridden for **ANY** reason, then they shouldn't be
virtual. If they shouldn't be overridden in such a way as to change behavior,
they should be virtual and that fact should be in the documentation.

see above
Then document it better. Like I said, if they have to rely on implications or
assumptions, they are going to screw it up, regardless of virtuality.

If a design expresses its intention, that's
a lot better than having to read a lot of
documentation in order to understand the
intention.
** Don't protect me from myself **

Don't expect us to carefully read documents
that contradict your code.

Schobi

--
(e-mail address removed) is never read
I'm Schobi at suespammers dot org

"Sometimes compilers are so much more reasonable than people."
Scott Meyers
 
Can you tell us about any implementation
where this isn't true? Or even only
describe how such an implementation would
work?

I'm not a compiler writer, nor a hardware design engineer. The fact remains,
the original statement is about an implementation (whether or not it applies to
all current implementations is irrelevant).

With hardware advanced as it is, especially w/ predictive processing/branching,
you can't assume anything about performance.
The only reason to make a function virtual
is to allow it to be overridden. Overriding
a function is changing behaviour.

Not true.

class A
{
public:
    virtual void a()
    {
        // do something
    }
};

class B : public A
{
public:
    virtual void a()
    {
        A::a();                   // keep the base behaviour
        trace("processing a");    // trace() assumed to be some logging helper
    }
};

This doesn't change behavior, but is a very valid and real-world case of
virtual overrides.
Don't expect us to carefully read documents
that contradict your code.

???

I do expect you to carefully read documents that define the behavior, usage,
and intent of my interfaces.


Hendrik said:
[...]
 
And what language do you think will be used most in the CLI world and
where the jobs are :D C#

You don't get it, do you? There are classes of applications for which
C#/Java is either too expensive in terms of ROM/RAM footprint (systems where
the hardware costs are considerably higher than those for software) or
simply the wrong choice (hard realtime systems, device drivers). For these
applications C++ will be *the* language for *at least* another decade. C#
and Java are here to stay and that's a good thing but these languages will
not be able to take over everything from C++.

*Every* language has its pros and cons and language wars are thus just
plain pointless, especially when you argue with beliefs instead of technical
facts...

Regards,

Andreas
 
No but hey, tell that to the employers out there advertising for C# skills
:D

What do I know eh.

Sure, with hard real-time you don't want dynamic memory allocation, obviously,
captain obvious. These are specialized cases; for the more run-of-the-mill
applications C# is perfect, with short project cycles (which are more and more
common), manageability of bugs, and the current drive for more security via
a managed environment.

Of course C++ is still used, but watch C# take the mainstream applications,
with C++ for specific interop and time/memory-critical applications. C# will
be the dominant language in the managed world and C++/CLI for specific needs;
however, this will be kept, and should be kept, to a minimum as the
manageability of that code is too tangled and messy.

A lot of the applications I have worked on are being redesigned and are all
going C#. They are moving away from proprietary languages like VB and Java towards
more standardized ones, for a level playing field and less lock-in from the likes of
MS and Sun.
 
Can you tell us about any implementation where this isn't true? Or even only [...]
I'm not a compiler writer, nor a hardware design engineer. The fact remains,
the original statement is about an implementation (whether or not it applies to
all current implementations is irrelevant).

Ah, "I am ignorant so you can't touch me". The trouble with that philosophy
is that there will be other ignorant but nonetheless interested and eager to
learn people taking note of your blunt and uninformed statements. So please
be a little more cautious.
With hardware advanced as it is, especially w/ predictive processing/branching,
you can't assume anything about performance.

Polymorphism costs, no matter what the technology will be. There are more
entities involved (memory, lookup tables) and more steps to be taken
(processing). You might argue "for my application this is insignificant" or
"I don't care" but no technology is going to equalize the difference.
Not true.

And then you provide an example demonstrating just what you are denying.
I do expect you to carefully read documents that define the behavior,
usage, and intent of my interfaces.

Documentation should be the second line of support. Your code is the first.
The point made is that it isn't bad if your code leaves one wondering, so that one
will fall back onto the documentation. What is bad, though, is when your code
suggests something that isn't true, so that one will think one understands and
will proceed with the wrong idea. Sort of like reading one of your posts and
not reading the responses to it because the post was so sure and
self-confident that the guy obviously knew what he was talking about.

Martin.
 
I just developed a process control system in C# in a matter of months from
design to final shipping.

The performance was well on par with unmanaged code; it was half of what our
requirements were and more than acceptable, actually more than we expected, and
we haven't even done a performance optimization run on it. This is a very
real-world, time-critical application (cycle times in an automated environment
with lots of variables like lighting and continual movement of items).

There is no need for the high risk of unmanaged C++ with the long
development times for this application. C# is more than adequate. This is a
time-critical application: automation, robotics and vision. Cycle times are
very, very important. This just strengthens my confidence in C# as a real
alternative for high-performing, real-world automation.
 
Bret Pehrson said:
[...]
The only reason to make a function virtual
is to allow it to be overridden. Overriding
a function is changing behaviour.

Not true.

class A
{
public:
virtual void a()
{
// do something
}
};

class B : public A
{
public:
virtual void a()
{
A::a();
trace("processing a");
}
};

This doesn't change behavior, but is a very valid and real-world case of
virtual overrides.

This does change behaviour. (And I don't
think I want/need to tell you how.)
???

I do expect you to carefully read documents that define the behavior, usage,
and intent of my interfaces.

I expect to read your headers, see and
recognize common patterns, understand
your identifiers, and use this interface
as it is with as little need for looking
it up in the docs as possible. If you
don't provide that, then that's one darn
good reason to look for another provider.

Schobi

--
(e-mail address removed) is never read
I'm Schobi at suespammers dot org

"Sometimes compilers are so much more reasonable than people."
Scott Meyers
 
Ah, "I am ignorant so you can't touch me".

Try re-reading. My original point was that you can't assume anything about
performance, because it is strictly tied to the implementation (and underlying
hardware).

I have yet to read *anything* in either the C or C++ spec that deals w/
performance. The thread is about the positives and negatives of MI, not
implementation specifics or performance.
And then you provide an example demonstrating just what you are denying.

Nay, my good friend. My example does NOT change the behavior of the
**CLASS**. Perhaps you would care to elaborate as to why you think it does...
Documentation should be the second line of support. Your code is the first.

Your code is the first line if you are working *in the code*. Although not
explicitly defined as such, this thread has been primarily limited to the
presumption that only the interface is available (i.e. header files). My
documentation comments are based on that.

Honestly, with the exception of the performance issue(s) (which I feel don't
belong in this discussion), the only reasons that I've heard against MI are
'poor style' (with absolutely no justification) and 'bad documentation' --
funny thing is that neither of these reasons have anything specifically to do
w/ MI, but are really effects of deeper problems and/or a lack of discipline,
schooling, etc.

Excluding the 'dreaded diamond' issue of MI, can someone substantively say why
MI should *NOT* be part of a language???
 
This does change behaviour. (And I don't
think I want/need to tell you how.)

Then don't post a response! We are (presumably) all here for constructive and
educational reasons, so your statement provides nothing and actually confuses
the issue.

Please elaborate on why you think this changes class behavior. I'll probably
learn something.
I expect to read your headers, see and
recognize common patterns, understand
your identifiers, and use this interface
as it is with as little need for looking
it up in the docs as possible. If you
don't provide that, then that's one darn
good reason to look for another provider.

I'm not following. Maybe my statements weren't clear, but my intention is
this: any well-meaning programmer that produces code potentially for anyone
else (either directly or indirectly), should include complete and correct
documentation.

It is extremely difficult (at best) to determine expected behavior from
prototypes and definitions alone (meaning, no documentation, *NO* comments).
If *you* can take naked prototypes and definitions and understand the usage,
behavior, and characteristics of the interface, then you are in a definite
minority. Personally, I rely heavily on the documentation.

Hendrik said:
[...]
 
Bret Pehrson said:
This is a generalized statement regarding an *IMPLEMENTATION* of the language,
not the language itself.


It doesn't/shouldn't imply anything. If the user relies on implications, they
are going to have problems with their code regardless of the construct at hand.

Remember, that virtual doesn't mean that the overriding method has to do
*anything*, it may be simply tracking, monitoring, or triggering a response.
These are all *VALID* uses of virtual, and have absolutely NOTHING to do w/
changing behavior.
Imagine every public method of a fairly complex class being virtual. Most of
them will implement fixed behavior that is not supposed to be overridden.

If they MUST NOT be overridden for **ANY** reason, then they shouldn't be
virtual. If they shouldn't be overridden in such a way as to change behavior,
they should be virtual and that fact should be in the documentation.
In my opinion they should be nonvirtual and internally call a
protected (private in C++) virtual method that may be overridden. Look up
"Template Method" and "Design by Contract". Making a public method virtual
means that you give your client no guarantee whatsoever as to what will
happen when they invoke the method on a subclass. It also means that you've
made a statement that when I inherit your class I may replace the
functionality of the method. If what you mean can be expressed clearly in
code, you should do so. You shouldn't write code that says one thing and say
another in the documentation.
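A minimal sketch of that idea in C++ (class and method names invented purely
for illustration):

// Template Method / non-virtual interface: the public method is the
// contract, the private virtual is the customization point.
class Document
{
public:
    // Public and non-virtual: this is the contract clients rely on,
    // and no subclass can change it.
    void Save()
    {
        Validate();     // fixed steps a subclass cannot skip
        DoSave();       // the single step a subclass may replace
    }

    virtual ~Document() {}

private:
    void Validate() { /* checks that always happen */ }

    // Private virtual: a subclass can override it, but can neither call
    // it directly nor bypass the surrounding guarantees.
    virtual void DoSave() = 0;
};

class XmlDocument : public Document
{
private:
    virtual void DoSave() { /* write the XML here */ }
};

A client that writes doc->Save() always gets the validation step, no matter what
a subclass does in DoSave(); the code itself says which part is replaceable and
which part is not.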

/Magnus Lidbom
 
Bret Pehrson said:
Then don't post a response! We are (presumably) all here for constructive and
educational reasons, so your statement provides nothing and actually confuses
the issue.

Please elaborate on why you think this changes class behavior. I'll probably
learn something.
Simple, you do *anything* and what your class does changes. If that call
throws an exception you are adding a point of failure; if that call
instantiates objects you are using memory (and perhaps creating a memory
leak); if that call deletes random files you are probably angering users; if
that call changes an object or variable the underlying method uses in
another class you could introduce unexpected bugs, and it all may rain down
on someone else's head because of it. Simply because your call leaves the
instance alone does *NOT* mean that its behaviour isn't changed; only its
state isn't. Behaviour is considerably more than simply what Method A does
to Field B. Your example likely doesn't change the object, but potentially
changes the entire universe the object exists in.
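To make that concrete, imagine a perfectly plausible trace() behind that
one-line override (a hypothetical implementation, purely for illustration):

#include <fstream>
#include <stdexcept>

// A "harmless" trace(): it does I/O, allocates, and can throw -- all
// things a caller of the base class method never had to deal with before.
void trace(const char* msg)
{
    std::ofstream log("trace.log", std::ios::app);
    if (!log)
        throw std::runtime_error("cannot open trace.log");  // new failure mode
    log << msg << '\n';                                      // new side effect
}

Call a() through a base class pointer that actually refers to a B and the caller
now sees exceptions and file system effects that A::a() alone never produced --
the observable behaviour changed even though B never touched a single field.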
[...]
 
Simple, you do *anything* and what your class does changes.

No, no, NO!

Come on, my example doesn't change the behavior of the class -- period. It may
(and very well does) change the state or behavior of the *SYSTEM*, but we are
*not* talking about that.

The point was, is, and will continue to be: virtual methods can be used to
change behavior ***OR*** for other reasons that *DO NOT CHANGE BEHAVIOR*, which
is what I've illustrated.

Daniel O'Connell said:
[...]
 