"All public methods should be virtual" - yes or no / pros & cons

B

Bret Pehrson

In my opinion they should be non-virtual and internally call a
protected (private in C++) virtual method that may be overridden. Look up
"Template Method" and "Design by Contract". Making a public method virtual
means that you give your client no guarantee whatsoever as to what will
happen when they invoke the method on a subclass. It also means that you've
[snip]

You make it sound like the norm is for a programmer to use an existing class
library, add a couple of overridden methods, and then repackage that and
hand/sell it off to someone else.

In most cases, anyone who is overriding a method of an established interface
is ultimately the end user and (provided they are informed and careful
programmers) has no need to adhere to the original 'design by contract', per
se.

We really need to make sure that we don't confuse a designer w/ a user -- I
understand the potential problems w/ a designer who overrides a previous
designer's work, but my discussions have been limited to the end user of an
interface who has the desire to monitor or change behavior (and who will in
most/all cases not subsequently hand the result to someone else).
 
M

Magnus Lidbom

Bret Pehrson said:
Magnus said:
[snip]
We really need to make sure that we don't confuse a designer w/ a user -- I
understand the potential problems w/ a designer that overrides a previous
designer's work, but my discussions have been limited to the end user of an
interface that has the desire to monitor or change behavior (that will in
most/all cases not be subsequently used by someone else).
So you feel that if, at the present moment, you don't expect anyone else to
become a client of your code (let's not get into the odds of that), you should
write code that would be unacceptable if anyone else needs to use it?

You said:
"If they MUST NOT be overridden for **ANY** reason, then they shouldn't be
virtual. If they shouldn't be overridden in such a way as to change
behavior, they should be virtual and that fact should be in the
documentation."

So you feel that one should write code that doesn't express one's intent, and
then document that intent, whenever one doesn't expect the code to be used by
others? My opinion is that you should always write code that expresses your
intent. In this case it's also very simple to do so:

protected virtual void Extension() {}

public void DoStuff()
{
    // do stuff
    Extension();
}
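For what it's worth, the same idea in C++ is the "non-virtual interface" idiom Magnus alludes to with "(private in C++)": the public method is non-virtual and owns the contract, while a private virtual hook is the only override point. A minimal sketch; the class and method names here are invented for illustration, not taken from the original post:

```cpp
#include <string>

class Task {
public:
    std::string DoStuff() {           // non-virtual: the contract lives here
        std::string result = "core";  // invariant work the base class owns
        result += Extension();        // the one sanctioned override point
        return result;
    }
    virtual ~Task() = default;
private:
    virtual std::string Extension() { return ""; }  // default hook: no-op
};

class LoggingTask : public Task {
private:
    std::string Extension() override { return "+logged"; }
};
```

A subclass can customize the hook, but can never bypass or replace the public method's guaranteed steps.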

/Magnus Lidbom
 
M

Martin Maat [EBL]

I have yet to read *anything* in either the C or C++ spec that deals w/
performance. The thread is about the positives and negatives of MI, not
implementation-specific issues or performance.

Aren't you confusing different news threads here? This is about polymorphism
through virtual methods and whether it is appropriate to apply it as a means
of implementing a plug-in mechanism.
My example does NOT change the behavior of the **CLASS**.
Perhaps you would care to elaborate as to why you think it does...

The behavior of the class is changed by sub-classing it. What is the point
of stating that a base class is not changed by the descendant? The way we
look at it, there is no point in creating a descendant class unless either
behavior or data members are changed or extended.
Your code is the first line if you are working *in the code*. Although not
explicitly defined as such, this thread has been primarily limited to the
presumption that only the interface is available (i.e. header files). My
documentation comments are based on that.

For interface definitions it is even more important that they are
self-explanatory and clear on their intended use than it is for
implementation code.
the only reasons that I've heard against MI are 'poor style' (with
absolutely no justification) and 'bad documentation' --

What do you mean by MI? "MI" has not been part of the discussion I have
been participating in.

Martin.
 
M

Martin Maat [EBL]

It is extremely difficult (at best) to determine expected behavior from
prototypes and definitions alone (meaning, no documentation, *NO* comments).
If *you* can take naked prototypes and definitions and understand the usage,
behavior, and characteristics of the interface, then you are in a definite
minority. Personally, I rely heavily on the documentation.

I think you missed the point of the discussion. We are not against
documentation, nor does any of us feel strongly that it should be made
redundant by code at all times. It was about particular design patterns:
whether they could be labeled "misleading code constructs" and whether it is a
good thing to use them or not; to what extent we should assume coders will
recognize these patterns and understand the intent; and whether we should stop
using them in C# now that it has more natural solutions to the
initially targeted problem, like delegates.
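The "more natural solution" mentioned here (delegates in C#) amounts to an explicit callback slot rather than an override point. A hedged C++ sketch of the difference, with all names invented for illustration:

```cpp
#include <functional>
#include <string>

// Instead of subclassing to hook a method, the class exposes an explicit
// notification slot. Clients attach a callback; no virtual method and no
// inheritance are required, and the hook cannot replace Process itself.
class Processor {
public:
    // optional observer; by default nothing is attached
    std::function<void(const std::string&)> on_processed;

    std::string Process(const std::string& input) {
        std::string result = input + ":done";
        if (on_processed)           // fire the hook only if someone listens
            on_processed(result);
        return result;
    }
};
```

Compared with a virtual override, the hook's role is visible in the interface itself, which is exactly the "code that expresses intent" point argued earlier in the thread.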

So when someone says "If the code does not tell the whole story, there's
always the documentation to make up for that", we said that's no good, because
once the code has given us the wrong impression, yet the feeling that we
understand, we are not likely to read the documentation.

Martin.
 
B

Bret Pehrson

What do you mean by MI? "MI" has not been part of the discussion I have
been participating in.

Yep, my mistake. MI discussions are in a different thread, and I confused
them.
 
B

Bret Pehrson

Ya, I've lost interest in the thread.

Thanks for the discussions/comments/interactions. I've learned some new
things, and will apply them to my coding practices.

[snip]
 
N

n!

I'm on a team building some class libraries to be used by many other
projects.

Some members of our team insist that "All public methods should be virtual"
just in case "anything needs to be changed". This is very much against my
instincts. Can anyone offer some solid design guidelines for me?

Thanks in advance....

It's too big to be a space station.

n!
 
B

Bret Pehrson

The behavior of the class is changed by sub-classing it. What is the point
of stating that a base class is not changed by the descendant? The way we
look at it, there is no point in creating a descendant class unless either
behavior or data members are changed or extended.

I agree that overriding to change behavior is half of the story. The other
half is overriding to create/monitor/track an object's methods. I tried giving
an example of such a case before, just to point out that you can't limit
'virtual' discussions to the modification of behavior, since it is perfectly
legitimate to override a method and *not* change any behavior.
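A monitoring-only override of the kind described can be sketched like this; it's a self-contained variant with `trace` stubbed as a counter so the effect is observable (the stub and the `run_through_base` helper are invented for illustration):

```cpp
static int trace_calls = 0;
void trace(const char*) { ++trace_calls; }  // stand-in for a real trace()

class A {
public:
    virtual void a() { /* do something */ }
    virtual ~A() = default;
};

class B : public A {
public:
    void a() override {
        A::a();                  // preserve the base behaviour...
        trace("processing a");   // ...then add monitoring on top
    }
};

// Client code written against the A interface; virtual dispatch decides
// whether the monitored version runs.
int run_through_base(A& obj) {
    obj.a();
    return trace_calls;
}
```

Whether this counts as "not changing behavior" is exactly what the rest of the thread disputes: the result of `a()` is unchanged, but the system now does more per call.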
 
D

Daniel O'Connell [C# MVP]

Bret Pehrson said:
No, no, NO!

Come on, my example doesn't change the behavior of the class -- period. It may
(and very well does) change the state or behavior of the *SYSTEM*, but we are
*not* talking about that.
If your method changes the system, then the behaviour of your class now
includes that change to the system; a class's behaviour is *EVERYTHING*, not
just the class state. Even if it doesn't change the result of the call it
still changes behaviour (and even in your example it *could* change the
result of the call by throwing an exception). It's likely behaviour you
*must* document, and treat as a behaviour change.

Of course, if you don't consider adding the chance of new exceptions, new
ways to fail, or new possible constraints on parameters a change in
behaviour, I think our definitions of the word differ.

In everything but an idealized world, an override should automatically be
considered a change in behaviour. Even if you can guarantee, ironclad, that it
works without any behaviour change, you still leave the chance that a change
*elsewhere* could change behaviour elsewhere (which is part of the problem: a
code change *anywhere* is a change in behaviour, potentially in the entire
system, even if it's not entirely noticeable or anything that actually effects
change).
The point was, is, and will continue to be: virtual methods can be used to
change behavior ***OR*** for other reasons that *DO NOT CHANGE BEHAVIOR*, which
is what I've illustrated.

Daniel O'Connell said:
Bret Pehrson said:
This does change behaviour. (And I don't think I want/need to tell you how.)

Then don't post a response! We are (presumably) all here for constructive
and educational reasons, so your statement provides nothing and actually
confuses the issue.

Please elaborate on why you think this changes class behavior. I'll probably
learn something.
Simple: you do *anything* and what your class does changes. If that call
throws an exception, you are adding a point of failure; if that call
instantiates objects, you are using memory (and perhaps creating a memory
leak); if that call deletes random files, you are probably angering users; if
that call changes an object or variable the underlying method uses in
another class, you could introduce unexpected bugs, and it all may rain down
on someone else's head because of it. Simply because your call leaves the
instance alone does *NOT* mean that its behaviour isn't changed; its state
simply isn't. Behaviour is considerably more than simply what Method A does
to Field B. Your example likely doesn't change the object, but it potentially
changes the entire universe the object exists in.
I expect to read your headers, see and recognize common patterns, understand
your identifiers, and use this interface as it is with as little need for
looking it up in the docs as possible. If you don't provide that, then that's
one darn good reason to look for another provider.

I'm not following. Maybe my statements weren't clear, but my intention is
this: any well-meaning programmer that produces code potentially for anyone
else (either directly or indirectly), should include complete and correct
documentation.

It is extremely difficult (at best) to determine expected behavior from
prototypes and definitions alone (meaning, no documentation, *NO* comments).
If *you* can take naked prototypes and definitions and understand the usage,
behavior, and characteristics of the interface, then you are in a definite
minority. Personally, I rely heavily on the documentation.

Hendrik Schober wrote:

[...]
The only reason to make a function virtual is to allow it to be overridden.
Overriding a function is changing behaviour.

Not true.

class A
{
public:
    virtual void a()
    {
        // do something
    }
};

class B : public A
{
public:
    virtual void a()
    {
        A::a();
        trace("processing a");
    }
};

This doesn't change behavior, but is a very valid and real-world case of
virtual overrides.

This does change behaviour. (And I don't think I want/need to tell you how.)

Don't expect us to carefully read documents that contradict your code.

???

I do expect you to carefully read documents that define the behavior, usage,
and intent of my interfaces.

I expect to read your headers, see and recognize common patterns, understand
your identifiers, and use this interface as it is with as little need for
looking it up in the docs as possible. If you don't provide that, then that's
one darn good reason to look for another provider.

[...]

Schobi

--
(e-mail address removed) is never read
I'm Schobi at suespammers dot org

"Sometimes compilers are so much more reasonable than people."
Scott Meyers
 
M

Martin Maat [EBL]

I agree that overriding to change behavior is half of the story. The other
half is overriding to create/monitor/track an object's methods. I tried giving
an example of such a case before, just to point out that you can't limit
'virtual' discussions to the modification of behavior, since it is perfectly
legitimate to override a method and *not* change any behavior.

Okay, I understand your point now. You are saying it is not a crime to use
polymorphism as an event mechanism. For debugging purposes, as in your
example, I do not see any harm either. As a design pattern, though, I say it
should not be done if a more dedicated event mechanism is available in the
particular development environment.

That summarizes most of the noise we all generated on the subject. :)

Martin.
 
A

Andreas Huber

.. said:
I just developed a process control system in C# in a matter of months
from design to final shipping.

The performance was on par with unmanaged code; cycle time came in at half of
what our requirements demanded, which is more than acceptable. Actually it was
better than we expected, and we haven't even done a performance optimization
run on it. This is a very real-world, time-critical application (cycle times
in an automated environment with lots of variables, like lighting and the
continual movement of items).

There is no need for the high risk of unmanaged C++, with its long development
times, for this application. C# is more than adequate. This is a time-critical
application: automation, robotics, and vision, where cycle times are very,
very important. This just hardens my confidence in C# as a real alternative
for high-performing, real-world automation.

I mostly share the views you express in your last two posts. However, I
guess the automation bit did not include any hard realtime stuff in the
millisecond range; otherwise I'd be interested in how you managed to guarantee
that the steering is always on time.

Regards,

Andreas
 
B

Bret Pehrson

In everything but an idealized world, an override should be automatically
considered a change in behaviour.

You have missed my point completely. I'll summarize (again), and then I'm
done.

There is more than one reason to create virtual methods:

1 - to allow a subsequent user to change behavior

2 - to allow a subsequent user to track/monitor methods or state data *without*
modifying behavior

There are probably other reasons as well...

Forget the example; it was merely to illustrate my point #2 above, NOT to be a
point of endless debate on what possible ramifications trace() has on the
system, etc. ad nauseam.

[snip]
 
J

Justin Rogers

This looks as if it has been going for a while now, but I'd say: if you want
all public methods to be virtual by default (I'm guessing you want C# to do
this by default rather than making you do it explicitly), then change your
language. JScript .NET allows the feature, but that is because the language is
feature-oriented rather than performance-oriented. C# is performance-oriented,
and every single virtual method lookup incurs a performance hit over a
non-virtual method. The difference is enough that many of the CLR classes were
locked down to gain the performance rather than offer the ability to override
the base behavior. In other words, a choice was made that performance actually
outweighed flexibility.

So I'd have to say public methods should not be virtual by default, because it
would change the performance characteristics and the security characteristics
of all of my current code should I recompile it.
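The "locking down" described here has a rough C++ analogue: marking a class or method `final` removes the override point, which also frees the compiler to devirtualize calls whose static type is final. A minimal sketch, with all names invented:

```cpp
// A virtual call normally dispatches through the vtable. Once a class is
// marked final, no override can exist below it, so a call through that
// static type may be bound directly, like a non-virtual call.
class Codec {
public:
    virtual int Encode(int x) { return x + 1; }
    virtual ~Codec() = default;
};

class FastCodec final : public Codec {   // no further subclassing allowed
public:
    int Encode(int x) override { return x + 2; }
};

int use_fast(FastCodec& c) {
    // The static type is final, so this call is devirtualizable.
    return c.Encode(40);
}
```

Whether a given compiler actually devirtualizes is an optimization detail, but the design point is the same as in the CLR: sealing trades extensibility for a guarantee.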

--
Justin Rogers
DigiTec Web Consultants, LLC.
Blog: http://weblogs.asp.net/justin_rogers

[snip]
 
D

Daniel O'Connell [C# MVP]

Bret Pehrson said:
You have missed my point completely. I'll summarize (again), and then I'm
done.

There is more than one reason to create virtual methods:

1 - to allow a subsequent user to change behavior

2 - to allow a subsequent user to track/monitor methods or state data *without*
modifying behavior

The point I'm trying to make is that you *CANNOT* override a method without
changing behaviour, so your second point is moot; it is simply a more
specific version of the first. Overriding is a behaviour change, be it at
the method level, the class level, or the system level. Even if your method
looks like

public override void MyMethod()
{
    base.MyMethod();
}

I would still argue that it changes the behaviour of the class, although not
significantly. I know what you intended to point out; I simply believe it's a
false premise. If you write code with ramifications, you are writing code
that changes behaviour, period (in effect, that reduces to "If you write
code, you are changing behaviour"). I simply find it silly and potentially
dangerous to decide otherwise.
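Daniel's point can be made concrete: an override that "only adds monitoring" still widens the failure modes a caller can observe. A contrived sketch, with every name invented for illustration:

```cpp
#include <stdexcept>
#include <string>

class Store {
public:
    virtual void Save(const std::string& data) { last = data; }
    virtual ~Store() = default;
    std::string last;  // base behaviour: remember what was saved
};

class AuditedStore : public Store {
public:
    void Save(const std::string& data) override {
        if (data.empty())                        // the "harmless" audit step
            throw std::runtime_error("audit: empty record");
        Store::Save(data);                       // then the original behaviour
    }
};

// Client code written against Store: Save("") used to succeed quietly, but
// through AuditedStore it now throws -- an observable behaviour change even
// though the subclass "only monitors".
bool try_save(Store& s, const std::string& data) {
    try { s.Save(data); return true; }
    catch (const std::exception&) { return false; }
}
```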
There are probably other reasons as well...

Forget the example, it was merely to illustrate my point #2 above, NOT to be a
point of endless debate on what possible ramifications trace() has on the
system, etc. ad naseum.

Daniel O'Connell said:
Bret Pehrson said:
Simple, you do *anything* and what your class does changes.

No, no, NO!

Come on, my example doesn't change the behaior of the class -- period.
It
may
(and very well does) change the state or behavior of the *SYSTEM*, but
we
are
*not* talking about that.
If your method changes the system, then the behaviour of your class now
includes that change to the system, a classes behaviour is *EVERYTHING*, not
just the class state. Even if it doesn't change the result of the call it
still changes behaviour(and even in your example it *could* change the
result of the call by throwing an exception). Its likely behaviour you
*must* document, and treat as a behaviour change.

Of course, if you don't consider adding the chance of new exceptions, new
ways to fail, or new possible constraints on parameters a change in
behaviour, I think our definitions of the word differ.

In everything but an idealized world, an override should be automatically
considered a change in behaviour. Even if you can guarentee ironclad that it
works without any behaviour change, you still leave the chance that a change
*elsewhere* could change behaviour elsewhere(which is part of the problem, a
code change *anywhere* is a change in behaviour, potentially in the entire
system, even if its not entirely noticable or anything that actually effects
change).
The point was, is, and will continue to be: virtual methods can be
used
to
change behavior ***OR*** for other reasons that *DO NOT CHANGE
BEHAVIOR*,
which
is what I've illustrated.

:

This does change behaviour. (And I don't
think I want/need to tell you how.)

Then don't post a response! We are (presumably) all here for constructive
and
educational reasons, so you statement provides nothing and actually
confuses
the issue.

Please elaborate on why you think this changes class behavior. I'll
probably
learn something.
Simple, you do *anything* and what your class does changes. If that call
throws an exception you are adding a point of failure, if that call
instantiates objects you are using memory(and perhaps creating a memory
leak), if that call deletes random files you are probably angering users, if
that call changes an object or variable the underlying method uses in
another class you could introduce unexpected bugs and it all may
rain
down
on someone elses head because of it. Simply because your call leaves the
instance alone does *NOT* mean that its behaviour isn't changed, its state
simply isn't. Behaviour is considerably more than simply what Method
A
does
to Field B. Your example likely doesn't change the object but potentially
changes the entire universe the object exists in.

I expect to read your headers, see and
recognize common patterns, understand
your identifiers, and use this interface
as it is with as little need for looking
it up in the docs as possible. If you
don't provide that, then that's one darn
good reason to look for another provider.

I'm not following. Maybe my statements weren't clear, but my intention is
this: any well-meaning programmer who produces code potentially for anyone
else (either directly or indirectly) should include complete and correct
documentation.

It is extremely difficult (at best) to determine expected behavior from
prototypes and definitions alone (meaning no documentation and *NO* comments).
If *you* can take naked prototypes and definitions and understand the usage,
behavior, and characteristics of the interface, then you are in a definite
minority. Personally, I rely heavily on the documentation.

Hendrik Schober wrote:

[...]
The only reason to make a function virtual
is to allow it to be overridden. Overriding
a function is changing behaviour.

Not true.

class A
{
public:
virtual void a()
{
// do something
}
};

class B : public A
{
public:
virtual void a()
{
A::a();
trace("processing a");
}
};

This doesn't change behavior, but is a very valid and real-world case of
virtual overrides.

This does change behaviour. (And I don't
think I want/need to tell you how.)

Don't expect us to carefully read documents
that contradict your code.

???

I do expect you to carefully read documents that define the behavior, usage,
and intent of my interfaces.

I expect to read your headers, see and
recognize common patterns, understand
your identifiers, and use this interface
as it is with as little need for looking
it up in the docs as possible. If you
don't provide that, then that's one darn
good reason to look for another provider.

[...]

Schobi

--
(e-mail address removed) is never read
I'm Schobi at suespammers dot org

"Sometimes compilers are so much more reasonable than people."
Scott Meyers
 
G

Guest

In the millisecond range, yes. How about 200 millisecond cycle times, and it's
not even optimised yet.
 
A

Andreas Huber

.. said:
In the millisecond range, yes. How about 200 millisecond cycle times,
and it's not even optimised yet.

So you have hard realtime requirements in the 200 millisecond range? How do
you guarantee that the GC does not interfere? The worst-case collect time
obviously depends on a multitude of factors. MS says that a gen 0 collection
should never take longer than milliseconds. For higher generation
collections I don't really have any numbers but I can imagine that in rare
cases you might get over the 200 milliseconds. How do you guarantee that
this does not happen?

Regards,

Andreas
 
M

Martin Maat [EBL]

I guess the automation bit did not include any hard realtime stuff, then; 200
milliseconds is quite a long time for process automation.
This is a very real-world, time-critical application (cycle times in an automated
environment with lots of variables like lighting and continual movement of
items).

Slow moving items, I hope. I am aware of the fact that neither "real time"
nor "time critical" are well-defined qualifications, but when one talks about
bandwidths of about 5 Hz one isn't talking "real time".

Regardless of the speed, it is important to have some kind of safeguard in case
the controlling system fails.
So you have hard realtime requirements in the 200 millisecond range? How do
you guarantee that the GC does not interfere? The worst-case collect time
obviously depends on a multitude of factors. MS says that a gen 0 collection
should never take longer than milliseconds. For higher generation
collections I don't really have any numbers but I can imagine that in rare
cases you might get over the 200 milliseconds. How do you guarantee that
this does not happen?

The GC is on its own thread. So even if a GC round took more than a second,
it should not block your application longer than a couple of time slices.
Setting your own thread's priority a bit higher than normal should do it.

In a robotics application you will typically have a dedicated watchdog
computer whose sole task it is to verify that the controlling computer is
paying attention, that is if it is updating the servo controller signal
frequently enough. If the watchdog finds the controlling computer has not
been controlling for too long a period of time, power will be cut and brakes
will stop the mechanics.

Controlling a robot with your regular desktop PC machine running Windows XP
could work fine for days or forever, but if the robot were capable of
damaging equipment, damaging the product or killing a human (and most
industrial robots can easily do all in one swing) I would not lightly hook
up any robot directly to an office computer.

Dot's application may be less deadly but I wonder about the fail-safety of
his system. What happens if your control system stops responding, Dot?

Martin.
 
A

Andreas Huber

Martin said:
Slow moving items, I hope. I am aware of the fact that neither "real
time" nor "time critical" are well-defined qualifications, but when one
talks about bandwidths of about 5 Hz one isn't talking "real time".

The definition I've seen (I think from Doing Hard Time by Douglass) says
that hard realtime has nothing to do with speed but with deadlines. Missing
a deadline always means *uncorrectable* trouble. E.g. consider a printer
spraying the best-before date onto eggs being transported on a conveyor belt
that is moving independently of the spraying. If the computer controlling
the printer misses a deadline, an egg ends up having only half or no date.
Of course, the time window during which spraying can be successful depends
on the speed of the conveyor. For a slow conveyor this could be in the
several-hundred-millisecond range; for a fast one it could be well below one
millisecond. Therefore, for some hard realtime systems Windows could be the
right choice, while for others I wouldn't even dare to consider it.
BTW, if missing a deadline means that things are only slowed down without
doing any further harm then we have a soft rather than a hard realtime
system.
Regardless the speed it is important to have some kind of safeguard
in case the controlling system fails.


The GC is on its own thread. So even if a GC round took more than a
second, it should not block your application longer than a couple of
time slices.

I don't think so. To do a collection in a uniprocessor system the GC has to
suspend *all* threads that are currently running managed code. Only the
finalizers can be run in a separate thread while the other threads keep
minding their business.
The situation is a bit better in multiprocessor systems where - under
certain circumstances - collections can be done on one CPU while the others
keep running.
Setting your own thread's priority a bit higher than
normal should do it.

No, that won't help you during the collection itself (see above).
In a robotics application you will typically have a dedicated watchdog
computer whose sole task it is to verify that the controlling
computer is paying attention, that is if it is updating the servo
controller signal freqently enough. If the watchdog finds the
controlling computer has not been controlling for too long a period
of time, power will be cut and brakes will stop the mechanics.
Right.

Controlling a robot with your regular desktop PC machine running
Windows XP could work fine for days or for ever but if the robot were
capable of damaging equipment, damaging the product or killing a
human (and most industrial robots can easily do all in one swing) I
would not lightly hook up any robot directly to an office computer.

That's also my impression.

Regards,

Andreas
 
M

Martin Maat [EBL]

The definition I've seen (I think from Doing Hard Time by Douglass) says
that hard realtime has nothing to do with speed but with deadlines. [...]
BTW, if missing a deadline means that things are only slowed down without
doing any further harm then we have a soft rather than a hard realtime
system.

Sounds like solid theory. Interesting.
I don't think so. To do a collection in a uniprocessor system the GC has to
suspend *all* threads that are currently running managed code.

Yes, I understand, but I expect the GC to not bluntly collect garbage until
there's no more garbage to be found with its own thread set to time
critical. I expect it to take care that it will not be in the app's way by
doing as much as it possibly can in idle time and by, if it really needs to
interfere, collecting some garbage for a minimum period of time and then
return control. The aggression applied is likely to be proportional to the
need. If I (the application programmer) were to generate garbage
relentlessly, the GC will probably start fighting me for processing time at
some point. While I am being a good boy however, not giving it a reason to
pull my leash, I expect it to be very very gentle with me, only collecting
garbage for very short periods.

That is the way it should work in my opinion and I have confidence in the
smart people that designed the GC but I have to say that this is purely
common sense, speculation and wishful thinking on my part, so I would be
most interested to learn how it is really done from some insider.

Martin.
 
J

Justin Rogers

Yes, I understand, but I expect the GC to not bluntly collect garbage until
there's no more garbage to be found with its own thread set to time
critical. I expect it to take care that it will not be in the app's way by
doing as much as it possibly can in idle time and by, if it really needs to
interfere, collecting some garbage for a minimum period of time and then
return control. The aggression applied is likely to be proportional to the
need. If I (the application programmer) were to generate garbage
relentlessly, the GC will probably start fighting me for processing time at
some point. While I am being a good boy however, not giving it a reason to
pull my leash, I expect it to be very very gentle with me, only collecting
garbage for very short periods.

Then you are expecting far too much from the GC. If the GC goes into a
collection state it can and will hang indefinitely if given the chance. And
yes, it does kick the priority on the thread up, giving itself a higher
priority than any other code. I've identified an issue where finalizer code
can pretty much lock your entire machine.

http://weblogs.asp.net/justin_rogers/archive/2004/02/01/65802.aspx
That is the way it should work in my opinion and I have confidence in the
smart people that designed the GC but I have to say that this is purely
common sense, speculation and wishful thinking on my part, so I would be
most interested to learn how it is really done from some insider.

The GC is a system that has no knobs that you can turn. You can't tell it
that it should be gentle. In the most common programming case, a managed
application simply needs more memory (the reason for the GC running is
normally that memory has been exhausted), so the GC collects as much memory
as it can. While there may be some minor load-balancing, the original post on
Gen 2 collections often taking many hundreds of milliseconds and sometimes
many seconds is the true real-world case. That is simply how it works.
 
