Destructor: not guaranteed to be called?


Peter Oliphant

I'm programming in VS C++ .NET 2005 using /clr:pure syntax. In my code I have
a class derived from Form that creates an instance of one of my custom
classes via gcnew and stores the pointer in a member. However, I set a
breakpoint at the destructor of this instance's class and it was never
called!!! I can see how it might not get called at a deterministic time. But
NEVER?

So, I guess I need to know the rules about destructors. I would have thought
any language derived from C++ would always guarantee that the destructor of
an instance of a class is called at some time, especially if created via
[gc]new and stored as a pointer.

Yes, I think I can deterministically destruct it via 'delete' and setting to
nullptr. But it still kinda freaks me out that the destructor is no
longer guaranteed to EVER be called. I feel like I should be worried, since
it is sometimes important to do other things besides freeing up memory in a
destructor. In my case I discovered it because I'm communicating through a
serial port whose baud rate I change from the current speed, and then
changed it back in the destructor - only to find out the destructor was
NEVER called! Hence, the port died, and MY program wouldn't work on
subsequent runs, since it assumed the port had been returned to the same
baud rate (and hence couldn't communicate with it anymore).

So, again, why is the destructor no longer guaranteed to be called, and what
are these new rules? Or am I being ignorant, and C++ never made such
assurances? Inquiring minds want to know! : )

[==P==]
 

Carl Daniel [VC++ MVP]

Peter said:
I'm programming in VS C++ .NET 2005 using /clr:pure syntax. In my code
I have a class derived from Form that creates an instance of one of
my custom classes via gcnew and stores the pointer in a member.
However, I set a breakpoint at the destructor of this instance's
class and it was never called!!! I can see how it might not get
called at a deterministic time. But NEVER?

So, I guess I need to know the rules about destructors. I would have
thought any language derived from C++ would always guarantee that the
destructor of an instance of a class is called at some time,
especially if created via [gc]new and stored as a pointer.

Why would you think that, when C++ makes no similar guarantee for pure native
C++? The destructor for an object on the heap is called when and if you
call delete on a pointer to that object. The situation is no different for
C++/CLI with respect to the destructor (which is IDisposable::Dispose for
C++/CLI).
Yes, I think I can deterministically destruct it via 'delete' and
setting to nullptr. But it still kinda freaks me out that the
destructor is no longer guaranteed to EVER be called. I feel like I
should be worried since it is sometimes important to do other things
besides freeing up memory in a destructor. In my case I discovered it
because I'm communicating through a serial port whose baud rate I
change from the current speed, and then changed it back in the
destructor - only to find out the destructor was NEVER called! Hence,
the port died, and MY program wouldn't work on subsequent runs since
it assumed the port had been returned to the same baud (and hence
couldn't communicate with it anymore).
So, again, why is the destructor no longer guaranteed to be called,
and what are these new rules? Or am I being ignorant, and C++ never
made such assurances? Inquiring minds want to know! : )

They're not new rules - it's the nature of objects on the heap. For a managed
object on the GC heap, the Finalizer MAY be called if you don't delete the
object, but the CLR doesn't guarantee that finalizers are ever called
either.

-cd
 

Jochen Kalmbach [MVP]

Hi Carl!
Why would you think that, when C++ makes no similar guarantee for pure native
C++? The destructor for an object on the heap is called when and if you
call delete on a pointer to that object. The situation is no different for
C++/CLI with respect to the destructor (which is IDisposable::Dispose for
C++/CLI).

Just as an addition:

See: Destructors and Finalizers in Visual C++
http://msdn2.microsoft.com/en-us/library/ms177197.aspx

Also be aware that the destructor might be called even if the
constructor has thrown an exception!!!
See also: http://blog.kalmbachnet.de/?postid=60


--
Greetings
Jochen

My blog about Win32 and .NET
http://blog.kalmbachnet.de/
 

Peter Oliphant

/rant on

I'm sorry, but this is VERY new info to me, and I've been doing OOP for
about 15 years! Personally, I think it goes against the whole concept of a
destructor. Why bother to ever create one if there is no guarantee it will
be called? To me (IMHO), OOP should have this pact with the programmer: the
constructor is to set up the creation of an instance; the destructor is for
clean-up. Thus, the destructor should be guaranteed to be called SOMETIME,
at the very latest at application exit. Otherwise I feel the C++ language is
at fault for anything my destructor was meant to make sure wasn't left in a
bad state, since THAT's what I wrote the destructor for, and I thought it
was responsible for making sure that eventually happened.

Let me make this clear. I have always realized that when GC wasn't in play,
if I created something (then via 'new') I had to destruct it manually
to avoid memory leaks. That is, it was never guaranteed the destructor would
be called unless I invoked it via a delete call. But, with the introduction
of GC, anything created as a gc object shouldn't need to be destructed
manually, as the application is supposed to keep track of whether something
is still being used by anyone before GC destroys it. But I always assumed
it would destroy it the same way one would manually destroy it: by calling
its destructor. Could someone explain to me why NOT calling the destructor
when GC destroys the object would EVER be a GOOD thing?

What I see emerging is this. GC was created to help with the fact that
destruction of an object is tough to do when who 'owns' it is unclear, or
when it is unclear whether everyone's done with it. This caused memory leaks
in the case that 'nobody' took final responsibility (or couldn't, based on
the info available). But the solution to this is now generating another
issue: lack of reliable destruction! Destruction is now not guaranteed at
any time you don't specifically delete it. BUT WAIT! The whole point of GC
was to AVOID having to know when to do delete. So, if we are forced to do
delete to ensure the destructor gets run, then what did we gain from
introducing GC? That is, if we now still have to delete at the right time,
this implies we know the instance is free to be destroyed. Thus, we lose the
advantage we got. Or more precisely, we have added complication that
introduces more possible pitfalls, and we are STILL required to tell the
application when to destroy something if we want our destructors to have any
reliable meaning!

Further, this is now causing additional problems. I reported a bug that is
very nasty via the feedback center. The bug is this: try creating two
classes, both ref. Now create 142 stack-semantic variables of one class in
the other. Oh yeah, be sure the classes have destructors. Put ZERO code in
these classes. Guess what? It won't compile, and will return a 'program too
complex' error! It further explains it can't build the destructor. Now,
comment out the destructor in the class the 142 instances are based on. NOW
it compiles! So, they have introduced complexity to such a point with the
way destructors are dealt with that it can't handle more than 142 members! I
don't see that as progress...

And, again possibly showing my ignorance, when did finalizers come into
play? Are they part of the C++ standard?

Basically, I think things have gotten so complicated in this destructor area
that we have just traded one set of problems for another. If I can't rely on
the code I write specifically for the purpose of tidying things up ever
getting called, it ain't my fault if stuff isn't returned back to normal once
my code is done running. Heaven forbid anyone put the 'return the system
back' code in the destructor of an application based on a single class...
; )

/rant off

Ok I feel much better now... lol

[==P==]

PS - here is link to bug I reported:

http://lab.msdn.microsoft.com/produ...edbackid=3f6131a9-7d0a-496a-b8a0-44bd02f398c6

 

Arnaud Debaene

Peter said:
Personally, I think it is against the whole
concept of a destructor.
I agree: the point is that there are NO destructors in .NET!!! There are
finalizers, which are a different beast. CLI "destructors" have been mapped
to finalizers as best as MS could (generating code that implements
IDisposable, etc...), but this is in no way a native C++ destructor.
Let me make this clear. I have always realized that when GC wasn't in
play that if I created something (then via 'new') I had to destruct
it manually to avoid memory leaks. That is, it was never guaranteed
the destructor would be called unless I invoked it via a delete call.
But, with the introduction of GC, anything created as a gc object
shouldn't need to be destructed manually, as the application is
supposed to keep track of whether something is being used anymore by
anyone before GC destroys it.
The GC is asynchronous, and you're never sure it will execute a finalizer for
a given object (not the destructor, mind you, since that doesn't exist - the
finalizer!).
The other point is that, since you don't know in which order finalizers are
run, you can't reference any external object from within a finalizer, so
you're really very limited in what you can do within them.

The whole point of the IDisposable interface is to circumvent this
limitation of the GC, although it is still an inferior solution compared to
the native, synchronous C++ destructor, IMHO.
What I see emerging is this. GC was created to help with the concept
that destruction of an object is tough to do when who 'owns' it is
unclear, or when it is unclear whether everyone's done with it. This
caused memory leaks in the case that 'nobody' took final
responsibility (or couldn't based on the info available). But the
solution to this is now generating another issue. Lack of reliable
destruction!
Agreed. There is NO destruction in .NET (nor in Java).
Destruction is now not guaranteed at any time you don't
specifically delete it. BUT WAIT! The whole point of GC was to AVOID
having to know when to do delete. So, if we are forced to do delete
to ensure the destructor gets run, then what did we gain from
introducing GC?
No more memory leaks... The main reason for GC is to avoid raw memory leaks,
not to get a better model for logical destruction of objects.
That is, if we now still have to delete at the right
time, this implies we know the instance is free to be destroyed.
More precisely, we have to *Dispose* the object at the right time...
Thus, we lose the advantage we got. Or more precisely, we have added
complication that introduces more possible pitfalls, and we are STILL
required to tell the application when to destroy something if we want
our destructors to have any reliable meaning!

Yep. Anyway, I do not believe the computer will ever be able to *guess* what
the programmer wants, so there will always have to be a manual indication of
when an action must be done (including destruction/finalization/release of
resources).
And, again possibly showing my ignorance, when did finalizers come
into play? Are they part of the C++ standard?
No, they are part of the .NET standard. They are a very central feature of
.NET, and you should read up on them to get a firm grasp on the subtle
differences between destructors and finalizers.

To make the story short, a finalizer is an optional member function that is
possibly called (if it exists!) by the GC some time before the GC reclaims
the object's memory and after the last reference to the object has been
released. You've got no guarantee at all on the order in which finalizers
for different objects execute.
Basically, I think things have gotten so complicated in this
destructor area that we have just traded one set of problems for
another.
Possible. Another explanation is that perhaps you didn't master the
differences between finalizers and destructors, and you expected something
of the system without taking care to check in the documentation whether your
expectations were justified.

Arnaud
MVP - VC

PS: IMHO, the Java, C# and Managed C++ choice of using the C++ destructor
syntax (~ClassName) to express the finalizer is a bad mistake that led many
developers into misconceptions of that kind.
 

Brandon Bray [MSFT]

It seems like the discussion has come to recognize the difference between
finalizers and destructors. The first is non-deterministic and loosely
coupled, whereas the latter is deterministic.

I do think there is a misunderstanding of the differences between
destructors in managed code and destructors in native code. While there are
differences, the discussion here hasn't highlighted any of them.

Arnaud said:
I agree: the point is that there are NO destructors in .NET!!! There are
finalizers, which are a different beast. CLI "destructors" have been
mapped to finalizers as best as MS could (generating code that implements
IDisposable, etc...), but this is in no way a native C++ destructor.

It's unfortunate that C# decided to use the tilde syntax for finalizers, and
even more unfortunate that the old Managed C++ syntax did the same thing.
However, the CLR makes no mention of destructors... so there's no real
mapping to do. Destructors are a language-level implementation, not a
runtime issue.
The whole point of the IDisposable interface is to circumvent this
limitation of the GC, although it is still an inferior solution
compared to the native, synchronous C++ destructor, IMHO.

I'm curious how IDisposable presents an inferior solution. From my
perspective as a language designer, I see IDisposable as the implementation
detail for destructors in C++. Really, you don't have to know anything about
IDisposable to use destructors in C++/CLI. To me, the biggest limitation
imposed on destructors as a result of IDisposable is that all destructors
are public and virtual. I actually think that's a good thing, and it's a
mistake that unmanaged C++ allows destructors to be anything else.
Agreed. There is NO destruction in .NET (nor in Java).

The premise of this statement is flawed. Destruction is a language-level
service, because only the language can determine when it is appropriate to
deterministically clean up objects. Why? Because the programmer needs to be
involved - otherwise you deal with the infamous halting problem. The CLR is
a collection of services that can be supplied to a running program. As long
as we're dealing with Turing Machines, the CLR will never be able to provide
deterministic cleanup as a service.

So, that means deterministic cleanup must be moved to the language level.
The best way to accomplish that and maintain a sense of cross-language
functionality was to create a common API. That was IDisposable. From there,
it's a matter of how the languages treat destruction semantics. C++/CLI does
everything that unmanaged C++ does, including automatic creation of
destructors when embedded types have destructors.
No more memory leaks... The main reason for GC is to avoid raw memory
leaks, not to get a better model for logical destruction of objects.

While GC is primarily about memory leaks, I would argue it serves to do much
more. C++ is inherently not type safe because it allows for things like use
of an object after delete. GC in the context of a language like C++ is the
only way to achieve type safety.

Also, if you are truly using Object Oriented Programming, objects will
represent resources like files, network connections, UI, etc. This means
that memory has a direct correlation to other resources, so GC has the
potential to clean up a lot more than just memory.

Lastly, deterministic cleanup is really bad at cleaning up in certain
situations. A frequent example is shared resources that form a dependency
cycle. The impact of reference counting is well understood, and all of the
practices applied to unmanaged C++ frequently result in fragile programs. In
situations like these, garbage collection is the best solution. The problem
that usually results is programmers don't adapt to a different environment,
and instead try to contort deterministic practices to a non-deterministic
environment.

The short story... writing robust code still requires smart people thinking
solutions all the way through.
 

Arnaud Debaene

Brandon said:
It's unfortunate that C# decided to use the tilde syntax for finalizers, and
even more unfortunate that the old Managed C++ syntax did the same thing.
Well, that's one point on which we agree ;-)
I'm curious how IDisposable presents an inferior solution. From my
perspective as a language designer, I see IDisposable as the
implementation detail for destructors in C++. Really, you don't have
to know anything about IDisposable to use destructors in C++/CLI. To
me, the biggest limitation imposed on destructors as a result of
IDisposable is that all destructors are public and virtual. I
actually think that's a good thing, and it's a mistake that unmanaged C++
allows destructors to be anything else.

I was thinking more about C#'s "raw" implementation of IDisposable (where
the compiler generates neither the Dispose method nor the call to Dispose in
client code), because in this model it becomes the responsibility of the
client of an object to free the internal resources held by the object, by
explicitly calling Dispose or through the "using" keyword.
On this point, C++/CLI stack semantics are a huge step in the right
direction compared to Managed C++ / C#.
The premise of this statement is flawed. Destruction is a language
level service, because only the language can determine when it is
appropriate to deterministically cleanup objects. Why? Because the
programmer needs to be involved - otherwise you deal with the
infamous halting problem. The CLR is a collection of services that
can be supplied to a running program. As long as we're dealing with
Turing Machines, the CLR will never be able to provide deterministic
cleanup as a service.

I agree, but I think we must go a step further: what is generally called
"destruction" is in fact a 2-part process:

1) Logical destruction, which corresponds to the user code in the
destructor/finalizer function. To be most useful, this operation should be
synchronous with the release of the last reference to the object (i.e., a
stack object goes out of scope, a heap object is not referenced anymore or
is deleted in native C++), because that allows one to implement the RAII
idiom and therefore makes it much easier to write exception-safe code.
<troll - well perhaps not THAT troll>I would argue that it's almost
impossible to write non-trivial exception-safe code without the RAII idiom
</troll>.

2) Automatic resource freeing (mainly memory), which can be done
automagically and asynchronously by a GC.

Both native C++ and .NET collapse those 2 distinct operations into one
concept (the destructor or the run-by-the-GC finalizer), whereas IMHO they
should be more clearly separated. Again, CLI stack semantics with automatic
implementation of IDisposable and an automatic call to Dispose are the right
answer IMHO.
So, that means deterministic cleanup must be moved to the language
level. The best way to accomplish that and maintain a sense of
cross-language functionality was to create a common API. That was
IDisposable. From there, it's a matter of how the languages treat
destruction semantics. C++/CLI does everything that unmanaged C++
does, including automatic creation of destructors when embedded types
have destructors.
Yes, that is the intent. I am not sure, however, that stack semantics can be
used in all cases to implement the RAII idiom. Well, I suspect one could
declare small "ref structs". The real problem of course is that they are
unusable from C# or VB.NET.

Nonetheless, I see your point about the fact that a common API (IDisposable)
was the best bet to tackle the problem in a language-neutral manner. Too bad
that other languages (C#, VB.NET) took the easy and wrong road of leaving
the Dispose-call responsibility in the client's hands.
Anyway, as a C++ bare-to-the-metal-performance fan (sarcasm...), I still
regret that IDisposable must go through a virtual-call overhead.
While GC is primarily about memory leaks, I would argue it serves to
do much more. C++ is inherently not type safe because it allows for
things like use of an object after delete. GC in the context of a
language like C++ is the only way to achieve type safety.
Well, I would not call the danger of dereferencing a dangling pointer "type
safety", but I take your point (for me, "type safety" is about the danger of
an incorrect cast that may run unnoticed).
Also, if you are truly using Object Oriented Programming, objects will
represent resources like files, network connections, UI, etc. This
means that memory has a direct correlation to other resources, so GC
has the potential to cleanup a lot more than just memory.
If you use the IDisposable pattern, yes. The finalizer is much more limited
in what you can do, since you can't reference another object from within a
finalizer. The problem is that most developers know about finalizers (which
they think of as destructors), but don't know about IDisposable, or are
unaware of the stack semantics.
Lastly, deterministic cleanup is really bad at cleaning up in certain
situations. A frequent example is shared resources that form a
dependency cycle. The impact of reference counting is well
understood, and all of the practices applied to unmanaged C++
frequently result in fragile programs.
Agreed. Let's say the ideal solution to this problem still remains to be
invented ;-)
The problem that usually
results is programmers don't adapt to a different environment, and
instead try to contort deterministic practices to a non-deterministic
environment.
Yes, but implementors don't make our life easier when they use the same
syntax for finalizers and destructors!
The short story... writing robust code still requires smart people
thinking solutions all the way through.
Amen...

Arnaud
MVP - VC
 

Bo Persson

Peter Oliphant said:
/rant on


Let me make this clear. I have always realized that when GC wasn't
in play that if I created something (then via 'new') I had to
destruct it manually to avoid memory leaks. That is, it was never
guaranteed the destructor would be called unless I invoked it via a
delete call. But, with the introduction of GC, anything created as a
gc object shouldn't need to be destructed manually, as the
application is supposed to keep track of whether something is being
used anymore by anyone before GC destroys it. But I always assumed
it would destroy it the same way one would manually destroy it, by
calling its destructor. Could someone explain to me why NOT calling
the destructor upon GC destroying the object would EVER be a GOOD
thing?

Yes! :)

GC is *not* destroying the object, it is reclaiming the memory space.

The object really lives forever, but its memory space can be reclaimed
when the object cannot be reached anymore.


Bo Persson
 

Peter Oliphant

Possible. Another explanation is that perhaps you didn't master the
differences between finalizers and destructors, and you expected something
of the system without taking care to check in the documentation whether your
expectations were justified.

I agree, but there's a problem. You see, how do you know when a change has
been made, or what the new features are, or if something exists that solves
your problem in VS C++.NET? Please don't tell me this info is easily
obtained.

MSDN2. You mean tens of thousands of pages of doco with an inferior search
engine and everything in alphabetical order? With MSDN2 you have to
basically know the answer to look it up (like a dictionary's weakness: you
have to know how a word is spelled to look up how it is spelled). It
therefore becomes a guessing game: what do I suppose MS named this feature?
And new features get lost in tens of thousands of pages of doco (and the
what's-new area is VERY skimpy).

Another problem is that there is no convention as to what is made into a
'method' and what is made into a 'property'. Often, changing a property is
like a method (i.e., to change visibility you change the Visible property;
there are no SetVisible() and SetInvisible() functions, which would of
course be another logical way to do this), and many methods are the
equivalent of properties (they return a state but have no effect). Add the
fact that the stuff is not organized by functionality, and you end up with
the situation that if you want to be sure you are doing the right thing, you
must read EVERYTHING. Also, MS often leaves old pages up with old info, so
one can even try to look things up and end up with disinformation,
especially since the MSDN2 search engine will, without warning, vector you
over to the old MSDN side. And, IMHO, the MSDN2 doco is written by people so
well versed in the subject that they seem to forget they actually need to
explain it (they explain it tautologically, a la 'an integer variable stores
an integer'). Or they explain it in a misleading way. For example, there is
a page in MSDN2 that says the following:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccelng/htm/impdf_37.asp

" A variable declared as enum is an int."

Now, if I said the variable X is an int, you would expect to be able to
store an int in it, yes? But an ENUM variable will return an error if you
try to store an int in it (e.g., enum_var = int(1) is an error). This needs
more explanation, but that is the ENTIRE explanation (look at the link).
Also, look at this page describing the new SerialPort class:

http://msdn2.microsoft.com/en-us/library/system.io.ports.serialport.aspx

Note the detailed description of the sample code for this class. It talks
about how it is a null-modem application, and even says you need two systems
to see it in full swing. Only one problem: MS forgot to put the sample code
on the page! Now, I've reported this here, and reported this in the Feedback
area. Two months later, still no sample code. Would this take, what, 5
minutes to correct?

And THIS is what I'M supposed to get my knowledge of VS C++.NET from?
<shiver>

Let's take the point at hand. How was I supposed to find out about these new
rules regarding destructors and finalizers? Why shouldn't I assume that an
UPGRADE would maintain ALL previous functionality and possibly add onto it?
Changes, IMHO, violate the concept of UPGRADE. They should call VS C++.NET
something like C+++ (3 +'s) to make sure we are clear we need to learn all
its details, since if you assume it will behave like standard C++ you might
find yourself chasing bugs that are actually features!

There is just TOO much info regarding VS C++.NET. This is why when you say:
No, they are part of the .NET standard. They are a very central feature of
.NET, and you should read up on them to get a firm grasp on the subtle
differences between destructors and finalizers.

The reason is not lack of desire or ability; it is lack of knowing such info
exists, or that changes were made in the first place. One only 'discovers'
these when code stops working: when you do what used to work and now
doesn't. Then the only recourse is to sound like a complete buffoon and ask
in forums like this what to do, coming across like a total amateur (even
though I have over 35 years of programming experience).

The real annoyance, though, comes when you point out a bug in the language
and the response is that it can't be changed because it would 'violate the
C++ standard'. How is that even close to a justification when VS C++.NET
violates the standard whenever it sees fit in most areas? For example,
did you know that if you apply the ToString() method to a Char[] it returns
the EXACT SAME STRING every time, something like "Char[]"? I
reported this, and they said this couldn't be changed since it would violate
the standard and might break someone's existing code. HUH? Who in HELL
depends on this behavior?

Oh well, so it goes...

[==P==]
 

Arnaud Debaene

Peter said:
I agree, but there's a problem. You see, how do you know when a
change has been made, or what the new features are, or if something
exists that solves your problem in VS C++.NET? Please don't tell me
this info is easily obtained.

I don't say so. I say that, just as you learned native C++, you should
learn C++/CLI, most probably from a book or a course, if you find MSDN to be
too "dictionary-ish" (I agree with that).
Another problem is that there is no convention as to what is made
into a 'method' and what is made into a 'property'. Often, changing a
property is like a method (i.e., to change visibility change the
Visible property, there is no SetVisible() and SetInvisible()
functions, which would of course be another logical way to do this),
and many methods are the equivalent of properties (they return a
state but have no affect).
Well, for me, the difference between a function and a property is just
syntactic sugar; they are really the same, and I don't see this as a problem.
And, IMHO, the MSDN2 doco is
written by people so well versed in the subject they seem to forget
they actually need to explain it

MSDN is a reference, just like a dictionary or an encyclopedia. It is not
meant to be a teaching tool (though, IMHO, it does quite a good job as a
teaching tool too, thanks to the many articles beside the mere
classes/functions/properties reference pages, but I agree it can be quite
difficult to find what you are looking for when you are not used to it).
Anyway, have you ever tried Linux man pages or Oracle's 800-page PDF
reference books before complaining about MSDN? ;-)

And THIS is what I'M suppose to get my knowledge of the VS C++.NET
from? <shiver>

Have you looked at MS Press books?

Anyway, MSDN2 was supposed to be a beta version of the documentation for the
beta version of Visual 2005 (that's the way I understood it, at least).
Now that Visual 2005 is on the shelves, it doesn't seem that the Visual 2005
specific stuff has been merged into the "main" MSDN site. I am not sure why,
nor whether MSDN2 is there to stay as the definitive doc (it would be a bad
idea IMHO to have two different "release" MSDN sites), or if we are in a
transitory state.
Let's take the point at hand. How was I suppose to find out about
these new rules regarding destructors and finalizers?
They are not new: it was the same story in Managed C++, though the syntax
was different. The really new thing is the stack semantics.
Why shouldn't I
assume that an UPGRADE would maintain ALL previous functionality and
possibly add onto it?
Changes, IMHO, violate the concept of UPGRADE.
They should call VS C++.NET something like C+++ (3 +'s) to make sure
we are clear we need to learn all its details, since if you assume it
will behave like standard C++ you might find yourself chasing bugs
that are actually features!
Well, you know, they do call it C++/CLI for a purpose! Anyway, I agree that
MS nomenclature is (as most often) very confusing.
The reason is not lack of desire or ability, it is lack of knowing
such info exists or that changes were made in the first place. One
only 'discovers' these when code stops working: you do what used
to work, and now it doesn't.
Your only error was to expect that C++/CLI would behave exactly like native C++.
There I must say that the MS commercial voodoo about "It Just Works", "simply
compile your old code and see it work like it used to", and the like is
misleading.
The real annoyance though comes when you point out a bug in the
language and the response is that it can't be changed because it
would 'violate the C++ standard'.
Huuu??? What are you speaking about here?
How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply the
ToString() method to a Char[] it returns with the EXACT SAME STRING
every time, and it's something like "Char[]". I reported this, and
they said this couldn't be changed since it would violate the standard
and might break someone's existing code. HUH? Who in HELL depends on
this behavior?

I don't understand you here... ToString is not part of the C++ standard! It
is the .NET standard (ECMA 335) that defines the ToString method, and it
seems consistent to me, as this specification says:

<quote ECMA 335> "default : System.Object.ToString is equivalent to calling
System.Object.GetType to obtain the System.Type object for the current
instance and then returning the result of calling the System.Object.ToString
implementation for that type.
Note : The value returned includes the full name of the type.
</quote>

and, on the other hand, ToString is not redefined for System.Array.
Therefore, according to usual override rules, the return value is as
expected.

Note: my quotes from the ECMA 335 standard are from the XML file defining
the BCL, available at
http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335-xml.zip.
See partition 4 of
http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf
for a description of this XML file.

Arnaud
MVP - VC
 

Carl Daniel [VC++ MVP]

Arnaud said:
Well, I would not call "type safety" the danger of dereferencing a
dangling pointer, but I take your point (for me, "type safety" is
about the danger of an incorrect cast that may run unnoticed).

Surely, though, they're exactly the same thing. If a C++ pointer refers to an
object that's been deleted, that object no longer has a valid type - or
worse, it may now have a different type! Accessing that deleted object
through a dangling pointer is at its core no different than using
reinterpret_cast to convert from double to CString (and worse, it results in
bugs that are far harder to find).

-cd
 

Gerhard Menzl

Brandon said:
To me, the biggest limitation imposed on destructors as a result of
IDisposable is that all destructors are public and virtual. I actually think
that's a good thing, and it's a mistake that unmanaged C++ allows
destructors to be anything else.

If you make yourself familiar with the evolution of C++, you will find
that, like most language features, it is a carefully weighed design
decision and not a mistake at all. A destructor needs to be public if
and only if objects of the class in question are to be destroyed from
outside member functions of the class itself, its descendants, and its
friends. Otherwise, it would be a mistake to make the destructor public.
A destructor needs to be virtual if and only if the class design
requires polymorphic deletion. Otherwise, it would be a mistake to make
the destructor virtual. Public virtual destructors may be appropriate in
many, but by no means in all scenarios. Enforcing it eliminates
perfectly legitimate design options. While Java and, in its wake, .NET
have a restrictive tradition where the omniscient platform/language
designer knows better than the lowly programmer, this is very much
against the spirit of Standard C++ (not "unmanaged C++", please; I, and
many others, take offence at this term).
While GC is primarily about memory leaks, I would argue it serves to
do much more. C++ is inherently not type safe because it allows for
things like use of an object after delete. GC in the context of a
language like C++ is the only way to achieve type safety.

What has lifetime management got to do with type safety? It seems to me
that you are mixing up two different issues here. C++/CLI (or any CLI
language, for that matter) allows for using an object after disposing
it. The only difference is that you will screw up at a higher level
(class logic, rather than memory management). If by type safety you mean
that it is impossible for programmers to screw up, then there is no such
thing as type safety.
The premise of this statement is flawed. Destruction is a language
level service, because only the language can determine when it is
appropriate to deterministically cleanup objects. Why? Because the
programmer needs to be involved - otherwise you deal with the infamous
halting problem. The CLR is a collection of services that can be
supplied to a running program. As long as we're dealing with Turing
Machines, the CLR will never be able to provide deterministic
cleanup as a service.

Agreed. However, this is a glaring contradiction to:
Also, if you are truly using Object Oriented Programming, objects will
represent resources like files, network connections, UI, etc. This
means that memory has a direct correlation to other resources, so GC
has the potential to cleanup a lot more than just memory.

There is a growing consensus that GC is basically unsuitable to clean up
scarce resources precisely because it is non-deterministic. The idea of
GC is based on memory as an ample resource. You only can afford to clean
up at indeterminate intervals because there is so much of it, and
because it is more or less uniform. Programs do not normally need to
allocate memory at specific addresses. However, you don't want a
particular socket or a particular mutex to be blocked because the
collector has not run. With finalizers that are not guaranteed to be
called at all, managing scarce resources via GC becomes impossible. GC
is good at managing memory, but it is not the panacea you make it sound
like.
Lastly, deterministic cleanup is really bad at cleaning up in certain
situations. A frequent example is shared resources that form a
dependency cycle.

Again, you are mixing up two entirely different things. It is ordinary
*reference counting*, not deterministic cleanup, that is bad at cleaning
up cycles. The two are not synonymous, the former is a particular
implementation of the latter. Besides, how common are cycles really?
Have you ever encountered an example of a network or file resource
cycle? For the special cases where cycles do occur, there are
well-tested techniques that can deal with them, such as
shared_ptr/weak_ptr. In most resource scenarios, exclusive ownership
suffices, and you don't even need reference counting.
The impact of reference counting is well understood, and all of
the practices applied to unmanaged C++ frequently result in fragile
programs.

With all respect, this is pure FUD. Fragile programs result from sloppy
design. Standard C++ has excellent support for reliable resource
management. As you write yourself:
The short story... writing robust code still requires smart people
thinking solutions all the way through.

To which I can only wholeheartedly agree. A collector can relieve you
from the tedious job of caring about memory, but it does not handle all
your resource problems, and it cannot do the thinking for you.

--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".
 

Peter Oliphant

How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply the
ToString() method to a Char[] it returns with the EXACT SAME STRING
every time, and it's something like "Char[]". I reported this, and
they said this couldn't be changed since it would violate the standard
and might break someone's existing code. HUH? Who in HELL depends on
this behavior?

I don't understand you here... ToString is not part of the C++ standard!
It is the .NET standard (ECMA 335) that defines the ToString method, and it
seems consistent to me, as this specification says:

Correct. Since ToString is not part of the C++ standard it doesn't make much
sense to justify broken behavior on its part as being required because it IS
standard behavior! Now, check out this link:

http://lab.msdn.microsoft.com/produ...edbackid=788dfe28-0ac8-4c5f-96c2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in EXACTLY
the following string regardless of the contents of the Char[]:
"System.Char[]". MS claims that they will not 'fix' ToString to solve this,
but instead might create a new function to accomplish more natural and
desired results. They say that changing the current behavior would be
'breaking'. That could only be true if enough people out there wrote code
that RELIES on this behavior. Who would rely on this behavior? That's what
I was saying above. It sounds like just an excuse NOT to change it, since
they don't think of it as important enough.

I reported this one, which they claim is also correct:

http://lab.msdn.microsoft.com/produ...edbackid=8e62476e-599a-44cf-81db-d6a026770ad3

That is, the ToString function applied to a single 'char' does not return a
string of just the character, but a string of the decimal ASCII value of the
char. That is, '0'.ToString() = "48" instead of "0", since the ASCII value
for the character '0' is 48 (or 0x30 hex). Personally, I think of
ToString as a means to convert a variable to a string equivalent of the
*natural symbolic representation* of the variable it's given. I don't think
of the natural symbolic representation of a char as its ASCII value.
Especially since choosing DECIMAL is arbitrary. I could easily make a case
that if you did typically think of a char as its ASCII value, it should be
represented as a HEX value, not decimal. So this looks to me more like
a -WHOOPS!- ToString is taking the char as a byte value and using it that
way. Oh no! Oh well. Let's just call this correct and explain it as being
standard...

Let me put it this way. If it were discovered that the addition operation '+'
produced 3 when trying to add 1 and 1, does it make sense that the proper
way to fix this is to leave 1+1 = 3 but to invent a new sum operator for
which 1+1 = 2? Or does it make more sense to fix the CURRENT '+' operator so
it returns the proper sum value? Would it be a valid excuse to say that such
a change would be 'breaking', that is, that it is reasonable to assume some
people out there wrote code counting on the fact that adding 1 and 1 gives
3? Analogously, I think they should change ToString to behave in a natural
way, not give excuses as to why it won't be fixed...

Now don't get me wrong. I realize it is tough to make such changes, since you
don't dare release code with fixes before you check whether or not the 'fix'
has broken something else. But these responses from MS are more in the
nature of denial that there is even a problem, to the point of actually
justifying incorrect behavior as being standard, correct, or claiming that
fixing it would do more harm than good (breaking). That's what is the most
frustrating to me...

[==P==]

 

Nemanja Trifunovic

To me, the biggest limitation
imposed on destructors as a result of IDisposable is that all destructors
are public and virtual. I actually think that's a good thing, and it's a
mistake that unmanaged C++ allows destructors to be anything else.


Wow! So you think that concrete types like string or complex should
have virtual destructors? IMHO, that's against the spirit of C++. We
should pay only for what we use.
 

Tamas Demjen

Gerhard said:
Have you ever encountered an example of a network or file resource
cycle? For the special cases where cycles do occur, there are
well-tested techniques that can deal with them, such as
shared_ptr/weak_ptr. In most resource scenarios, exclusive ownership
suffices, and you don't even need reference counting.

I completely agree with that. The .NET framework aims to solve the
memory leak problem, and it probably does a good job at that. However,
it doesn't provide the tools needed to solve the deterministic resource
destruction problem, with the exception of C++/CLI. Even that is lacking
good library features, such as shared_ptr/weak_ptr, but those gaps can be
filled by 3rd-party vendors.

In C++ we routinely use constructs like vector<Resource>, and when the
container goes out of scope, it guarantees that all owned resources are
properly destructed. In a .NET List<Resource^> that is not happening. The
stack syntax doesn't extend to containers, and there seems to be no
.NET library implementing some kind of reference-counted smart
pointer. And there's no .NET collection that "owns" the resources
it holds either. Those who routinely wrap unmanaged C++ code in C++/CLI
feel unsafe, especially when the wrapped assemblies are going to be used
from C# or VB.

Why do I care about this when the .NET framework has finalizers? Because
my unmanaged code uses far more memory than the managed part. There is
not enough garbage in managed memory for the GC to kick in, at least
not before I run out of native memory. If I have to depend on the
finalizer to destroy my unmanaged objects, sooner or later the native
heap will be full.

Some will disagree with me, but this is how I view this issue, and
it concerns me. The .NET framework supports and uses exceptions
extensively, and C++ programmers know how dangerous it is to use
exceptions without proper support for deterministic destructors. It's
like walking on a minefield. I realize that C# has the "using" keyword,
which is the first step toward safe resource destruction, but it only
works for local objects, not for resources stored in a container. When
it comes to objects stored in collections, the .NET framework doesn't
provide a tool better than a vector<Resource*> in native C++. ISO C++
can be error prone, but at least it has the tools to be safe.

The worst thing is that many C# and VB programmers are not used to
dealing with destructors, because they live in a false sense of security
that the GC handles everything automatically, and when this attitude is
mixed with exceptions, it could cause a disaster. Mutexes will not be
unlocked. Files will not be closed. Native memory will not be reclaimed,
because of the lack of destructors that are guaranteed to be called.

Tom
 

Tamas Demjen

Peter said:
So, I guess I need to know the rules about destructors. I would have thought
any language derived from C++ would always guarantee the destructor of an
instance of a class be called at some time, especially if created via
[gc]new and stored as a pointer.

Both native C++ and C++/CLI guarantee that the destructor is called
with, and only with, the stack syntax.

{
NativeClass obj;
} // ~NativeClass() is guaranteed to be called automatically

{
NativeClass* obj = new NativeClass;
} // dynamically allocated objects are not destroyed automatically

To protect dynamically created objects, you use a smart pointer:

{
auto_ptr<NativeClass> obj(new NativeClass);
} // yes, the object is deleted and destroyed automatically


Managed classes have the same behavior in C++/CLI. There is no auto_ptr
for ref classes yet, but it is expected to be available in STL.NET, to
be released soon.

Tom
 

Arnaud Debaene

Peter said:
How is that even close to a
justification when VS C++.NET violates the standard whenever it sees
fit to in most areas. For example, did you know that if you apply
the ToString() method to a Char[] it returns with the EXACT SAME
STRING every time, and it's something like "Char[]". I reported
this, and they said this couldn't be changed since it would violate
the standard and might break someone's existing code. HUH? Who in
HELL depends on this behavior?

I don't understand you here... ToString is not part of the C++
standard! It is the .NET standard (ECMA 335) that defines the
ToString method, and it seems consistent to me, as this
specification says:

Correct. Since ToString is not part of the C++ standard it doesn't
make much sense to justify broken behavior on its part as being
required because it IS standard behavior! Now, check out this link:

Well, ToString is not defined by the C++ standard, but it is defined by
another: the .NET standard (ECMA 335).
http://lab.msdn.microsoft.com/produ...edbackid=788dfe28-0ac8-4c5f-96c2-bb5065fd7c2b

Recapping, the ToString() function applied to a Char[] results in
EXACTLY the following string regardless of the contents of the Char[]:
"System.Char[]". MS claims that they will not 'fix' ToString to
solve this, but instead might create a new function to accomplish
more natural and desired results. They say that changing the current
behavior would be 'breaking'. That could only be true if enough
people out there wrote code that RELIES on this behavior. Who would
rely on this behavior? That's what I was saying above.

I totally agree with the MS response to this query:

- the behaviour you observe is as required by the .NET standard (see my
previous post).

- it *may* break existing code, though that is unlikely.

- The fact that you require or expect a Char[] to act as a string is a sign
that you are still in a "C" way of thought. In OOP, a string is an object in
itself, and a char array is just an array; it has nothing to do with
strings. The fact that in C a string is a char[] is a kludge that has no
reason to exist in OOP.
It sounds
like just an excuse NOT to change it since they don't think of it as
important enough.
I actually think it's a good thing *not* to make this change, since it forces
people to adapt to the new, better OOP paradigm, where a string is
represented by System::String and nothing else. You should forget your "C
guru" reflexes in .NET ;-)
Personally, I think of the ToString as a means to convert a variable
to a string equivalent of the *natural symbolic representation* of
the variable it's given.
Yes, but in OOP, the natural symbolic representation of an array of chars is
*not* a string, since an array of chars is *not* a string!
I don't think of the natural symbolic
representation of a char to be its ascii value.
There I agree with you: the choice for a simple Char is strange.

Arnaud
MVP - VC
 

Peter Oliphant

- The fact that you require or expect a Char[] to act as a string is a sign
that you are still in a "C" way of thought. In OOP, a string is an object in
itself, and a char array is just an array; it has nothing to do with
strings. The fact that in C a string is a char[] is a kludge that has no
reason to exist in OOP.

Maybe. I don't think I'm expecting Char[] to act like a string, but I believe
the natural and expected result of applying ToString to it would be a
concatenation of the characters in the array, in order, as one string.
Likewise, I don't think it's evidence that I think of an 'int' as a string
just because I feel that applying ToString() to an 'int' should return a
string reflecting the decimal value of the integer stored.

But, to the point. Can you think of a good justification as to why, no
matter what the contents of a Char[] are, that applying ToString to it
returns precisely this string and only this string every time:
"System.Char[]"? That is,

array<Char>^ char_array_1 = gcnew array<Char> { '0', '1', '2' } ;
array<Char>^ char_array_2 = gcnew array<Char> { 'A', 'B', 'C', 'D', 'E', 'F' } ;

assert( char_array_1->ToString() == "System.Char[]" ) ; // true
assert( char_array_2->ToString() == "System.Char[]" ) ; // true, in fact...
assert( char_array_1->ToString() == char_array_2->ToString() ) ; // true

Therefore, the only reason to apply ToString to a Char[] is if
you want to generate the CONSTANT string "System.Char[]". Wouldn't just
defining a constant string with this value be easier? What good is a
function that, no matter what you give it, always returns the same exact
value? Why even HAVE an input parameter? It's like a static function that
isn't static...

I feel this is broken and incorrect behavior. Your mileage may vary... ; )

[==P==]



 

Brandon Bray [MSFT]

Gerhard said:
If you make yourself familiar with the evolution of C++, you will find
that, like most language features, it is a carefully weighed design
decision and not a mistake at all.

This, unfortunately, will have to be a position on which we agree to disagree.
Many of the design decisions within C++ are based on historical precedent
and strict maintenance of a notion of backwards source compatibility. While
I understand many of the design decisions, I do consider many of them
mistakes in hindsight.

And while the flexibility afforded by allowing destructors to be non-public
can be convenient, it demonstrates a classic misuse in my mind... it would
be far more effective to introduce a real language feature that allowed the
desired behaviors. Because there is so much flexibility (and in my view,
misuse) of some features, it hampers the ability to do rigid analysis of the
program from both a human and automated perspective.
What has lifetime management got to do with type safety?

Everything. I speak of type safety from the "I can prove the program
mathematically obeys the separation of objects" perspective. Because
deleting an object frees memory, it allows the programmer to allocate another
object in the memory that the pointer still points to. That is what type
safety is meant to eradicate. Clearly, we all know that using a pointer after
the object to which it points is deleted is a programming error... but so is
a buffer overrun. The language is not type safe unless it can rigidly
prevent that.
There is a growing consensus that GC is basically unsuitable to clean up
scarce resources precisely because it is non-deterministic.

While, I'm mostly in agreement... fifty years ago, there was growing
consensus that GC was too expensive for memory. The state of the art GC
works very well for memory today. The best that is available for other
resources only does a 1-to-1 mapping to memory... which probably isn't the
most efficient way to manage scarce resources. There's still ample
research to be done in this area.

Again, you are mixing up two entirely different things. It is ordinary
*reference counting*, not deterministic cleanup, that is bad at cleaning
up cycles.

I'm going to state clearly that I am not mixing up these things... I spend a
lot of time on the design issues here, so you can at least note that I do
know something. I stand behind what I said. Shared resources with
deterministic cleanup have very few options, and reference counting is by
far the most commonly used. If there are others, they aren't well formalized
and certainly aren't tied to a language-level service.

In all respect, this is pure FUD. Fragile programs result from sloppy
design. Standard C++ has excellent support for reliable resource
management.

Unfortunately, I have to disagree again. Maintenance of applications that
grow to millions of lines of code is dramatically more expensive because
"techniques" are not good enough for automated proofs of program
correctness. And while Standard C++ has good support for certain ways to
reliably manage resources, it fails miserably in other areas. (Note that
.NET has similar issues -- it does incredibly well in some cases and fails
in others.)

I recognize that this discussion has few options other than to diverge into
overheated argument. I don't really have much more to say, but I do
appreciate your candor and your passion for C++.

Cheers!
 
B

Brandon Bray [MSFT]

Nemanja said:
Wow! So you think that concrete types like string or complex should
have virtual destructors? IMHO, that's against the spirit of C++. We
should pay only for what we use.

I don't think virtual destructors and pay-for-what-you-use are incompatible.
The context in which C++ is built today makes them antagonistic to each
other, but that's just a design decision.

For instance, value types in C++/CLI have virtual functions but incur no
overhead for calling them because the class is sealed. Introducing
whole-program analysis to C++ would also disentangle the two desires.
 
