Exception management question...

craig

Thanks for the detailed explanation. I printed it out so that I can study it.

Let me ask you this...

There is code in my app that follows this general pattern:

private void UpdateName(Guid ObjectID, string Name)
{
    object myObject = GetObject(ObjectID);

    if (myObject != null)
    {
        myObject.Name = Name;
    }
}



I see this as a problem, because it has the potential to swallow a logic
error in which an invalid ObjectID parameter is passed resulting in the
GetObject method returning null. Would it be better to rewrite this as:

private void UpdateName(Guid ObjectID, string Name)
{
    object myObject = GetObject(ObjectID);
    myObject.Name = Name;
}


This allows an exception to be thrown when the next line of code attempts to
set a property on a null object reference. Or would it be better to write
this as:

private void UpdateName(Guid ObjectID, string Name)
{
    object myObject = GetObject(ObjectID);

    if (myObject != null)
    {
        myObject.Name = Name;
    }
    else
    {
        throw new ArgumentException("ObjectID is invalid");
    }
}


Thanks again for your input!!
 

Rachel Suddeth

I'm not Dave, but I do have an opinion on this (judge for yourself whether it
makes sense...).

craig said:
There is code in my app that follows this general pattern:

private void UpdateName(Guid ObjectID, string Name)
{
    object myObject = GetObject(ObjectID);
    if (myObject != null)
    {
        myObject.Name = Name;
    }
}

I see this as a problem, because it has the potential to swallow a logic
error in which an invalid ObjectID parameter is passed resulting in the
GetObject method returning null. ...

Yes, there is this potential...
But sometimes in a world built around responding to events,
you could also have cases where a method could get called with null
parameters and you are just supposed to ignore the call if that happens.
Do you know if there was a reason that check was put in?

Now about the exception handling... you do not want to do the first
thing. Clearly UpdateName is in a better position to give useful
information in the exception string than GetObject. That is,
UpdateName can tell you the NAME of the object you were
trying to update! All GetObject knows is that somebody passed it
an invalid parameter. If you want GetObject to raise the exception, you
should catch it and throw again after adding context to it. However, the
latter approach is probably better... there is probably no need to
involve GetObject. You already know the problem is an invalid
ObjectID.
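
To illustrate, here is a minimal sketch of the catch-and-wrap version,
assuming GetObject throws an ArgumentException for an unknown ID (MyEntity
and GetObject are hypothetical stand-ins for your types, not framework
classes):

private void UpdateName(Guid ObjectID, string Name)
{
    MyEntity myObject;
    try
    {
        myObject = GetObject(ObjectID); // assumed to throw on an unknown ID
    }
    catch (ArgumentException ex)
    {
        // UpdateName has context GetObject lacks: add it, and keep the
        // original exception as InnerException.
        throw new ArgumentException(
            String.Format("Could not update name to '{0}': object {1} not found.",
                          Name, ObjectID),
            ex);
    }
    myObject.Name = Name;
}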


 

Jay B. Harlow [MVP - Outlook]

Craig,
I see this as a problem, because it has the potential to swallow a logic
error in which an invalid ObjectID parameter is passed resulting in the
GetObject method returning null.
I would expect GetObject itself to throw the exception based on the bad
parameter!


If GetObject can return "null objects", then I would consider having
GetObject return a Null Object.

A Null Object is an implementation of the Special Case Pattern.

http://martinfowler.com/eaaCatalog/specialCase.html
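
For example, a bare-bones sketch of a Null Object (Customer is a
hypothetical type, purely for illustration):

class Customer
{
    private string name = "";
    public virtual string Name
    {
        get { return name; }
        set { name = value; }
    }
    public virtual bool IsNull { get { return false; } }
}

class NullCustomer : Customer
{
    public override string Name
    {
        get { return ""; }
        set { } // silently ignore writes, or log them
    }
    public override bool IsNull { get { return true; } }
}

// GetObject would return a NullCustomer instead of null for an unknown ID,
// so callers such as UpdateName never need a null check.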

Hope this helps
Jay



craig said:
<<snip>>
 

David Levine

Rachel and Jay made some good points. Let me give my two (or more) cents...

I see this as a problem with several parts.

The first is whether or not the routine should perform parameter validation
before processing the values. If this is an internal routine, then other
routines at the class boundary should perform complete parameter validation
before handing values off to the internal routines. In general you should
centralize the validation logic for two reasons: one, it's a lot easier to
test and maintain and ensures consistency; and two, internal routines don't
have to burn CPU cycles performing the same checking. If an argument fails
the validation check then it is acceptable to throw an exception that
describes the cause of the failure.
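
For instance, a hypothetical sketch of that split (the names are made up
for illustration):

// Public boundary: validates everything once.
public void UpdateName(Guid objectId, string name)
{
    if (objectId == Guid.Empty)
        throw new ArgumentException("ObjectID must not be empty.", "objectId");
    if (name == null)
        throw new ArgumentNullException("name");
    UpdateNameCore(objectId, name);
}

// Internal routine: trusts that its arguments were already validated.
private void UpdateNameCore(Guid objectId, string name)
{
    // assumes GetObject (from your example) returns a non-null object here
    GetObject(objectId).Name = name;
}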

The second aspect of the problem is determining what constitutes a valid
value. In some cases null might be perfectly acceptable and in others it is
invalid. Generally speaking there is no single value that will represent an
invalid value for all classes and uses - it depends on what you are doing.
For the ObjectID there might be more than one value that represents an
invalid value; for example, null and an empty Guid might both be invalid.
There may be others, such as a predefined sentinel value that represents
some kind of special case, such as the last ObjectID in the system.

Another aspect is what does it mean to return a null object from GetObject?
Is this a valid case? For some systems it may be that if the objectID passes
parameter validation then it is a violation of some internal contract for
the call to GetObject to return a null object - in this case it should throw
an exception indicating an internal error. For other systems it may be
"normal" for a record to not be found - in this case you would not want to
throw an exception, instead there should be a code path that is correct for
that case.

So I don't have a single answer...it depends on the design and intent of the
system.

That being said, if you do want to validate parameters in this method I
would do so before calling GetObject, and if it fails, throw an exception
immediately. As I said earlier, because it is a private method I would
actually expect validation to occur somewhere else.

In the example you provide I would not let an exception be thrown just
because you attempted to set the property of a null reference; this will
generate a misleading exception message to the user (telling your user his
program crashed due to an access violation will not win many friends).
Again, if it is expected that some records will not be found then I would
make a null return object part of a normal code path. However, if this
represents a violation of an internal contract then it is not something I
would test for in the caller. Ideally this sort of failure should be
detected and dealt with in a different layer, probably in the GetObject
method itself.

Hope this helps,
Dave
 

craig

I can't thank all of you enough for taking the time to weigh in on this
issue. It blows my mind how much thought must go into what might at first
appear to be some very simple logic.

Rachel's point about methods being called with null parameters simply
because of the way events are firing is very relevant. This happens to me
all the time. I often find methods being called at seemingly indeterminate
times because of the way events are fired, which always raises the question:
does this indeterminate firing of events constitute a logic error which
needs to be sorted out and corrected, or should I just design the method to
swallow the null parameter case in order to accommodate this behavior
without failure??? I often wonder how other developers handle this issue.

I agree with your thoughts regarding the case in which an attempt to set a
property on a null reference throws an exception. The exception would not
be relevant to the calling routine.

Based on your responses, it sounds as though there are no clearly-defined
patterns. Different developers might handle this situation in different
ways. It is not easy to know which is the most robust. However, I am not
familiar with the concept of a common parameter validation routine for use
with private methods. Do you mean to define methods used specifically for
validating each parameter, which are called prior to calling the private
methods in which the parameters are used?
 

David Levine

craig said:
I can't thank all of you enough for taking the time to weigh in on this
issue. It blows my mind how much thought must go into what might at first
appear to be some very simple logic.

Even the simplest of things is not simple at all. When looking at my code I
usually take the attitude that every line of code is flawed or will be
flawed. That doesn't necessarily mean there's a bug lurking there (though
all too often there is), but as systems evolve and environments change, even
the most innocuous statements will be revealed as containing a weakness or
as something that can be improved. That doesn't mean you want to churn the
code base for minor improvements; it means to always think about what could
go wrong.

This is especially true for .NET code since it's possible for literally
almost every line of code to be capable of causing an exception.
Rachel's point about methods being called with null parameters simply
because of the way events are firing is very relevant. This happens to me
all the time. I often find methods being called at seemingly
indeterminate times because of the way events are fired, which always
raises the question: does this indeterminate firing of events constitute a
logic error which needs to be sorted out and corrected, or should I just
design the method to swallow the null parameter case in order to
accommodate this behavior without failure??? I often wonder how other
developers handle this issue.

Events are a little special in that they often carry arguments which are
not always required by the invoked handler, in which case one or more null
arguments can be ignored. However, they should never be fired
indeterminately (randomly?)... it may be that our understanding of them is
incomplete, but the behavior of the system should be predictable. There is
no magic, only a machine behind that curtain...
Based in your responses, it sounds as though there are no clearly-defined
patterns. Different developers might handle this situation in different
ways. It is not easy to know which is the most robust. However, I am not
familiar with the concept of a common parameter validation routine for use
with private methods. Do you mean to define methods used specifically for
validating each parameter which are called prior to calling the private
methods in which the parameters are used?
I am not aware of a single pattern that covers all the cases.
The basic idea is simple - validate the data once at the input point to the
system so that internal objects and routines can assume the data has already
been validated. Data from the outside world cannot be trusted to be valid (a
hacker may be prodding the system) but once inside it is considered to be
trusted. How you actually implement this depends on the complexity of the
validation logic - simple things should be done inline, complex things
should be separated out as much as is necessary.

You can put validation logic in each routine, and that's not necessarily a
bad thing, but often it results in code bloat, burns CPU cycles, and spreads
the validation logic all over the source code, making it difficult to
maintain. The advantage is that it makes your code more bulletproof, even
against internal errors. In a mission-critical system where robustness is
more important than shaving a few processing cycles, this might be a valid
approach.
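
One common middle ground, for what it's worth, is Debug.Assert from
System.Diagnostics: internal routines still check their inputs, but the
checks compile away in release builds, so production code pays nothing:

using System.Diagnostics;

private void UpdateNameCore(Guid objectId, string name)
{
    // Debug.Assert is marked [Conditional("DEBUG")], so these calls
    // exist only in debug builds.
    Debug.Assert(objectId != Guid.Empty, "UpdateNameCore: empty ObjectID");
    Debug.Assert(name != null, "UpdateNameCore: null Name");
    // ...real work...
}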
 

Rachel Suddeth

David Levine said:
. . . This is especially true for .NET code since it's possible for
literally almost every line of code to be capable of causing an exception.


Events are a little special . . . However, they should never be fired
indeterminately (randomly?)...it may our understanding of them is
incomplete but the behavior of the system should be predictable. There is
no magic, only a machine behind that curtain...

Sometimes I think that events are only not random in the same sense that
nothing is ever random. Everything behaves the way it does for some reason,
according to some set of rules (physical laws, whatever), but when a system
is so complex that the behavior of a particular event is unpredictable, then
it is considered random. I'm sure it is just my frustration at an
unproductive day, but doesn't it seem that the monstrous set of events that
comes built in with our .NET is approaching that level of complexity? I
wonder if it's really simpler and easier to learn than reading a
straightforward message loop...

But you are right, if we think about it well enough, I think we can always
solve our problems. If only our bosses wouldn't give us these darned
annoying deadlines that never give us enough time to think things through!!
. . .
The basic idea is simple - validate the data once at the input point to
the system so that internal objects and routines can assume the data has
already been validated. Data from the outside world cannot be trusted to
be valid (a hacker may be prodding the system) but once inside it is
considered to be trusted.

I cut my last message short in the hopes someone smarter than me would get
to what I was going to take way too long trying to say. Thanks for not
disappointing me :)
You can put validation logic in each routine, and that's not necessarily a
bad thing, but often it results in code bloat, . . .

and that is where exceptions come in. If you don't force validation in every
interior function, you could let exceptions go; they will help to catch
those times where you forgot to validate something you should have. Ugh, that
contradicts my previous response, doesn't it? You see why no one can agree on
how to do error handling? I can't even agree with myself. The key is to
find a system that will do these two things.
1) It will not send the user multiple messages about the same error.
2) When it does send messages to the user, they will mean something (that
is, they will not say "object not set to an instance." That not only means
nothing to the user, it will mean nothing to the programmer if the user
reports it. He won't know which of the 65,482 objects in his program wasn't
set to an instance, nor will he even know where it wasn't set because if I'm
not mistaken, the user doesn't get the stack trace in production code.)
 

David Levine

Rachel Suddeth said:
<sigh> and yet we're supposed to know at all times what exceptions can be
thrown...

I actually don't make the assumption that I know which *specific* exceptions
will be thrown, only that *an* exception can be thrown. I try to structure
the subsystems so that I know the path that exceptions will take as they
propagate through the system. There are situations where I will catch
specific exceptions and try to handle them, but I've found that there are
more exceptions I did not anticipate than there are that I can account for.
Sometimes I think that events are only not random in the same sense that
nothing is ever random. Everything behaves the way it does for some
reason, according to some set of rules (physical laws, whatever), but when
a system is so complex that the behavior of a particular event is
unpredictable, then it is considered random.

I quite agree. Chaos theory rules...
I'm sure it is just my frustration at an unproductive day, but doesn't it
seem that the monstrous set of events that comes built in with our .NET is
approaching that level of complexity? I wonder if it's really simpler and
easier to learn than reading a straightforward message loop...

I don't see this as a .NET problem but more as a system problem. The
combination of .NET, the COM/C++ libs it is built on, and the windowing
system below it, is incredibly complex. Even a straight message loop really
isn't so straight when you trace the passage of an event all the way from
the hardware device that originally generated the event to the trap handler
to the interrupt handler in the device driver, through the kernel, up to
user mode, up to the windows subsystem, etc.

and that is where exceptions come in. If you don't force validation in
every interior function, you could let exceptions go, they will help to
catch those times where you forget to validate something you should've.
Ugh, that contradicts my previous response doesn't it? You see why no one
can agree on how to do error handling? I can't even agree with myself.

And that's the rub. No matter how well written the code, there are always
unforeseen conditions that the code did not anticipate; foolproof code isn't
proof against all fools.

There's an inherent tension between the desire to centralize code and
eliminate duplication versus bullet-proofing every method that performs
sensitive operations. There isn't a single design that will work for all
systems - the requirements of the system should drive it, not some ivory
tower notion of the proper way of writing code. In mission-critical software
I've written a lot of self-defensive code to guard against internal errors,
and sure enough those errors happen. It might only catch a one-in-a-million
bug, and perhaps I'm too paranoid, but I'd rather err on the side of safety
than assume the best. I think this matters more the more that the code
interacts with outside systems.


The key is to find a system that will do these two things.
1) It will not send the user multiple messages about the same error.
2) When it does send messages to the user, they will mean something (that
is, they will not say "object not set to an instance." That not only means
nothing to the user, it will mean nothing to the programmer if the user
reports it.

I agree! The problem is not that there is no information but that the
information is either misleading or so incomplete that it is very difficult
to do much with. I really don't want to have to do a core dump, hook up
windbg and trace through system data structures just to determine that an
array index was off by 1!

In my exception management layer I wrote a PublishOnCreate method which
determines if the exception is being thrown for the first time or is being
wrapped and rethrown - it publishes it at the initial throw site and not at
the intermediate sites. Then at the final handler I publish it again - this
captures the initial exception and also the final disposition at the module
boundary, including all the context information that had been added. I do
this double-publish to ensure I have a record even if an exception is
accidentally swallowed and ignored.
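
Roughly, the shape of it (this is only a sketch - the real PublishOnCreate
has more to it, and the "first throw" test here is a simplification):

public static class ExceptionPublisher
{
    // Publish only at the initial throw site; a wrap-and-rethrow carries
    // the original as InnerException and is skipped.
    public static Exception PublishOnCreate(Exception ex)
    {
        if (ex.InnerException == null)
            Publish(ex);
        return ex;
    }

    public static void Publish(Exception ex)
    {
        Console.Error.WriteLine(ex); // stand-in for the real log/event sink
    }
}

// At a throw site:
//   throw ExceptionPublisher.PublishOnCreate(new ArgumentException("..."));
// The final handler at the module boundary calls Publish again, capturing
// both the initial exception and its final disposition.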

I haven't worked out my strategy yet for Whidbey. There are some new events
that you can subscribe to that get fired whenever an exception is thrown at
all - this seems like a promising avenue to use for ensuring that exceptions
do not get dropped or lost, but it also has the potential for swamping a
system.

He won't know which of the 65,482 objects in his program wasn't set to an
instance, nor will he even know where it wasn't set because if I'm not
mistaken, the user doesn't get the stack trace in production code.)

Hmmm, it may not be able to provide line numbers and source file names, but
a stack trace should always be available.
 

craig

David Levine said:
<<snip>>

This is my struggle as well. I realize that I can validate all of the
incoming data at one time, thereby allowing all private methods to assume
that the data is valid. The problem with this, however, is that potential
internal logic errors might still result in invalid parameters being passed
to private methods. Those logic errors might be easier to track down if all
of the private methods validate their parameters as well.
 

craig

I also thought I would mention the following...

One of my very favorite books on the .NET Framework is Jeffrey Richter's
"Applied Microsoft .NET Framework Programming." I just noticed that he has
an excellent chapter on exception management strategy which includes many of
the concepts that have been discussed here. I would highly recommend this
book. I hope that he is planning an update for the 2.0 framework.
 

Rachel Suddeth

David Levine said:
I don't see this as a .NET problem but more as a system problem...

Funny, I just thought that shortly after sending the previous message. Part
of the issue is that I'm new to Windows programming. Although I've read
Windows code (straight C versions of it), and done a couple of kindergarten
utilities in VB, I've only been doing real work in it for about 6 months.
Previously I worked [in tools and] with embedded systems programming on a
special operating system that was probably much simpler than Windows.
Although the message loops looked similar, I think one can get a much
greater variety of messages, and from more different sources, AND you're
expected to deal with it with less training - which is why they hide the
details, eh?
. . . In mission critical software I've written a lot of self-defensive
code to guard against internal errors, and sure enough those errors
happen. It might only catch a one-in-a-million bug, and perhaps I'm too
paranoid,

Probably not. If a mistake or oversight could bring down a drawbridge on top
of a boat, or shut down a power grid, or cause a missile to hit the wrong
target, or make your customer lose half a million on the stock market, then
you can't be too careful, and you expect most of your code to be error
handling. In a complex system, there are an almost infinite number of errors
that can happen, so the one-in-a-million things occasionally do.

Personally, I am enjoying working in an environment where poorly handled
errors don't cause any actual disasters :) [Although from talking to some
of our clients, you would think wasting a ream of paper was a disaster...]
. . .
I haven't worked out my strategy yet for Whidbey.

For what? Oh dear, I'm missing things again. Out of the loop as usual...
Hmmm, it may not be able to provide line numbers and source file names,
but a stack trace should always be available.

Well, that's good news. I guess I'll just have to figure out how to look at
it - as I said I'm new at this. If one can have the function names in the
order they were called, I think one can always figure out what happened.
 

David Levine

Funny, I just thought that shortly after sending the previous message.
<<snip>>

The details of what's going on behind the scenes are hideously complex and
full of special cases. One big problem is that MS tries to be as backward
compatible as possible, and this means being compatible with some or all of
the stupid programmer tricks that were done 15 years ago. Some things are
done to be compatible with some original DOS code, early Win 2.x and 3.x
code, etc. Nasty stuff. It's gotten a lot better but there's a lot of
baggage there. There are still people using DDE (Dynamic Data Exchange,
perhaps the worst communications protocol ever invented).

There are good reasons why they are promoting .NET - it's much easier to
learn to write good .NET code than it is to learn to write good Windows
code, either raw Windows or using COM, MFC, or some other framework. It's
more consistent, both internally and externally, and you don't have to be
aware of the same amount of detail. The amount of background "noise" that
must be in your head at all times when writing .NET code is a lot less than
the amount of noise it takes to write vanilla Windows code.

The good news is that the need to know what's going on in Windows behind the
scenes is decreasing, but it is still greater than zero. It would take
years and years to learn to write C Windows programs (I've done it), and
there really is no need to do that.
For what? Oh dear, I'm missing things again. Out of the loop as usual...

Whidbey is what MSFT is calling the next release of .NET, version 2.0. It's
got lots of new features.
Well, that's good news. I guess I'll just have to figure out how to look
at it - as I said I'm new at this. If one can have the function names in
the order they were called, I think one can always figure out what
happened.

If only it were that easy :)
 

Rachel Suddeth

David Levine said:
The details of what's going on behind the scenes are hideously complex
and full of special cases.

Aren't they always? I remember a few years ago reading articles about how
skilled programmers would no longer be needed - there would be tools that
almost anyone could use to get their programming needs met. What nonsense.
The more people can do for themselves, the more they expect from us, and our
jobs never get much easier (good thing.)
. . .
There are good reasons why they are promoting .NET - it's much easier to
learn to write good .NET code than it is to learn to write good Windows
code, either raw Windows or using COM, MFC, or some other framework...

That I do believe. I really like C#, and most days I'm happy to work with
it. While I felt like I understood more by reading C windows code, it was
3.x (simpler back then I'm sure), and it's always easier to read than to
write. And even given that, my few attempts at reading MFC code left me
hopelessly confused ... I sincerely hope never to have to write any of it
:)
If only it were that easy :)
Sh... don't tell me that. "Always" is probably too big a word for a newbie,
but in 6 years of programming, I've never had a problem I couldn't figure
out when I had a stack trace. And I have worked with some pretty complex
code (multi-process, of the infamous million+ line-of-code variety). Still,
if I tell you it could never happen it surely will tomorrow, so I guess I'd
better keep my mouth shut :cool:
 

craig

I realize that there hasn't been any activity in this thread for a while. However, after making some modifications to my application based upon input that I received in this thread, I have noticed an interesting exception management-related behavior that I thought I would describe in order to try to get some of your thoughts.

Based upon the article in the June 2004 issue of MSDN magazine by Jason Clark on exception management, http://msdn.microsoft.com/msdnmag/issues/04/06/NET/, I added the following event handlers to my application's Main() method:

1. AppDomain.CurrentDomain.UnhandledException (for CLR unhandled exceptions)
2. Application.ThreadException (for windows forms unhandled exceptions)
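
For reference, the registration in Main() looks like this (the handler
names are just placeholders of mine):

using System.Threading;
using System.Windows.Forms;

static void Main()
{
    AppDomain.CurrentDomain.UnhandledException +=
        new UnhandledExceptionEventHandler(OnUnhandledException);
    Application.ThreadException +=
        new ThreadExceptionEventHandler(OnThreadException);
    Application.Run(new FormA());
}

static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    // e.ExceptionObject holds the exception (typed as object)
}

static void OnThreadException(object sender, ThreadExceptionEventArgs e)
{
    // e.Exception holds the Windows Forms exception
}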

Since I have done this, however, I have noticed the following behavior: when exceptions are thrown from one form but handled from a different form, they end up being handled by the Application.ThreadException event handler rather than by the intended exception handler on the other form. For example:

Consider that FormA attempts to launch FormB using the following code pattern, with a sample exception handler:

private void LaunchFormB()
{
    FormB formB = new FormB();
    try
    {
        formB.Show();
    }
    catch (SecurityException)
    {
        MessageBox.Show("The authenticated user does not have permission to view formB");
    }
}

Now, with the global exception handlers in place, if a security exception is thrown from within the FormB_Load() event handler, it is handled within the Application.ThreadException event handler rather than by the catch block above. This causes the application to shut down, which is not the desired behavior.

Does this mean that when the global exception handler, Application.ThreadException, is in effect, no form can raise an exception that is handled by a different form??? Is the code above considered bad design for this reason???

Thanks for any thoughts!!!
 

David Levine

I think the reason is a little different. I believe (though I have no direct proof of this other than observations I have made) that the Windows Forms Form class implements its own exception filter and handler around the entire form.

When the exception filter sees an exception, it means that the application did not catch it, so in terms of the form, it is unhandled. It then checks to see if anyone has subscribed to the Application.ThreadException event, and if so, it catches the exception and fires the event. If not, it allows the exception to continue up the call stack. I believe you are seeing these behaviors because of this, and it makes a difference if you subscribe to ThreadException prior to creating FormA versus subscribing to the event from within FormA.

When I register Application.ThreadException and AppDomain.UnhandledException in the application's Main, I get similar (but not identical) behavior to what you describe. However, rather than using static methods that are registered in your application's Main routine, try making them instance methods of FormA. When I did that, the catch handler in FormA always got the exception and neither UE handler ever saw it.

IMO the current implementation of Application.ThreadException ought to be revised.

<<snip>>
 

craig

Thanks for the info, David. Sounds like you have spent a lot of time tracking exception behavior relative to Windows Forms.

In your apps, do you often raise exceptions on one form, and then handle them on a different form?
<<snip>>
 

Jay B. Harlow [MVP - Outlook]

Craig,
The Application.ThreadException event is raised as part of the Application.Run
method. Application.Run is the "message pump" that processes all of the
Win32 windows messages, "converting" them into their respective
Windows Forms events.

I suspect Form.Load is the result of receiving the WM_CREATE Win32 windows
message.

In other words: Form.Show causes a Win32 window to be created which causes
the window to receive the WM_CREATE message, which gets "converted" into the
Form Load event. Seeing as the Application.Run method is dispatching the
WM_CREATE message, the exception in the Form Load event causes the
Application.ThreadException event.

Looking at
http://msdn.microsoft.com/netframew...ull=/library/en-us/dndotnet/html/win32map.asp &
http://www.pinvoke.net/

Form.Show effectively does a Win32 ShowWindow.

Hope this helps
Jay


<<snip>>
 

David Levine

Thanks for the info, David. Sounds like you have spent a lot of time tracking exception behavior relative to Windows Forms.

In your apps, do you often raise exceptions on one form, and then handle them on a different form?

I generally don't allow exceptions to escape from one form to be handled by another, but there's no reason why that wouldn't work. As I said, make sure you register ThreadException as an instance method of your first form.

If you wrap the FormB dialog in a try-catch and only register AppDomain.UnhandledException, I believe the problem you are having will go away - you won't get a UE and the exceptions will all be caught in the try-catch you have in FormA.
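
Something along these lines (a sketch only; the handler body is up to you):

using System.Threading;
using System.Windows.Forms;

public class FormA : Form
{
    public FormA()
    {
        // Subscribe from an instance method, after the form exists,
        // rather than from a static handler registered in Main.
        Application.ThreadException +=
            new ThreadExceptionEventHandler(this.OnThreadException);
    }

    private void OnThreadException(object sender, ThreadExceptionEventArgs e)
    {
        MessageBox.Show("Unhandled: " + e.Exception.Message);
    }
}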


<<snip>>
 

David Levine

I believe he is having this problem because he subscribed to
ThreadException from a static method (Main) and not from an instance method
in FormA. At the time he subscribed there was no window at all in the
system, so perhaps the Forms class is getting confused about where to
deliver the event. IOW, if there was no window or pump running at the
time the event was subscribed to, it may get confused - it may be
expecting an instance and there is none associated with the event.
 
