Exception Handling dissection


Bruno Jouhier [MVP]

Alvin Bruney said:
I catch your drift about "bad" good ideas, but you can't fault the language
for programmer misuse. That's bound to happen anyway. It doesn't/shouldn't
detract from the value, though, IMO.

I don't agree. Of course, no language will prevent a bad programmer from
writing bad code, but the language designers should make their best efforts
to encourage good programming practices and make it harder for people to
write bad code (see the old debates about goto). And, IMO, "checked
exceptions" encourage bad programming practices like using exceptions as a
flow control mechanism in situations where the flow should be expressed
through if/then/else tests (see my other post).

Bruno

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
Bruno Jouhier said:
Alvin Bruney said:
I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.
That's not true at all. The hard and fast rule is: throw an exception
when an assumption is violated.
That's it. That's all you need to know and do.

C++ used to make it easy to explicitly publish implicit
assumptions through a method's signature. I'm not sure why C# did not adopt
this approach - it would make things a lot easier. The absence of
explicitly published assumptions is one reason for confusion. Notice how
that confusion is absent in well-written C++.

Java has this "feature" too; it is called "checked exceptions". When I
started to use Java, in '96, I thought that it was a good idea because it
seemed to enforce stronger compile-time verification, but after
struggling a lot with them, I came to the conclusion that checked
exceptions are a "bad" good idea and that they do a lot more harm than
good, for many reasons, the main one being that they encourage the
programmer to catch exceptions locally instead of letting them bubble up
to a generic catch handler. The end result is code that is polluted with
catch clauses all over the place, and usually very poor exception
handling in the end.

So, "checked exceptions" are a bad thing, and actually, if you analyze the
Java libraries, you will see that all the early ones (the JDK of course)
made extensive use of them, and that the more recent ones tend to reject
them. And some Java gurus advocate against them (see
http://www.mindview.net/Etc/Discussions/CheckedExceptions from Bruce
Eckel, the author of "Thinking in Java").

So, the C# designers made the right choice here.

Bruno
There are good reasons for not throwing exceptions when they're not
suitable, in terms of readability and code flow, but performance rarely
comes into it in my experience.

This is really one area that could use some clear guidelines. I've seen
literature make statements like "exceptions are a performance hit" and
"they should be reserved for exceptional situations", but nothing really
outlines how they should really be used.

I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.

For example: in an n-layered architecture, somewhere in the BL, certain
validations will be performed based on various business rules, and
operations should fail when prerequisites are not met. Are these
exceptional situations, since we are clearly expecting these conditions
to occur? And how do we indicate these to the presentation? Return
values (as used in many samples) just seem to be such an antiquated
method. In the end I went with the exception route, contrary to what
many of the guidelines seem to suggest.

Indeed, I probably err more towards exceptions than away from them too.
They're so much easier than checking return values everywhere to abort
an operation simply :)

For me, it comes down to what makes it clearest to everyone what's
going on. There are times when exceptions clearly *aren't* appropriate
(such as terminating the iteration of a collection by keeping going
until an exception is thrown, when it's perfectly easy to avoid that in
the first place) but there are lots of times when they're the natural
solution but people avoid them because they've been told that
exceptions are hugely expensive.

That said, where appropriate it's good to have a validation method
which can validate parameters without attempting to actually perform an
operation - and the operation itself can throw the exception if it's
still given invalid parameters etc.
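The validate-then-operate split suggested above could be sketched as follows; the Account, CanWithdraw and Withdraw names are invented for illustration, not taken from any real API:

```csharp
using System;

public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal balance) { Balance = balance; }

    // Cheap check the caller can run before attempting the operation.
    public bool CanWithdraw(decimal amount) => amount > 0 && amount <= Balance;

    // The operation still defends itself: it throws if it is handed
    // arguments that the caller should have validated first.
    public void Withdraw(decimal amount)
    {
        if (!CanWithdraw(amount))
            throw new ArgumentOutOfRangeException(nameof(amount));
        Balance -= amount;
    }
}
```

A caller that checks CanWithdraw first never pays for an exception; a caller that skips the check still gets a loud failure instead of silent corruption.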
 

David Levine

Bruno Jouhier said:
This is the central issue, and I think that the right way to go is to have
"rich" APIs, to make a clear distinction between "special cases" and
"exceptions" and to deal with special cases through return codes or
sentinel values rather than through EH mechanisms.

The basic idea is to have pairs of entry points like:

int Parse(string); // throws exception if string does not represent an int
bool TryParse(string, ref int); // returns false if string does not
represent an int

FileStream OpenFile(string name) // never returns null, throw exception if
file does not exist
FileStream TryOpenFile(string name) // returns null if file does not exist
(but still throws exception if file exists but cannot be opened)

Then, depending on the context, you call one or the other:

1) If you are in a situation where the "exception" must be dealt with
"locally", i.e. where you would put a try/catch around the call to catch a
FileNotFoundException, then you use the "Try" form and you don't catch any
exception locally.

2) Otherwise, you use the non-Try form and you let the exception bubble
up.
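For what it's worth, the Parse/TryParse pair sketched above matches the pattern that int.TryParse in .NET 2.0 follows (with an out parameter rather than ref). The two call styles might look like:

```csharp
using System;

// Case 1: the input may legitimately be malformed (e.g. user input), so
// use the Try form and express the special case with an if/else.
string userInput = "not a number";
if (int.TryParse(userInput, out int value))
    Console.WriteLine("Got " + value);
else
    Console.WriteLine("Please enter a valid integer");

// Case 2: the input is well-formed "by design" (written by our own code),
// so call Parse and let any exception bubble up as evidence of a bug.
string serialized = "42";
int restored = int.Parse(serialized);
Console.WriteLine("Restored " + restored);
```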

I think there are times where this approach works well and is the one I
would take, so partially agree.

But I think that this really only works well in the small, and it does not
scale well, either into large projects with large development teams, or just
with components in general. It essentially requires that all APIs would
double in size, one for each variant. It also means that you have
effectively moved flow control from the caller to the callee. One could
argue that this is still an improvement, but I'm not sure I agree.

It means that the surface area of your API just doubled, needs documentation
and testing, etc. A supplier of a library/component could never be sure
which APIs needed this doubling, because typically you don't always know how
someone else will use it, so you would have to double almost all APIs just
to be sure that you caught all the cases.

Another side effect is that the number of combinations of calls into your
API just increased exponentially, and issues related to the
coherency/consistency between the different code paths need to be addressed.
For example, if there are two methods, each with a throw/non-throw variant,
then there are 4 combinations of calls that can be made...

{ Method1_Throws(); Method2_Throws(); }
{ Method1_Throws(); Method2_NonThrows(); }
{ Method1_NonThrows(); Method2_Throws(); }
{ Method1_NonThrows(); Method2_NonThrows(); }

And each combination needs to be tested to ensure that invariants are
maintained, fields are getting set/reset correctly, caches are correct, etc.

It also does not address the problems developers face with 3rd party
libraries that do not supply the either/or API - a wrapper for each API
would need to be written that wrapped the one that threw the exception and
returned a sentinel value (or vice-versa), otherwise the try-catch flow
control code goes back into the main body of code.

If you are in case 2, it means that the exception is really an exception,
i.e. something caused by an "abnormal" context of execution, and you do
not have any "local" action to take to deal with it. For example, if you
are parsing a stream that has been formatted by another program according
to a well defined protocol that ensures that ints are correctly formatted,
you should use Parse rather than TryParse. Also, if you are trying to open
a vital file that has been set up by your installation program, or if you
are trying to open a file that another method has created just before, you
should use OpenFile rather than TryOpenFile.

I agree with the intent but the problem is that in any decent sized project
there are thousands of decision points like the ones you describe and many
of them are a lot less obvious. It is not at all obvious when one should use
versus the other.

* You reduce the amount of EH code. You get rid of all the local
try/catch, and you only need a few try/catchall in "strategic" places of
your application, where you are going to log the error, alert the user,
and continue. With this scheme, exceptions are handled in a uniform way
(you don't need to discriminate amongst exception types) and the EH code
becomes very simple (only 2 basic patterns for try/catch) and very robust.
Agreed.


* You clearly separate the "application" logic from the "exception
handling" logic. All the "special cases" that your application must deal
with are handled via "normal" constructs (return values, if/then/else),
and all the "exceptional" cases are handled in a uniform way and go
through the try/catchall constructs that you have put in strategic places.

Again, I think this moves the flow control more than it eliminates it.

You can review the application logic without having to analyze complex
try/catch constructs spread throughout the code. You can also more easily
review the EH and verify that all exceptions will be properly logged and
that the user will know about them if he needs to, without having to go
into the details of the application logic.

Agreed, with the proviso that I disagree with the notion that one should
never catch an exception that cannot be programmatically recovered from. I
believe there are many cases where it is beneficial to catch-wrap-throw as a
means of adding context information to the exception. The reason is that one
of the primary beneficiaries is the end-user. The user does not get any
benefit whatsoever from a message that says "Null reference exception"...it
might as well say "I fell down and can't get up.". What should the user do
to fix it and continue? Adding context will provide a more meaningful
message and ideally will aid the user in determining what to do to fix,
workaround, etc. the problem so they can get their work done. So my
recommendation is to use try-catch statements at strategic points as a means
of adding context, not as a program flow device.

I don't believe this will unduly affect performance because the code is
already in the exception path, so the additional overhead will likely not be
noticeable.
* It enforces clear "contracts" on your methods: OpenFile and TryOpenFile
do basically the same thing but they have different contracts and choosing
one or the other "means" something: if you read a piece of code and see a
call to TryOpenFile, you know that there is no guarantee "by design" that
the file will be there; on the other hand, if you see a call to OpenFile,
you know that the file should be there "by design" (it was created by
another method before, or it is a vital file created by the installation
program, etc.). Of course, the fact that the file should be there "by
design" does not mean that it will always be there, but from your
standpoint, the fact that it would not be there is just as exceptional as
it being corrupt or the disk having bad sectors, and the best thing your
program can do in this case is log the error with as much information as
possible and tell the user that something abnormal happened.

* You will get optimal performance because the exception channel will only
be used in exceptional situations (and the cost of logging the exception
will probably outweigh the cost of catching it anyway).

It is not necessary to log at each try-catch handler. I recommend logging at
the initial catch site and again (if necessary) if the exception is about to
leave the module boundary.
 

Cor Ligthert

Bruno,

I agree completely with you; however, would it not be nice if we had some
classes to do some of the checking work?

In my opinion there is a lack of those. If you want samples:

Checking whether a string represents a properly typed value.
Checking whether a string represents a properly typed date.

Now we all have to do that ourselves, even though it is a general problem.

Just my thought,

Cor
 

Bruno Jouhier [MVP]

David Levine said:
I think there are times where this approach works well and is the one I
would take, so partially agree.

But I think that this really only works well in the small, and it does not
scale well, either into large projects with large development teams, or
just with components in general. It essentially requires that all APIs
would double in size, one for each variant. It also means that you have
effectively moved flow control from the caller to the callee. One could
argue that this is still an improvement, but I'm not sure I agree.

It means that the surface area of your API just doubled, needs
documentation and testing, etc. A supplier of a library/component could
never be sure which APIs needed this doubling, because typically you don't
always know how someone else will use it, so you would have to double
almost all APIs just to be sure that you caught all the cases.

Well, I have been using this methodology on a fairly large project (6 years,
more than 10 developers involved, multi-tier application, multi-threaded
kernel, probably over a million lines of code at this stage) and it does
scale up!

It does not mean doubling every API, there are actually just a few calls
that need to be doubled, mostly calls that parse strings, lookup things by
name, open files, load resources by name, etc. This is a very small fraction
of the APIs, and the overhead of doubling these calls is really not a
problem, especially if you have a good naming convention (we use Find for
the methods that throw and Lookup for the methods that return null). The
rest of the API only comes in one flavor.

Also, you don't always need to duplicate the entry points. Sometimes, it is
better to pass an extra parameter to indicate whether errors should be
signaled through exception or whether they should be returned through some
kind of error object. For example, most of our parsing routines take an
"errors" argument. If you pass null, the parser will throw exceptions and
will always return a valid parse tree. If you pass an errors object, the
parser will collect the errors into it, and will return null if the parsing
fails. This is a typical example of clever API design, that gives us the two
flavors in one call without adding much complexity to the API (I did not
invent it, there are plenty of examples in the LISP APIs of emacs).
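A minimal sketch of the "errors argument" technique Bruno describes, with invented names (ErrorBag, Parsing.ParseNumber): passing null selects throwing mode, passing a bag selects collecting mode.

```csharp
using System;
using System.Collections.Generic;

public class ErrorBag
{
    public List<string> Messages { get; } = new List<string>();
    public void Add(string message) => Messages.Add(message);
}

public static class Parsing
{
    // Null errors object => throwing mode; non-null => collecting mode.
    public static int? ParseNumber(string text, ErrorBag errors)
    {
        if (int.TryParse(text, out int value))
            return value;

        string message = "'" + text + "' is not a valid number";
        if (errors == null)
            throw new FormatException(message);
        errors.Add(message);
        return null;
    }
}
```

An authoring tool would pass a bag and display its contents; a non-interactive caller would pass null and let the exception bubble up.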
Another side effect is that the number of combinations of calls into your
API just increased exponentially, and issues related to the
coherency/consistency between the different code paths need to be
addressed. For example, if there are two methods, each with a
throw/non-throw variant, then there are 4 combinations of calls that can
be made...

{ Method1_Throws(); Method2_Throws(); }
{ Method1_Throws(); Method2_NonThrows(); }
{ Method1_NonThrows(); Method2_Throws(); }
{ Method1_NonThrows(); Method2_NonThrows(); }

And each combination needs to be tested to ensure that invariants are
maintained, fields are getting set/reset correctly, caches are correct,
etc.

It also does not address the problems developers face with 3rd party
libraries that do not supply the either/or API - a wrapper for each API
would need to be written that wrapped the one that threw the exception and
returned a sentinel value (or vice-versa), otherwise the try-catch flow
control code goes back into the main body of code.

Yes, this is a problem, and we set up such wrappers for calls that are used
in many places in our code (fortunately, this does not happen very often).
I agree with the intent but the problem is that in any decent sized
project there are thousands of decision points like the ones you describe
and many of them are a lot less obvious. It is not at all obvious when one
should use versus the other.

In 95% of the cases, there is not much you can do "locally" about the
special case/exception (your functional analysis should tell you that). So,
the right thing to do is to call the "non-Try" version and let the exception
bubble up. In the remaining 5%, you know that you have to deal with a
special case (your functional analysis should tell you that) and you call
the "Try" version. So, this is actually rather straightforward, and the
programmers who joined our team and did not use this methodology before
adjusted rather quickly.

If you don't know which call to use, it means that you don't have a good
functional analysis and that you don't know which cases your program should
handle and where it should handle them, and which ones it should not handle
and only try to recover from.
Again, I think this moves the flow control more than it eliminates it.


Agreed, with the proviso that I disagree with the notion that one should
never catch an exception that cannot be programmatically recovered from. I
believe there are many cases where it is beneficial to catch-wrap-throw as
a means of adding context information to the exception. The reason is that
one of the primary beneficiaries is the end-user. The user does not get
any benefit whatsoever from a message that says "Null reference
exception"...it might as well say "I fell down and can't get up.".

Yes, this is actually one of the two try/catch patterns that we use:

try { ... }
catch (Exception ex) { throw new MyException("higher level message", ex); }
What should the user do to fix it and continue? Adding context will
provide a more meaningful message and ideally will aid the user in
determining what to do to fix, workaround, etc. the problem so they can
get their work done. So my recommendation is to use try-catch statements
at strategic points as a means of adding context, not as a program flow
device.
Yes.


I don't believe this will unduly affect performance because the code is
already in the exception path, so the additional overhead will likely not
be noticeable.
Yes.


It is not necessary to log at each try-catch handler. I recommend logging
at the initial catch site and again (if necessary) if the exception is
about to leave the module boundary.

No, we only log when we don't rethrow, this way you know that every
exception will be logged and logged only once. Our second try/catch pattern
is:

try { ... }
catch (Exception ex) { LogAndAlertUser(ex); GetReadyToContinue(); }

Notes: in both patterns, we catch "all exceptions". So, we are always
violating the rule that says that you should only catch "specific"
exceptions (this is one of FxCop's rules). This rule is stupid because it is
an encouragement to use exceptions as flow control in application logic. If
you don't use exceptions as flow control for special cases that should be
tested by your application logic, they should all bubble up the same way
(get wrapped with a higher level message, and then logged).
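Put together, the two patterns might look like the following minimal sketch; MyException, LogAndAlertUser and GetReadyToContinue are the thread's placeholder names, stubbed here for illustration:

```csharp
using System;

public class MyException : Exception
{
    public MyException(string message, Exception inner) : base(message, inner) { }
}

public static class Patterns
{
    // Pattern 1: wrap-and-rethrow, adding higher-level context on the way up.
    public static void Middle()
    {
        try { LowLevelWork(); }
        catch (Exception ex)
        {
            throw new MyException("could not complete the import step", ex);
        }
    }

    // Pattern 2: catch-all at a strategic point; log once, recover, continue.
    public static void Top()
    {
        try { Middle(); }
        catch (Exception ex)
        {
            LogAndAlertUser(ex);
            GetReadyToContinue();
        }
    }

    // Stubs standing in for real application code.
    static void LowLevelWork() => throw new FormatException("bad input");
    static void LogAndAlertUser(Exception ex) => Console.WriteLine(ex.Message);
    static void GetReadyToContinue() { }
}
```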

There is nevertheless one case where we log and rethrow, this is when we
design an API that someone else will be using, and when we know that this
someone else is not very rigorous about logging exceptions. In this case, we
do our own logging in the entry points of our component and we rethrow the
exception (so that the someone else still gets an exception). But this is
just so that we don't lose the information if the client of our component
does not follow the rules (does not log every exception that he gets from
our component).

Bruno.
 

Bruno Jouhier [MVP]

Cor Ligthert said:
Bruno,

I agree completely with you; however, would it not be nice if we had some
classes to do some of the checking work?

In my opinion there is a lack of those. If you want samples:

Checking whether a string represents a properly typed value.
Checking whether a string represents a properly typed date.

Now we all have to do that ourselves, even though it is a general problem.

Yes. In our framework, we have one helper class for every basic type. These
helper classes contain handy methods that we don't find in the .NET
framework, and they include methods to verify the validity of strings, to
find the end of the formatted value in a larger string, etc.

Of course, it would be nicer if these methods were included in the .NET
framework.
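The kind of helper class Cor asks for can largely be layered on top of the TryParse methods that .NET does provide; StringCheck below is a hypothetical sketch, not a framework class:

```csharp
using System;
using System.Globalization;

// Hypothetical helper: validity checks for strings, built on TryParse.
public static class StringCheck
{
    // True if the string represents a properly typed integer value.
    public static bool IsInt(string s) => int.TryParse(s, out _);

    // True if the string represents a date (invariant culture, to keep
    // the check independent of machine settings).
    public static bool IsDate(string s) =>
        DateTime.TryParse(s, CultureInfo.InvariantCulture,
                          DateTimeStyles.None, out _);
}
```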
 

Alvin Bruney [MVP - ASP.NET]

Yes, that's what I am talking about. Why can't C# have that? I miss that.
It's painful, especially when using code tucked away in a library that I
didn't write. So now I have to turn around and catch all exceptions - and
that's not elegant at all.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

Helge Jensen

Well, that's the whole point. It's poor design to catch any and everything in
the first place - only a few situations warrant that kind of practice, by the way.

Yes, I agree here.

But I don't see why functions should limit the errors that the caller is
allowed to know about.
But in general, handling the exception indicates the callers intent to take
action for the issue. All other exceptions should be left to bubble up. So
yes, unexpected should be left to bring down the house. quite rightly.

You are "robbing" the caller of the ability to regain his control-flow,
which he has been kind enough to pass to your code, he may need it back
-- even if he behaved badly. For one thing the caller has no chance to
run cleanup-code (except for the global code registered in unexpected).

While you may feel the right to "punish" the caller for passing you
invalid input or whatnot, you should use the standard way to inform him
of that: a precise error-description in an exception.

If the user doesn't do proper exception-handling, and just ignores the
exception or something like that there really isn't much you can do for
him, and certainly he won't learn anything by having his application
close on him through unexpected().

Any component exhibiting the behaviour you describe would be close to
useless to me; I would never dare invoke its functions.
I disagree strongly. You will need to justify your position.

Decoupling removes knowledge of implementation. If you don't know the
implementations you can't even come up with a proper *guess* of the
possible exceptions thrown by implementations.

In Java, this results in interfaces without throws declarations, or with
"throws Exception", which are both bad:

- no throws declaration means implementations have to catch checked
exceptions and, in friendly code, translate and rethrow them as unchecked
ones, even though the implementation has *no* valid error handling to
actually do and would just like to pass the original exception on if at
all possible.

- throws Exception doesn't add any knowledge, so what have you gained?

The last option is to hazard a guess as to which checked exceptions
implementers might throw. What happens then is that you find yourself
extending the list of thrown exceptions until you end up with "throws
Exception". If you refuse to extend the "throws" clause when required, you
end up either preventing some implementations or, if the coder rethrows
in an unchecked exception, having a mix of exceptions: some wrapped, some not.

In C++, the problem is the same; you just don't see it until runtime.
Further, wrapping often isn't even a possibility in C++ due to
memory-management.

I can see how the C++ "throw()" specification helps the compiler, and
possibly the user, on destructors, but in other places it really doesn't
help any more than /* @throws .... */, and is often worse.
 

David Levine

You are "robbing" the caller of the ability to regain his control-flow,
which he has been kind enough to pass to your code, he may need it back --
even if he behaved badly. For one thing the caller has no chance to run
cleanup-code (except for the global code registered in unexpected).

This is not accurate. The caller can use try-finally constructs to ensure
that cleanup code is run.
While you may feel the right to "punish" the caller for passing you
invalid input or whatnot, you should use the standard way to inform him of
that: a precise error-description in an exception.

If the user doesn't do proper exception-handling, and just ignores the
exception or something like that there really isn't much you can do for
him, and certainly he won't learn anything by having his application close
on him through unexpected().

Again, proper use of try-finally constructs should ensure proper cleanup for
the typical case.
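David's point can be illustrated with a minimal sketch; Cleanup.ProcessFile is a hypothetical method, and the cleanup in finally runs whether or not the work throws:

```csharp
using System;
using System.IO;

public static class Cleanup
{
    // try/finally guarantees cleanup on both the normal and the error
    // path, with no catch clause required.
    public static void ProcessFile(string path, Action<FileStream> work)
    {
        FileStream stream = null;
        try
        {
            stream = File.OpenRead(path);   // may throw FileNotFoundException
            work(stream);                    // may throw anything
        }
        finally
        {
            stream?.Dispose();               // runs even while an exception propagates
        }
    }
}
```

The caller keeps its cleanup guarantee even when an exception it never anticipated flies past.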
 

David Levine

Well, I have been using this methodology on a fairly large project (6
years, more than 10 developers involved, multi-tier application,
multi-threaded kernel, probably over a million lines of code at this
stage) and it does scale up!

Well, I must admit that is a good argument. Does it require a lot of
developer discipline, or does it tend to be self-regulating?
It does not mean doubling every API, there are actually just a few calls
that need to be doubled, mostly calls that parse strings, lookup things by
name, open files, load resources by name, etc. This is a very small
fraction of the APIs, and the overhead of doubling these calls is really
not a problem, especially if you have a good naming convention (we use
Find for the methods that throw and Lookup for the methods that return
null). The rest of the API only comes in one flavor.

What other conventions have you adopted to support this?
Also, you don't always need to duplicate the entry points. Sometimes, it
is better to pass an extra parameter to indicate whether errors should be
signaled through exception or whether they should be returned through some
kind of error object. For example, most of our parsing routines take an
"errors" argument. If you pass null, the parser will throw exceptions and
will always return a valid parse tree. If you pass an errors object, the
parser will collect the errors into it, and will return null if the
parsing fails. This is a typical example of clever API design, that gives
us the two flavors in one call without adding much complexity to the API
(I did not invent it, there are plenty of examples in the LISP APIs of
emacs).

I've seen other APIs that take a "throwOnError" argument but I am not fond
of it (yet). I prefer a single path through the code, not two. Have you ever
encountered problems related to this?
Yes, this is a problem, and we set up such wrappers for calls that are used
in many places in our code (fortunately, this does not happen very often).
In 95% of the cases, there is not much you can do "locally" about the
special case/exception (your functional analysis should tell you that).
So, the right thing to do is to call the "non-Try" version and let the
exception bubble up. In the remaining 5%, you know that you have to deal
with a special case (your functional analysis should tell you that) and
you call the "Try" version.

What kind of functional analysis are you referring to? Perhaps you analyze
things a bit differently than what I am accustomed to.
No, we only log when we don't rethrow, this way you know that every
exception will be logged and logged only once.


I tend to disagree but perhaps for practical reasons that probably don't
apply to most situations today. When we first transitioned from C/C++ Win32
to managed code no one really knew what best practices to apply...it
evolved. As a result the original code base was littered with empty
try-catches and exceptions were getting swallowed, converted, etc. all over.
My reaction to that was to establish requirements to never allow an
exception to get silently dropped again. The result was double-logging - the
first time when it was initially thrown and the last time when it was
handled or left the module boundary - this way if it got dropped somewhere
in the middle we would be able to detect it.

The 2nd practical result was that I made it a requirement that all swallowed
exceptions must call a central method (called SwallowException) that by
default printed out the exception message to the Trace - one of the
arguments to the method is the reason why it was ok to swallow the
exception. As a result we found a lot of places in the code that needed work
to either remove the source of the exception or do some other rewrite. There
are circumstances when swallowing an exception makes sense, and most of
those fall into the category that we are discussing - when to throw versus
return some other value. IOW, wrapping the API into a call that does not
throw would accomplish the same thing, and I'll probably switch over to
using that mechanism - it makes sense.
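A hypothetical reconstruction of the SwallowException idea described above (the method name comes from the post; the body and the Demo usage are guesses):

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class ExceptionPolicy
{
    // Every deliberately ignored exception must pass through here,
    // together with the reason it is safe to ignore.
    public static void SwallowException(Exception ex, string reason)
    {
        // Trace rather than log: visible in diagnostics, silent for users.
        Trace.WriteLine("Swallowed " + ex.GetType().Name + ": " +
                        ex.Message + " (reason: " + reason + ")");
    }
}

public static class Demo
{
    public static void CleanupTempFile(string tempFile)
    {
        try { File.Delete(tempFile); }
        catch (IOException ex)
        {
            // The reason argument forces the author to justify the swallow.
            ExceptionPolicy.SwallowException(ex, "temp file cleanup is best-effort");
        }
    }
}
```

Routing every swallow through one method makes the grep for questionable swallows trivial, which is exactly how the problem spots were found.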


Notes: in both patterns, we catch "all exceptions". So, we are always
violating the rule that says that you should only catch "specific"
exceptions (this is one of FxCop's rule). This rule is stupid because it
is an encouragement to use exceptions as flow control in application
logic. If you don't use exception as flow control for special cases that
should be tested by your application logic, they should all bubble up the
same way (get wrapped with higher level message, and then logged).

Agreed. It is a silly rule.

There is nevertheless one case where we log and rethrow, this is when we
design an API that someone else will be using, and when we know that this
someone else is not very rigorous about logging exceptions. In this case,
we do our own logging in the entry points of our component and we rethrow
the exception (so that the someone else still gets an exception). But this
is just so that we don't loose the information if the client of our
component does not follow the rules (does not log every exception that he
gets from our component).

That sounds like the same sort of rule I use, which is to log when an
exception leaves the boundaries of the module.
 

Helge Jensen

This is not accurate. The caller can use try-finally constructs to ensure
that cleanup code is run.

This specific discussion concerns C++. The topic is the effects of
having throw declarations on functions, which C# doesn't have.

My point was that the effect of this "feature" (throw declarations in
C++) is to prevent cleanup using the C++ counterpart of finally.
Again, proper use of try-finally constructs should ensure proper cleanup for
the typical case.

I don't understand how that is related to the specific argument I made.

I state that nothing is gained by using throw-clauses, and that if you
just removed them the caller would be better off.
 

Bruno Jouhier [MVP]

David Levine said:
Well, I must admit that is a good argument. Does it require a lot of
developer discipline, or does it tend to be self-regulating?

Overall, I would say that it works quite well because the rules and the
patterns are simple. We sometimes find a bit of resistance with developers
who join the team and are used to, or have learnt, other methods (or
quite often no method at all), but they have to accept the rules, and after
they do, they see the benefits and they don't question them any more (so far).

Some developers take a bit of time to adjust because they are caught in the
"error code mindset" and they feel guilty about letting exceptions bubble up
without catching them (not testing an error code is very bad, but not
catching an exception locally is usually the best thing you can do). But
with a bit of guidance, they quickly get the point. One of the reasons why
it works quite smoothly is that they end up writing very few try/catch
constructs (almost none in the business logic, and the ones that they need
in the framework are already in place).

So, it is easier to "not write any try/catch" than to learn complex rules
about how you should write them. On the other hand, they are encouraged to
use "throw" rather freely (as soon as they detect a case that should never
happen). For example, most of our switch defaults contain something like
throw new MyException("bad case value: " + val).
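The switch/default convention described above can be sketched like this (Java syntax, matching the J# code base mentioned later in the thread; the exception type is a stand-in for the poster's MyException):

```java
// Sketch of the convention: a case that "should never happen"
// throws immediately instead of being silently ignored.
public class SwitchGuard {
    public static String describe(int val) {
        switch (val) {
            case 0: return "zero";
            case 1: return "one";
            default:
                // assumption violated: val must be 0 or 1 here
                throw new IllegalStateException("bad case value: " + val);
        }
    }
}
```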

What other conventions have you adopted to support this?

Not much actually. The main other one is the Try prefix for the version that
does not throw.

Also, most of our code was in J# and we don't have indexers by name. So, it
was natural to use Find/Lookup for these methods. In C#, the natural way to
write these lookup APIs is with indexers and then you need another
convention (for ex, indexer implements the "Find" version and the "Lookup"
version is provided via a separate method, or an additional argument to the
indexer).
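A minimal sketch of the Find/Lookup convention, assuming a simple map-backed registry (Java syntax; the class name, method names, and exception type are illustrative, not the poster's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Find/Lookup naming convention described above.
public class Registry {
    private final Map<String, String> entries = new HashMap<>();

    public void add(String key, String value) { entries.put(key, value); }

    // "Find" version: returns null when the key is absent;
    // the caller has decided to handle that case locally.
    public String find(String key) { return entries.get(key); }

    // "Lookup" version: absence is a violated assumption, so it throws
    // and the exception bubbles up to wherever recovery was designed.
    public String lookup(String key) {
        String value = entries.get(key);
        if (value == null)
            throw new IllegalArgumentException("no entry for key: " + key);
        return value;
    }
}
```

In C#, as the post suggests, the indexer would naturally play the "Find" role while a separate Lookup method provides the throwing version.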

I've seen other APIs that take a "throwOnError" argument but I am not fond
of it (yet). I prefer a single path through the code, not two. Have you ever
encountered problems related to this?

Here is how we typically use this dual parse method:

* when we are parsing inside an "authoring tool", we pass a non null
"errors" object and we have some logic after the parsing call to display the
errors contained in this errors object. The user is supposed to analyze
these errors and fix them.

* but in most of the other cases where we need to parse, we don't usually
have any means to "fix" the errors, because we are not "authoring" any more,
we are usually parsing expressions that someone else has written and the
actual user of our program does not have a clue about them (the program may
even be non-interactive). So, we pass null, and if the expression is
invalid, an exception will be thrown, logged and the program will recover
where we designed it to recover (not locally, higher in the call chain). If
you think of it, there is not much more you can do.

This example shows another thing: the non-Try version is the one that is
used 95% of the time, the Try version is only used in special cases where we
can handle the problem locally. This is another reason why this methodology
is easy to learn: you just use the non-Try version by default (and
exceptions will bubble up in this case), and when you need to handle the
problem "locally" (which is not often), you use the Try version (and in both
cases, you don't write any try/catch locally).
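The dual parse entry point described above might look roughly like this (a hedged sketch in Java syntax; the real parser is obviously far richer, and integer parsing merely stands in for expression parsing):

```java
import java.util.List;

// Sketch of the dual parse entry point: pass an errors list to collect
// problems (authoring mode), or pass null to have the first problem
// thrown so it bubbles up (the default, used ~95% of the time).
public class ExprParser {
    public static Integer parse(String expr, List<String> errors) {
        try {
            return Integer.valueOf(expr.trim());
        } catch (NumberFormatException e) {
            String msg = "invalid expression: " + expr;
            if (errors != null) { errors.add(msg); return null; } // authoring: collect
            throw new RuntimeException(msg, e);                   // default: bubble up
        }
    }
}
```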
What kind of functional analysis are you referring to? Perhaps you analyze
things a bit differently than what I am accustomed to.

This level of functional analysis is not always "written" because it is
sometimes too low level. But if it is not written, the coder should know
(for example from general design guidelines) whether he has to do something
locally or not. If he does not know, he should have an architect who can
advise him on this (and usually he will become autonomous after consulting
the architect a few times).
I tend to disagree but perhaps for practical reasons that probably don't
apply to most situations today. When we first transitioned from C/C++
Win32 to managed code no one really knew what best practices to apply...it
evolved. As a result the original code base was littered with empty
try-catches and exceptions were getting swallowed, converted, etc. all
over. My reaction to that was to establish requirements to never allow an
exception to get silently dropped again. The result was double-logging -
the first time when it was initially thrown and the last time when it was
handled or left the module boundary - this way if it got dropped somewhere
in the middle we would be able to detect it.

If you have placed catchall handlers at the bottom of all the stacks that
get into your code (event dispatcher + main loops of the threads that you
create + your main + API entry points that you expose to the outside +
remoting sinks), and if you stick to the patterns that I described, you have
the guarantee that all exceptions will be logged, and logged only once. The
advantage of logging at the bottom of the call chain (where you recover)
rather than in the first catch handler is that you get an exception that has
been wrapped, so you can log all the info: low level message at the origin
of the exception + all the higher level messages that have been added when
the exception bubbled up.
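A bottom-of-the-stack catch-all of this kind can be sketched as follows (Java syntax; logging to an in-memory list stands in for a real logger, and the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a "bottom of the stack" catch-all: the only place that
// logs, so each exception is logged exactly once, with the whole
// cause chain (wrappers added on the way up, down to the root cause).
public class Dispatcher {
    public static final List<String> log = new ArrayList<>();

    public static void run(Runnable task) {
        try {
            task.run();
        } catch (RuntimeException e) {
            for (Throwable t = e; t != null; t = t.getCause())
                log.add(t.getMessage());
            // ...then recover here: continue with the next event
            // or the next iteration of the thread's main loop.
        }
    }
}
```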
The 2nd practical result was that I made it a requirement that all
swallowed exceptions must call a central method (called SwallowException)
that by default printed out the exception message to the Trace - one of
the arguments to the method is the reason why it was ok to swallow the
exception. As a result we found a lot of places in the code that needed
work to either remove the source of the exception or do some other
rewrite. There are circumstances when swallowing an exception makes sense,
and most of those fall into the category that we are discussing - when to
throw versus return some other value. IOW, wrapping the API into a call
that does not throw would accomplish the same thing, and I'll probably
switch over to using that mechanism - it makes sense.
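The central SwallowException chokepoint described here might be sketched as follows (Java syntax; recording to a list stands in for the Trace output, and all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the central SwallowException method: every deliberately
// swallowed exception passes through one audited chokepoint, which
// records it so "silent" swallows remain visible.
public class ExceptionPolicy {
    public static final List<String> trace = new ArrayList<>();

    public static void swallowException(Exception e, String reason) {
        trace.add("swallowed " + e.getClass().getSimpleName()
                + " (" + e.getMessage() + ") because: " + reason);
    }
}
```

A caller would then write, e.g., `catch (IOException e) { ExceptionPolicy.swallowException(e, "cache warm-up is optional"); }` instead of leaving an empty catch block.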

Yes, you should give it a try.

Also, I am very assertive when I describe this scheme, and it may look like
I won't accept any deviation from it. But in practice, there are of course
some deviations. I just try to minimise them, and the best way is to try to
enforce the methodology as much as possible. So, if you try it, do the same,
give yourself some room for "local" try/catch (but you will see that you can
very often do without and that the result is better).

Bruno
 

Tom Shelton

Yes, that's what I am talking about. Why can't C# have that? I miss that.
It's painful, especially when using code tucked away in a library that I
didn't write. So now I have to turn around and catch all exceptions - and
that's not elegant at all.

I wouldn't mind that, really. What I don't want is Java's checked
exceptions...
 

Alvin Bruney [MVP - ASP.NET]

that Java part went over my head; I don't know Java, so I'm not getting much
of that checked/unchecked exception issue.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

Alvin Bruney [MVP - ASP.NET]

Well, I have been using this methodology on a fairly large project (6
You making this up Bruno? You guys hiring :)

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

Bruno Jouhier [MVP]

Alvin Bruney said:
You making this up Bruno? You guys hiring :)

I was actually underestimating. I just ran wc to count the lines and I get
1.7 MLines J# + 280 KLines C# (don't know if I should feel good or bad about
these figures). Not all was written by hand, fortunately, the boring stuff
was generated by a modeling tool.

You'd like to move to France?

Bruno.
 
