Exception Handling dissection


rawCoder

I have read that exception handling is expensive performance-wise (other
than the throw itself), but exactly how?
Please consider the following example ...

////////////////// Code Block 1 //////////////////////////////////
try
{
    for ( ... ) { /* code that might throw an exception */ }
}
catch ( ... ) { }
//////////////////////////////////////////////////////////////////////////

////////////////// Code Block 2 //////////////////////////////////
for ( ... )
{
    try { /* code that might throw an exception */ }
    catch ( ... ) { }
}
//////////////////////////////////////////////////////////////////////////
(Note: code in C# for compactness)

Is the declaration of the try/catch block one of the things that is
performance intensive, which would mean Code Block 2 is better?

Any comments, links are appreciated.

Thanx in advance
rawCoder
 

Dale Preston

Greetings,

I think that what you've read about is the relative difference between
allowing the exception to be thrown versus testing for potential exception
conditions in code. For instance, if you are going to convert the value
from a TextBox to an integer, you could use int.Parse and catch the
exception as in your example or, alternatively, you could test the contents
of the TextBox to make sure that it can be converted to an integer: is it
all numeric, and is it short enough to fall between int.MinValue and
int.MaxValue?

The latter option may take, just for argument's sake, 20 to 50 lines of
code, while the try/catch may only take half a dozen. Many new C#
developers look at that ratio and immediately start throwing try/catch
blocks at everything, either in the mistaken belief that less code must be
faster, or out of laziness because it takes less brain work and fewer
keystrokes.
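The two approaches Dale describes can be sketched side by side. This is a minimal illustration, not his actual code; ParseWithCatch and ParseWithValidation are made-up helper names, and the digits-only check is deliberately simplistic (it rejects signs and whitespace):

```csharp
using System;

class ParseDemo
{
    // Option 1: let int.Parse throw and catch the exception.
    public static bool ParseWithCatch(string text, out int value)
    {
        try
        {
            value = int.Parse(text);
            return true;
        }
        catch (Exception) // FormatException or OverflowException
        {
            value = 0;
            return false;
        }
    }

    // Option 2: test the contents first so int.Parse can no longer throw.
    public static bool ParseWithValidation(string text, out int value)
    {
        value = 0;
        if (text == null || text.Length == 0 || text.Length > 9)
            return false;           // 9 digits always fit in an int
        foreach (char c in text)
            if (!char.IsDigit(c))
                return false;       // not all numeric
        value = int.Parse(text);    // guaranteed not to throw now
        return true;
    }

    static void Main()
    {
        int v;
        Console.WriteLine(ParseWithCatch("42", out v));       // True
        Console.WriteLine(ParseWithValidation("42", out v));  // True
        Console.WriteLine(ParseWithValidation("abc", out v)); // False
    }
}
```

On good input both options behave identically; the difference only shows up on bad input, where option 1 pays the cost of a thrown exception.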

The fact is that having the exception thrown is so much more expensive
than testing the value ahead of time, in terms of CPU cycles and time,
that it is well worth the effort to code the validation first; it can be
anywhere from tens of times to well over 1,000 times more expensive.

So the answer to your question is, when you read discussions about the
expense of error handling versus error prevention, the discussion is
probably not as much about the expense of adding the try/catch handler as it
is about allowing the exception to be thrown in the first place.

If you can test before calling your "Code that might throw exception" and
respond to the condition yourself, then don't use the try/catch block at
all. If you still need to use the try/catch, put it outside the for loop.
Creating the try/catch once has to be better than creating it potentially
hundreds or thousands of times.

HTH

Dale Preston
MCAD, MCDBA, MCSE
 

Alvin Bruney [MVP - ASP.NET]

what exactly is an exceptional circumstance?

You should realize, by struggling to answer this question, that
"exceptional circumstance" doesn't define an exception at all, because an
exceptional circumstance is not necessarily an exception. For instance,
reading past the end of a file is certainly not an exceptional
circumstance, but it is considered an exception. On the other hand,
dividing an integer by zero may be exceptional but is not necessarily an
exception in certain math applications.

I prefer to define an exception as a violation of an implied assumption.
If you read from a file, the implicit assumption is that there is no more
data once the end-of-file marker is reached; therefore it is an exception
to read past the end of the file. Likewise for divide-by-zero conditions,
where the assumption is that the result must be a finite number.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

Bruce Wood

The rule that is bandied about in this newsgroup goes something like
this.

Use exceptions only for cases that you never expect to come up in the
normal operation of your code. Or, put another way, there should be no
common scenario that results in an exception. Exceptions are, as
SemiproCappa said... exceptional.

For example, if it happens all the time in your system that you look up
a customer number and don't find a customer record, then you should
code a test for that. If, on the other hand, when you look for a
customer record you are "always" supposed to find one, not finding one
is an exception.
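Bruce's customer example could be sketched like this. The repository and method names are invented for illustration; Dictionary.TryGetValue is the real .NET 2.0 API:

```csharp
using System;
using System.Collections.Generic;

class CustomerRepository
{
    static readonly Dictionary<int, string> customers =
        new Dictionary<int, string>();

    static CustomerRepository()
    {
        customers.Add(1, "Acme");
    }

    // "Not found" happens all the time here, so test for it: no exception.
    public static string FindCustomer(int number)
    {
        string name;
        return customers.TryGetValue(number, out name) ? name : null;
    }

    // Here the caller is "always" supposed to find a record, so a missing
    // one violates an assumption, and an exception is appropriate.
    public static string GetCustomer(int number)
    {
        string name;
        if (!customers.TryGetValue(number, out name))
            throw new InvalidOperationException(
                "Customer " + number + " should exist");
        return name;
    }

    static void Main()
    {
        Console.WriteLine(FindCustomer(1));          // Acme
        Console.WriteLine(FindCustomer(99) == null); // True
    }
}
```

The same lookup gets two entry points because the two call sites have different expectations about missing records.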

Personally, I don't always follow this rule. When parsing user input, I
_do_ use Int32.Parse() and catch the exception, just because the user
can't type fast enough to make a single exception a performance
problem. I'd never do that when reading thousands of rows of database
data, though: I test for invalid field contents "manually" in code
because thousands of exceptions would be a performance problem.
 

Jeff Louie

In the code sample posted, I would not worry about which is more
efficient, but that they represent two fundamentally different
algorithms. In Code Block 1, the for loop is broken on an exception. In
Code Block 2, the for loop may continue on exception.
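Jeff's point that the two blocks are different algorithms can be made concrete. A minimal sketch (SumOutside/SumInside are invented names; int.Parse is the real API and throws FormatException on non-numeric input):

```csharp
using System;
using System.Collections.Generic;

class LoopPlacement
{
    static int Process(string s)
    {
        return int.Parse(s); // throws FormatException on non-numeric input
    }

    // Block 1 shape: one throw abandons the rest of the loop.
    public static int SumOutside(IEnumerable<string> items)
    {
        int sum = 0;
        try
        {
            foreach (string s in items)
                sum += Process(s);
        }
        catch (FormatException)
        {
            // items after the bad one are never processed
        }
        return sum;
    }

    // Block 2 shape: only the offending item is skipped.
    public static int SumInside(IEnumerable<string> items)
    {
        int sum = 0;
        foreach (string s in items)
        {
            try
            {
                sum += Process(s);
            }
            catch (FormatException)
            {
                // skip this item, continue with the next
            }
        }
        return sum;
    }

    static void Main()
    {
        string[] items = { "1", "oops", "2" };
        Console.WriteLine(SumOutside(items)); // 1
        Console.WriteLine(SumInside(items));  // 3
    }
}
```

With the input { "1", "oops", "2" }, Block 1's shape never sees "2", while Block 2's shape skips only "oops". The choice is about semantics first, performance second.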

As for exceptions:

Herb Sutter concludes: "Distinguish between errors and non-errors. A
failure is an error if and only if it violates a function's ability to
meet its callees' preconditions, to establish its own postconditions,
or to establish an invariant it shares responsibility for maintaining.
Everything else is not an error...

Finally, prefer to use exceptions instead of error codes to report
errors. Use error codes only when exceptions cannot be used ... and for
conditions that are not errors."


Regards,
Jeff
 

Jon Skeet [C# MVP]

Dale Preston said:
I think that what you've read about is the relative difference between
allowing the exception to be thrown versus testing for potential exception
conditions in code. For instance, if you are going to convert the value
from a TextBox to an integer, you could use int.Parse and catch the
exception as in your example or, alternatively, you could test the contents
of the TextBox to make sure that it can be converted to an integer: is it
all numeric and is it short enough to be between int.MaxValue and
int.MinValue?

The latter option may take, just for arguments' sake, 20 to 50 lines of
code, and the try/catch may only take half a dozen lines of code. Many new
C# developers look at that ratio and immediately start throwing try/catch
blocks at everything, either in the mistaken belief that it must be faster -
it's less code, or out of laziness because it takes less brain work and
keystrokes.

Less brainwork is good. In general, the less code I have, the less of
it can be wrong. I often gladly take a performance hit where it's
unimportant in order to get cleaner, more easily readable code. I'd
always rather have a program which does its job properly in 10 seconds
than one which produces the wrong answer in half the time.
The fact is that having the exception thrown is so much more expensive, in
terms of CPU cycles and time, than testing the value ahead of time, it could
be from 10s of times more expensive to well over 1000 times more expensive,
that it is well worth the effort to code the validation first.

For converting the value in a TextBox? No it's not.

Throwing an exception is slower than not throwing an exception, but
exceptions aren't nearly as expensive as some people seem to think.

My laptop can throw an exception a hundred thousand times in a second.
Do you think the potential delay of 0.01 *milliseconds* is going to be
even the slightest bit noticeable to a user?

Exceptions are only likely to cause noticeable performance problems
when they're being thrown a *lot* - such as in a very short loop
executed a large number of times.
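A rough way to check Jon's figure on your own machine; this is a crude sketch, not a rigorous benchmark (Stopwatch is the real .NET 2.0 API, and absolute numbers vary wildly by hardware, runtime, and whether a debugger is attached):

```csharp
using System;
using System.Diagnostics;

class ExceptionCost
{
    // Throws and catches n exceptions, returning how many were caught.
    public static int CatchMany(int n)
    {
        int caught = 0;
        for (int i = 0; i < n; i++)
        {
            try
            {
                throw new InvalidOperationException();
            }
            catch (InvalidOperationException)
            {
                caught++;
            }
        }
        return caught;
    }

    static void Main()
    {
        const int N = 100000;
        Stopwatch sw = Stopwatch.StartNew();
        int caught = CatchMany(N);
        sw.Stop();
        // The per-exception cost comes out in microseconds, not milliseconds.
        Console.WriteLine("{0} exceptions in {1} ms", caught, sw.ElapsedMilliseconds);
    }
}
```

Dividing the elapsed time by N gives the per-exception cost, which is what matters when deciding whether a particular code path can afford to throw.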

There are good reasons for not throwing exceptions when they're not
suitable, in terms of readability and code flow, but performance rarely
comes into it in my experience.
 

Cor Ligthert

RawCoder,

What is the difference? Somewhere a piece of code is set up to catch an
event, and to handle it when an event of that type is raised. The
structure of the language dictates that it has to be in that place.

I am almost sure that Jay will give an answer in this thread as well; in
case he misses it, here is a message from him about general exceptions.
(There is more; if he does not see this, you can also search Google
newsgroups for "Jay general exceptions".)

http://groups-beta.google.com/group/microsoft.public.dotnet.languages.vb/msg/27fe1015e7ef70e0

I hope this helps,

Cor
 

Jon Skeet [C# MVP]

Daniel Jin said:
this is really one area that could use some clear guidelines. I've seen
literature make statements like "exceptions are a performance hit" and "they
should be reserved for exceptional situations". but nothing really outlines
how they should really be used.

I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.
for example. in a n-layered architecture, somewhere in the BL, certain
validations will be performed based on various business rules. and
operations should fail when prerequisites are not met. are these exceptional
situations since we are clearly expecting these conditions to occur? and how
do we indicate these to the presentation? return values (as used in many
samples) just seem to be such an antiquated method. in the end I went with
the exception route, contrary to what many of the guidelines seem to
suggest.

Indeed, I probably err more towards exceptions than away from them too.
They're so much easier than checking return values everywhere to abort
an operation simply :)

For me, it comes down to what makes it clearest to everyone what's
going on. There are times when exceptions clearly *aren't* appropriate
(such as terminating the iteration of a collection by keeping going
until an exception is thrown, when it's perfectly easy to avoid that in
the first place) but there are lots of times when they're the natural
solution but people avoid them because they've been told that
exceptions are hugely expensive.

That said, where appropriate it's good to have a validation method
which can validate parameters without attempting to actually perform an
operation - and the operation itself can throw the exception if it's
still given invalid parameters etc.
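Jon's "validation method plus defended operation" pattern might look like this; IsValidQuantity and PlaceOrder are hypothetical names, and the rules are invented for illustration:

```csharp
using System;

class OrderApi
{
    // Callers can pre-validate without attempting the operation...
    public static bool IsValidQuantity(int quantity)
    {
        return quantity > 0 && quantity <= 1000;
    }

    // ...and the operation still throws if handed an invalid value anyway.
    public static int PlaceOrder(int quantity)
    {
        if (!IsValidQuantity(quantity))
            throw new ArgumentOutOfRangeException("quantity");
        return quantity * 10; // stand-in for the real work
    }

    static void Main()
    {
        if (IsValidQuantity(3))
            Console.WriteLine(PlaceOrder(3)); // 30
    }
}
```

Well-behaved callers check first and never trigger the exception; the throw remains as a backstop for callers that violate the contract.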
 

Dale Preston

Well, luckily, the TryParse method that is in the Double class (ok, Double
struct for the nit-picky) will be included in all of the integral types in
v2.0 of the .NET Framework. That should reduce some coding-by-exception
practices.
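The 2.0 pattern Dale mentions reports failure through the return value instead of an exception:

```csharp
using System;

class TryParseDemo
{
    static void Main()
    {
        int value;
        // No exception on bad input: TryParse signals failure via its
        // bool return value and leaves the result in the out parameter.
        bool ok = int.TryParse("123", out value);
        Console.WriteLine(ok && value == 123);             // True
        Console.WriteLine(int.TryParse("abc", out value)); // False
    }
}
```

This gives the exception-free behavior of hand-rolled validation without having to write the validation yourself.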

But I agree with you that there is no clear cut or single answer.

No matter what, good code is always better than bad code, and testing for
likely errors is always better code than just throwing errors, from a
coding perspective. From a business perspective, we all know that not all
projects have the budget and schedule to allow us to do everything by best
practices. Sometimes we have to make compromises, following the highest
priority best practices and letting some go because of budget and time
constraints.

So, in that regard, the best code is the code that gets the client or user
the functionality they require, with risks, bugs, and performance all at
levels they can live with, and on time and on budget - even if that means
that we code by exception at times.

Dale
 

Alvin Bruney [MVP - ASP.NET]

I suspect that's because it varies so much, to be honest. It's
difficult to get hard and fast rules which apply in all situations.
That's not true at all. The hard and fast rule is: throw an exception when
an assumption is violated.
That's it. That's all you need to know and do.

C++ used to make it infinitely easier to explicitly publish implicit
assumptions through a method's signature. I'm not sure why C# did not adopt
this approach; it would make things a lot easier. The absence of explicitly
published assumptions is one reason for confusion. Notice how that
confusion is absent in well written C++.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

Bruno Jouhier [MVP]

Alvin Bruney said:
That's not true at all. The hard and fast rule is throw an exception when
an assumption is violated.
That's it. That's all you need to know and do.

C++ used to make it infinitely easier to explicitly publish implicit
assumptions thru a methods signature. I'm not sure why c# did not adopt
this approach - it would make things a lot easier. The absence of
explicitly published assumptions is one reason for confusion. Notice how
that confusion is absent in well written C++.

Java has this "feature" too; it is called "checked exceptions". When I
started to use Java, in '96, I thought that it was a good idea because it
seemed to enforce stronger compile-time verification, but after struggling
a lot with them, I came to the conclusion that checked exceptions are a
"bad" good idea and that they do a lot more harm than good, for many
reasons, the main one being that they encourage the programmer to catch
exceptions locally instead of letting them bubble up to a generic catch
handler. The end result is code that is polluted with catch clauses all
over the place, and usually very poor exception handling in the end.

So, "checked exceptions" are a bad thing, and actually, if you analyze the
Java libraries, you will see that all the early ones (the JDK of course)
made extensive use of them, and that the more recent ones tend to reject
them. Some Java gurus advocate against them as well (see
http://www.mindview.net/Etc/Discussions/CheckedExceptions by Bruce Eckel,
the author of "Thinking in Java").

So, the C# designers made the right choice here.

Bruno
 

Helge Jensen

Alvin said:
C++ used to make it infinitely easier to explicitly publish implicit
assumptions thru a methods signature. I'm not sure why c# did not adopt this
approach - it would make things a lot easier. The absence of explicitly
published assumptions is one reason for confusion. Notice how that confusion
is absent in well written C++.

C++ (thankfully :) doesn't have compile-time checked exceptions like
Java, but runtime-checked ones. C++ throw declarations have semantics
which severely limit their usefulness.

A function invocation which throws something not declared in the throw
clause doesn't get to pass that exception up the stack; instead it
invokes std::unexpected. std::unexpected is not allowed to return, but
must abort the program, throw a "bad_exception", or throw an exception
in the throw clause. Note that std::unexpected is a global handler, so
you really can't expect anything other than "bad_exception" or the
default: std::terminate().

This means that "throw (InvalidArgument)" doesn't publish any assumptions;
it restricts which errors the caller is *allowed* to react on.

Especially anywhere using decoupling, type-limiting on exceptions just
really isn't that useful.

It's nice to have destructors that don't throw, but writing "throw()"
really doesn't help any more than /* doesn't throw */.
 

Jon Skeet [C# MVP]

That's not true at all. The hard and fast rule is throw an exception when an
assumption is violated.
That's it. That's all you need to know and do.

That's just a restatement of the problem in terms of assumptions rather
than exceptions. (I'd use the word "contract" rather than "assumption"
though - if I *assumed* a parameter would be valid, I wouldn't then
check its validity and throw an exception before doing any work.)

This doesn't determine, as far as I can see, when a business constraint
should trigger an exception and when it should trigger some other kind
of information passing (whether that's return value or whatever else.)
C++ used to make it infinitely easier to explicitly publish implicit
assumptions thru a methods signature. I'm not sure why c# did not adopt this
approach - it would make things a lot easier. The absence of explicitly
published assumptions is one reason for confusion. Notice how that confusion
is absent in well written C++.

Without knowing C++ well, I don't know exactly what you mean. Could you
elaborate?
 

David Levine

Jon Skeet said:
That's just a restatement of the problem in terms of assumptions rather
than exceptions. (I'd use the word "contract" rather than "assumption"
though - if I *assumed* a parameter would be valid, I wouldn't then
check its validity and throw an exception before doing any work.)
This doesn't determine, as far as I can see, when a business constraint
should trigger an exception and when it should trigger some other kind
of information passing (whether that's return value or whatever else.)

I agree. I think the question of when to throw an exception versus returning
a sentinel value is one of the least understood and most error prone aspects
of .NET/C#.

I find that in practical code it often gets messy, because many developers
wind up dealing with both exceptions and error codes, rather than one or
the other, and the result is that it actually increases complexity rather
than reducing it.

Many of the decisions revolve around issues of performance. For example,
if the business logic is in a server that is dealing with hundreds or
thousands of requests/transactions in a batch format, it might make more
sense to identify errors in a status structure, one per transaction,
rather than throw an exception for every error across a machine boundary.

But if performance is the only concern, then eventually faster processors
will make some of those concerns less important. I find the issue of the
number of code paths more troubling. If the code deals both with
exceptions and with other forms of error codes, then the result is less
reliable code, not more. Essentially, if I see code using exceptions to
determine flow control, then there's a problem with the design. But I also
find it troubling to examine code that has no exception handling at all;
relying on an unhandled-exception (UE) handler is poor design.
 

Jon Skeet [C# MVP]

David Levine said:
I agree. I think the question of when to throw an exception versus returning
a sentinel value is one of the least understood and most error prone aspects
of .NET/C#.
Yup.

I find that in practical code it often gets messy because many developers
wind up dealing with both exceptions and error codes, rather than one or
the other, and the result is that it actually increases complexity rather
than reducing it.

Absolutely. Dealing with both is a nightmare.
Many of the decisions revolve around issues of performance. For example, if
the business logic is in a server that is dealing with hundreds or thousands
of requests/transactions in a batch format, it might make more sense to
identify errors in a status structure, one per transaction, rather than
throw an exception for every error across a machine boundary.

Yup - there are certainly times when performance *is* important and
exceptions would prove prohibitive. Unfortunately, the "exceptions are
slow" mantra has gone *way* over the top, to the extent where people
don't really ask themselves whether the performance hit is actually a
problem in their situation.
But if performance is the only concern then eventually faster processors
will make some of those concerns less important. I find the issue of the
number of code paths to be more troubling. If the code deals both with
exceptions and still other forms of error codes then the result is less
reliable code, not more. Essentially, if I see code using exceptions to
determine flow control then there's a problem with the design. But I also
find it troubling to examine code that has no exception handling at all -
relying on a UE handler is poor design.

What exactly do you mean by "using exceptions to determine flow
control" though? To me, using exceptions to quickly and reliably (in
the absence of anything deliberately catching an exception too early)
abort a potentially "deep" operation *is* using them to determine
flow control, but in a good way. What kind of thing are you thinking of
as being definitely bad?
 

Alvin Bruney [MVP - ASP.NET]

I catch your drift about bad "good ideas", but you can't fault the
language for programmer misuse. That's bound to happen anyway. It
doesn't/shouldn't detract from the value, though, IMO.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

Alvin Bruney [MVP - ASP.NET]

This means that "throw InvalidArgument" doesn't publish any assumptions,
it restricts which errors the caller is *allowed* to react on.

well that's the whole point. It's poor design to catch any and everything
in the first place (only a few situations warrant that kind of practice,
by the way). But in general, handling the exception indicates the caller's
intent to take action on the issue. All other exceptions should be left to
bubble up. So yes, unexpected should be left to bring down the house,
quite rightly.
Especially anywhere using decoupling, type-limiting on exceptions just
really isn't that useful.
I disagree strongly. You will need to justify your position.

--
Regards,
Alvin Bruney - ASP.NET MVP

[Shameless Author Plug]
The Microsoft Office Web Components Black Book with .NET
Now available @ www.lulu.com/owc
 

David Levine

What exactly do you mean by "using exceptions to determine flow
control" though? To me, using exceptions to quickly and reliably (in
the absence of anything deliberately catching an exception too early)
aborting a potentially "deep" operation *is* using them to determine
flow control, but in a good way. What kind of thing are you thinking of
as being definitely bad?
The prototypical example is using int.Parse rather than int.TryParse. Back
before I knew there was a TryParse, I used to write code like this...

try
{
    int.Parse(someString);
    // execute code based on the fact that the input was an integer
}
catch (Exception)
{
    // do something different in this path and continue executing
}

It really gets bad when I see code that catches an exception and returns
false (i.e. converts an exception to a return value). Quite often the
routine that sees the false return value turns around and throws an
exception (converts it back the other way... ouch!).
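The round-trip David describes looks something like this; Save, TrySave, and SaveOrDie are invented names for illustration, with a fake Save standing in for real I/O:

```csharp
using System;
using System.IO;

class RoundTrip
{
    static void Save(string data)
    {
        if (data == null)
            throw new IOException("nothing to save");
        // pretend to write the data somewhere
    }

    // Step 1: the exception is flattened into a bool, losing the detail...
    public static bool TrySave(string data)
    {
        try
        {
            Save(data);
            return true;
        }
        catch (IOException)
        {
            return false; // which file? which error? gone.
        }
    }

    // Step 2: ...and the bool is inflated back into a poorer exception.
    public static void SaveOrDie(string data)
    {
        if (!TrySave(data))
            throw new IOException("save failed"); // original cause is lost
    }

    static void Main()
    {
        Console.WriteLine(TrySave("data")); // True
        Console.WriteLine(TrySave(null));   // False
    }
}
```

The caller of SaveOrDie ends up paying for two exceptions and receiving less information than the single original one carried.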

These are extreme examples, but I have seen them.

Bottom line: the distinction between a failure that violates the
programmatic assumptions of a method, which should result in an exception,
and a normal failure (record not found), which should use a return value,
is very thin and blurry. Coming up with practical guidelines to cover all
cases is IMO very difficult; not quite pointless, but almost.

I think it would also be useful to have the language or environment itself
aid us in laying out exception handling policies and implementations. In
other words, which modules are supposed to catch which exceptions that can
be generated by other modules? It isn't possible to look at a method and
determine which exceptions can escape it, and which module, method, or
whatever is supposed to handle them. We currently get no help whatsoever
from the environment here. At least with sentinel values we knew that it
was always the caller's responsibility to deal with the result, even if
that meant explicitly passing it upstream. Now we don't even know that
much. I'm not saying I prefer return values (I don't), but the current
situation really puts a larger burden on the designer than the previous
system did.
 

Tom Shelton

Without knowing C++ well, I don't know exactly what you mean. Could you
elaborate?

I think he is referring to the C++ throw specification... It's similar
to Java's throws, but it isn't as strict :) It might look something
like:

// no throw
void aFunc() throw ();

// the function can only throw bad_alloc
void anotherFunc() throw (bad_alloc);

There is special behavior if you throw something not in the exception
list, but I don't remember the exact details... (boy, it's been a while
:)
 

Bruno Jouhier [MVP]

David Levine said:
I agree. I think the question of when to throw an exception versus
returning a sentinel value is one of the least understood and most error
prone aspects of .NET/C#.

This is the central issue, and I think that the right way to go is to have
"rich" APIs, to make a clear distinction between "special cases" and
"exceptions" and to deal with special cases through return codes or sentinel
values rather than through EH mechanisms.

The basic idea is to have pairs of entry points like:

int Parse(string);              // throws an exception if the string does not represent an int
bool TryParse(string, out int); // returns false if the string does not represent an int

FileStream OpenFile(string name);    // never returns null; throws an exception if the file does not exist
FileStream TryOpenFile(string name); // returns null if the file does not exist (but still
                                     // throws an exception if the file exists but cannot be opened)
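The OpenFile/TryOpenFile pair Bruno proposes could be sketched as a thin wrapper over the real BCL calls (File.Exists and File.OpenRead exist; the pair itself is his hypothetical API, not part of the framework):

```csharp
using System;
using System.IO;

class RichFileApi
{
    // "Try" form: a missing file is an expected special case, signalled
    // by null; the caller tests the result instead of catching.
    public static FileStream TryOpenFile(string name)
    {
        if (!File.Exists(name))
            return null;
        return File.OpenRead(name); // other failures (locked file, ...) still throw
    }

    // Non-Try form: the file is supposed to exist "by design", so a
    // FileNotFoundException is simply left to bubble up.
    public static FileStream OpenFile(string name)
    {
        return File.OpenRead(name);
    }

    static void Main()
    {
        Console.WriteLine(TryOpenFile("no-such-file.tmp") == null); // True
    }
}
```

Note that TryOpenFile deliberately turns only the *expected* failure (file absent) into a return value; genuinely exceptional failures still travel through the exception channel.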

Then, depending on the context, you call one or the other:

1) If you are in a situation where the "exception" must be dealt with
"locally", i.e. where you would put a try/catch around the call to catch a
FileNotFoundException, then you use the "Try" form and you don't catch any
exception locally.

2) Otherwise, you use the non-Try form and you let the exception bubble up.

If you are in case 1, it means that the exception you would be catching is
not really an exception; it is a "special case" that you are actually
expecting and upon which you need to react in a special way. For example,
if you are parsing user input, you know in advance that the input may not
be valid, and you know that you have to handle this "special case". So you
should use TryParse. Also, if you are trying to open a file by looking up
a list of search paths (if not found in path1, try path2, ...), then the
fact that the file does not exist is not really an exception; it is
something that is part of your design, and you should use TryOpenFile.

If you are in case 2, it means that the exception is really an exception,
i.e. something caused by an "abnormal" context of execution, and you do not
have any "local" action to take to deal with it. For example, if you are
parsing a stream that has been formatted by another program according to a
well defined protocol that ensures that ints are correctly formatted, you
should use Parse rather than TryParse. Also, if you are trying to open a
vital file that has been setup by your installation program, or if you are
trying to open a file that another method has created just before, you
should use OpenFile rather than TryOpenFile.

Of course, this approach forces you to duplicate some entry points in your
APIs, but it has many advantages:

* You reduce the amount of EH code. You get rid of all the local try/catch,
and you only need a few try/catchall in "strategic" places of your
application, where you are going to log the error, alert the user, and
continue. With this scheme, exceptions are handled in a uniform way (you
don't need to discriminate amongst exception types) and the EH code becomes
very simple (only 2 basic patterns for try/catch) and very robust.

* You clearly separate the "application" logic from the "exception handling"
logic. All the "special cases" that your application must deal with are
handled via "normal" constructs (return values, if/then/else), and all the
"exceptional" cases are handled in a uniform way and go through the
try/catchall constructs that you have put in strategic places. You can
review the application logic without having to analyze complex try/catch
constructs spread throughout the code. You can also more easily review the
EH and verify that all exceptions will be properly logged and that the user
will know about them if he needs to, without having to go into the details
of the application logic.

* It enforces clear "contracts" on your methods: OpenFile and TryOpenFile do
basically the same thing but they have different contracts and choosing one
or the other "means" something: if you read a piece of code and see a call
to TryOpenFile, you know that there is no guarantee "by design" that the
file will be there; on the other hand, if you see a call to OpenFile, you
know that the file should be there "by design" (it was created by another
method before, or it is a vital file created by the installation program,
etc.). Of course, the fact that the file should be there "by design" does not
mean that it will always be there, but from your standpoint, the fact that
it would not be there is just as exceptional as it being corrupt or the disk
having bad sectors, and the best thing your program can do in this case is
log the error with as much information as possible and tell the user that
something abnormal happened.

* You will get optimal performance, because the exception channel will only
be used in exceptional situations (and the cost of logging the exception
will probably outweigh the cost of catching it anyway).

So, when I see "local" EH constructs that catch "specific" exceptions, my
first reaction is: API Problem!
In some cases, the caller is at fault and he should use another API or
perform some additional tests before the call.
In other cases, the callee is at fault because he provided an incomplete
API that does not provide a way to perform this specific test without
catching an exception. In this second case, we have to review the API and
enhance it (unless it is a third party API that we don't control, in which
case we usually introduce a helper method that does the dirty try/catch
work and exposes the "richer" API).

Moral: don't "program" with exceptions (by catching specific exceptions),
but design good APIs that will let you program without them (by letting the
real exceptions bubble up to a generic catch handler).

Bruno.
 
