c# 2004 (Whidbey)

  • Thread starter Arman Oganesian

ncaHammer

Alvin Bruney said:
write unit tests before you start to write code?
write unit tests before you start to debug broken code?

yes, i always do

especially with GUI stuff - i have a test suite which does vectorization and
checks for proper painting, so i can catch bugs such as improper
painting of listview items etc (another one does face recognition too)

now i am writing a database test suite. i estimate that a single test run
with real data should finish about 10 years from now, but that's ok, i admit
it, i am a test suite addict

Regards

ps. i have a bug in my app with the cursor shapes (once the mouse enters
some graph object's bounds the cursor does not change to a hand-cursor but to
a wait-cursor) - have you got any idea how i can write a test suite for it?
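
the best i have come up with so far is something like this - untested, just a
sketch, and GraphControl, GetObjectBounds and SimulateMouseMove are made-up
names standing in for whatever the real control exposes:

using System.Drawing;
using System.Windows.Forms;
using NUnit.Framework;

[TestFixture]
public class CursorShapeTests
{
    [Test]
    public void CursorBecomesHandOverGraphObject()
    {
        // GraphControl, GetObjectBounds and SimulateMouseMove are invented
        // names for whatever the real graph control exposes.
        GraphControl graph = new GraphControl();
        Rectangle bounds = graph.GetObjectBounds(0);
        Point inside = new Point(bounds.X + 1, bounds.Y + 1);

        // drive the mouse over the object and check the cursor shape
        graph.SimulateMouseMove(inside);

        Assert.AreEqual(Cursors.Hand, graph.Cursor,
                        "cursor should be a hand over a graph object");
    }
}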
 

Niall

Alvin Bruney said:
One final question. And this would be the final one.
You
actually
seriously
no jokingly
the-boss-is-not-over-your-shoulder-so-you-can-speak-freely

write unit tests before you start to write code?
write unit tests before you start to debug broken code?

I can't say that I do it all the time. I do it mostly, but not always. In
the cases where I don't write the test first, it's usually because what I'm
doing is like a "spike" as seen in XP. If you don't know what that means,
it's just when you have an idea about something you want to do, and you're
having a quick test of the code to see what kind of shape the implementation
would take.

So if my manager comes to me and says that something should work
differently, and it's not obvious how to implement that, I'll have a think,
come up with an idea, and try it out. If it looks like that idea is going to
be the one I want to use, then I write the test for it, then go back and
clean up/finish off the implementation. This kind of approach may not be
appealing to those who have a passionate dislike of Edit and Continue :p

The one area I find it difficult, and not overly beneficial, to test is
highly GUI-dependent work. It can be a real pain to actually get the
information you need about your GUI object to be able to tell if it's
working as intended or not. So what we aim for is forms that have as little
code in them as possible, quite often no extra code on top of the
designer-generated region. Then you can test the object that's underneath the form (a
business object/collection of them) as much as you like. If I do have to
write some code in the form that's non trivial, and cannot be moved
elsewhere, I do my best to set up a test for it. This still leaves a slight
gap in the unit testing, but the thinner you make your forms, the thinner
the gap.
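
To give a rough idea of what I mean, here's a made-up example - the form's
event handler would do nothing more than call Total and display the result,
so the object underneath is what gets tested (all names invented, just a
sketch):

using NUnit.Framework;

// All the real logic lives in a plain object the form just delegates to.
public class OrderTotals
{
    public decimal Total(decimal[] lineAmounts)
    {
        decimal total = 0;
        foreach (decimal amount in lineAmounts)
        {
            total += amount;
        }
        return total;
    }
}

[TestFixture]
public class OrderTotalsTests
{
    [Test]
    public void TotalSumsAllLines()
    {
        OrderTotals totals = new OrderTotals();
        Assert.AreEqual(6.0m, totals.Total(new decimal[] { 1.0m, 2.0m, 3.0m }));
    }
}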
Alvin Bruney said:
if this is true you need a medal. plain and simple.
waiting for that serious reply.

PS. If it is any consolation Skeet doesn't do unit tests before he fires up
the debugger either. He says so on his website ;-)

I think unit tests are at their most beneficial when written first. However,
sometimes that's not how the mind (well, mine at least) works. Especially if
I'm making modifications to someone else's code, sometimes I want to have a
short mess around first to get a feel for how I want to do the work. But
before I get anything substantial done, I try to get a unit test down.

So my approach isn't perfect, but I doubt anyone's is :p

Niall
 

Jon Skeet

Alvin Bruney said:
One final question. And this would be the final one.
You
actually
seriously
no jokingly
the-boss-is-not-over-your-shoulder-so-you-can-speak-freely

write unit tests before you start to write code?
write unit tests before you start to debug broken code?

if this is true you need a medal. plain and simple.
waiting for that serious reply.

PS. If it is any consolation Skeet doesn't do unit tests before he fires up
the debugger either. He says so on his website ;-)

I rarely fire up the debugger at all. Usually when code is broken, I
fix it by adding a couple of trace elements or just by inspection.

I freely acknowledge that I don't use unit tests nearly as much as I
should. I wish I did. But then, do you really, seriously, no boss
involved, step through absolutely *every* line of code? Every single
error path, no matter how unlikely?
 

Bob Jenkins

Niall said:
In this case, you need unit testing as you say. If you didn't have unit
testing, you'd have no idea whether the rest of the system still works after
you change your code. The thing is... if you're prepared to use unit testing
for ensuring the safety of the whole of the system when central code is
changed, why are you not prepared to use it to ensure the safety of the code
that you're actually changing? Ideally, if there's a bug, you should write a
test which fails because of the bug, and then fix it.

Just add to the unit test, or add to the unit test and walk through
the new code as the relevant part of the test is run as well?

I recall once when I just added to the unit test. It showed what I'd
done gave the correct result. Another developer insisted on walking
through the code as the test was run, which I thought was pointless,
since it produced the correct result. But walking through the code
revealed that there was a graph walk that was taking exponential time
because of bugs in my conditions for which nodes to recurse on.

Some programs have a lot going on that doesn't show up in the results,
but is visible when you step through the code.
 

Alvin Bruney

good point
Bob Jenkins said:
"Niall" <[email protected]> wrote in message

Just add to the unit test, or add to the unit test and walk through
the new code as the relevant part of the test is run as well?

I recall once when I just added to the unit test. It showed what I'd
done gave the correct result. Another developer insisted on walking
through the code as the test was run, which I thought was pointless,
since it produced the correct result. But walking through the code
revealed that there was a graph walk that was taking exponential time
because of bugs in my conditions for which nodes to recurse on.

Some programs have a lot going on that doesn't show up in the results,
but is visible when you step through the code.
 

Niall

Add to the unit test and run it to check that the result shows that the bug
has been located. In the case that I'm fixing a bug, I take the situation
that causes the bug and put it in a test, then run the test to make sure
that it fails, and hence is correctly trapping the bug. Then I step through
the function and work out why it's not working as intended. At this time,
you may become aware of more conditions in which this bug shows itself, and
hence you add to your unit test. Then you make your fix, and run the unit
test. If you're especially paranoid, you can step through the code again at
this point. But if you have written your unit test properly, there is no
gain to be made by stepping through the code. In fact, if you can pick up
bugs in stepping through your code that your test cannot pick up, then you
have not written a complete test.
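
As a made-up example of what I mean by trapping the bug first (Invoice,
AddLine and Discount are invented names, so treat it purely as a sketch):

using NUnit.Framework;

[TestFixture]
public class InvoiceRegressionTests
{
    // Written while the bug is still in the code, so it must fail first -
    // that proves it really traps the bug. After the fix it passes and
    // stays in the suite as a regression test.
    [Test]
    public void DiscountIsNeverNegative()
    {
        Invoice invoice = new Invoice();   // Invoice/AddLine/Discount are invented names
        invoice.AddLine(-5.00m);           // the input that showed up the bug
        Assert.AreEqual(0.00m, invoice.Discount);
    }
}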

If it's performance you're concerned about, do you walk all your code just
to see how it performs? You could walk through each line in a function, make
sure it's not looping one too many times or some such thing. But you will
have no idea whether your function is being called too many times by other
parts of the system unless you do a full code walkthrough.

When I code, I keep a general mind towards the performance of my code. But I
don't spend too much time trying to make sure it runs as fast as possible,
because it can lead to less clear code, and I can be wasting time making
code that runs in a negligible time run in even less negligible time. If I
notice something that is too slow, I will profile it. Every now and then, I
do general profiles on the system (both performance and memory usage) to see
what kind of a state we're in. If anything stands out, I have a look at it,
profile more specifically, and go from there.

I know this goes against the older style of coding of not "Can you get blood
from a stone?" but "How much more blood can you get from this stone if you
squeeze a little harder?", and no doubt a lot of people disagree with this
mindset, but... If the code you have written fully and correctly meets
business purposes (and you can prove this through passing unit tests), and
there is no noticeable lack of performance, why do you need to step through
your code? Stepping through code for performance checking can be insightful,
but if you don't know that it actually causes a performance problem for the
end user, it is often random, aimless and virtually fruitless.

Niall
 

Bob Jenkins

Niall said:
I know this goes against the older style of coding of not "Can you get blood
from a stone?" but "How much more blood can you get from this stone if you
squeeze a little harder?", and no doubt a lot of people disagree with this
mindset, but... If the code you have written fully and correctly meets
business purposes (and you can prove this through passing unit tests), and
there is no noticeable lack of performance, why do you need to step through
your code? Stepping through code for performance checking can be insightful,
but if you don't know that it actually causes a performance problem for the
end user, it is often random, aimless and virtually fruitless.

Good point -- any testing method has to be monitored to see how useful
it is in the current situation.

In the situation I deal with (debugging other people's code in a large
old product), walking through the code finds bugs quite easily, easier
than I can find them by probing with testcases or just reading the
code. For me, walking through code is sort of a directed code review.
It catches things not so much because the code is going down the
wrong path, but because it focuses my attention on the active pieces
of the code and lets me verify variables are actually set. Expensive
things in inner loops, uninitialized variables, conditions that don't
make sense but happen to come out right for the current case,
realizing the current routine shouldn't be hit at all, noticing flags
aren't set the way I thought they were -- all that I catch by walking
through the code.

80% of the time I walk through my code after it's fixed my testcase,
and about 95% of the time I find something I did wrong, or was wrong
before my changes. I get nervous when I find everything went right
the first try.
 

Niall

I know what you mean. I'm not saying that the debugger is useless or a sin,
I just don't agree that it should be entrenched as "required practice" in
development, which is not what you're saying anyway (I think). Stepping
through the code can be helpful to see if you've designed your code badly,
but I don't think it's a requirement. If unit tests show that the program
does what it's supposed to, and its performance is acceptable, then you can
release it to the client without ever needing to step through the code.

But I agree that you can use the debugger to catch situations where code
could be improved. I don't have an attitude that "as long as it's working
and fast, then the code can be as bad/messy/whatever as you want", but at
the end of the day, such code/design tuning doesn't really bother the
client.

Incidentally, one thing I like to do every now and then to check up on
performance, is to run a profiler on the SQL Server to see all the queries
come in. You can quickly get a good idea for how much SQL you'll be
generating just by opening up some forms/doing some things. If you see the
same select query repeated in quick succession, you know you could probably
improve that. If I see that such a repeated query has a high duration, then
I go have a look.

Niall
 

Mark Pearce

Hi Niall,
>> If unit tests show that the program does what it's supposed to, and its
performance is acceptable, then you can release it to the client without
ever needing to step through the code. <<

There are so many things wrong with this attitude that I don't know where to
start debugging your thought processes!

Unit tests can only test your current thinking about what your code should
be doing. Most developers can virtually guarantee that their first ideas about
what their code should be doing are faulty. One of the major tools for
correcting your ideas is to walk through your code in a debugger to watch
the code flow and data flow. Relying on unit tests is just intellectual
laziness, a prop for faulty thinking.

In addition, code coverage tools used to verify your unit tests are not very
useful. They can't identify what the code should be doing, or what it isn't
doing. But developers tend to use these tools to say "But there can't be a
bug: my unit tests are infallible and my code coverage tool verified that
all paths have been executed".

In my company, developers are required to walk through all of their code. If
I find a developer has missed a bug that would have been caught by walking
through the code, that developer is in trouble. If the same developer
continues to miss bugs through not walking through code, she/he is out of
the door. So far, this has only happened once - worked wonderfully for
"encourager les autres" :)

Regards,

Mark
--
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128


 

Mark Pearce

Hi Jon,

I quite agree that there is no silver bullet. Unit tests on their own,
whether manual or automated, won't find all of your bugs. Nor will stepping
through your code, or performing code inspections. Each of these techniques
on their own will find a specific set of bugs, but only by combining all of
them together can you have some confidence in your code.

My criticism is that Niall's attitude puts all the emphasis on unit tests
backed by code coverage, and this simply doesn't work in isolation.

Here are a couple of papers that I found useful in clarifying my thinking
when doing research for my book. The first looks at subtleties in using code
coverage tools, the second looks at problems with unit test automation.

http://www.testing.com/writings/coverage.pdf
http://www.satisfice.com/articles/test_automation_snake_oil.pdf

Mark
--
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128


Mark Pearce said:
In addition, code coverage tools used to verify your unit tests are not very
useful. They can't identify what the code should be doing, or what it isn't
doing. But developers tend to use these tools to say "But there can't be a
bug: my unit tests are infallible and my code coverage tool verified that
all paths have been executed".

This is a straw man. Clearly anyone who, when presented with a problem
says that it can't be in their code because their unit tests are
infallible is crazy. I could present the other straw man, the developer
who says "But there can't be a bug: I've walked through all the code
and it did what it should."

When presented with a problem, you verify that it really *is* a
problem, write a unit test which shows that it's a problem (and check
that that unit test fails on the current code), then fix the code and
check that the unit test now passes.
 

Alvin Bruney

You know Harry, this is really true. I have a C++ background and if I can't
step thru that sucker to see what's going on, I get spooked because so many
things can go wrong in C++ and not show its butt in a unit test. I think
that's probably less of a problem in .NET but it's a habit for me that i
can't shake off. Just today this guy thought my code had a bug, while he was
explaining i was stepping with my debugger (not really listening to him
while he was yapping ha ha) i was able to determine a little before he was
finished yapping that he was doing something wrong. what can i say, old
habits are as bad as good habits. i'll admit i have terrible habits.
 

Jon Skeet

Alvin Bruney said:
You know Harry, this is really true. I have a C++ background and if I can't
step thru that sucker to see what's going on, I get spooked because so many
things can go wrong in C++ and not show its butt in a unit test. I think
that's probably less of a problem in .NET but it's a habit for me that i
can't shake off.

That makes a lot more sense. One of the things I was thinking about
when reading this thread is that if your code is sufficiently complex
that just looking at it, it isn't crystal clear what's going on (so
that peer review can consist just of looking at the code) then it
should almost certainly be simplified. The mark of a great engineer
isn't that he produces complex code which a less able engineer can't
understand, but that he can see his way clear to solving a complex
problem by writing code that everyone can understand :)

Of course, the various bugs in the VS.NET 2002 debugger make me
somewhat more wary of stepping through code as well... I tend to regard
stepping through the code in a debugger as a last resort as it only
shows you what's happening *this* time. I'd rather take time to look at
the whole code of a sequence and understand it in a more global sense,
and then work out why it's not behaving as it should.
 

Harry Bosch

Jon Skeet said:
That makes a lot more sense. One of the things I was thinking about
when reading this thread is that if your code is sufficiently complex
that just looking at it, it isn't crystal clear what's going on (so
that peer review can consist just of looking at the code) then it
should almost certainly be simplified. The mark of a great engineer
isn't that he produces complex code which a less able engineer can't
understand, but that he can see his way clear to solving a complex
problem by writing code that everyone can understand :)

You hit the nail on the head. The point about being able to just look at
the code and see that it is (or is not) correct is the goal of code
clarity and simplification. And if you can write code that others can
understand clearly, you're that much better at it.

One thing I always hated when reviewing C++ code is those horrible string
loops some people loved to write. You know the kind, where it's a mass of
pointer variables with nested and's and or's, pre-/post-
increment/decrement operators, and inline assignments, all shoved into as
little space as possible. And most of the time, there was something in
the standard library that would do what the messy code was attempting, or
at least something that could simplify it. You can't look at code like
that and know if it is correct or not, you have to trace through it and
track the pointer values as you go, and check for ALL of the null
pointer, past-the-end, and off-by-one errors. And the programmer would
justify it as being "efficient" :) Things like that belong in a library,
written and debugged once (and optimized, if deemed necessary). And the
standard classes and libraries have most of this already.

As a funny aside, when I interviewed at Microsoft I was asked to write
some super-efficient string handling code at the whiteboard. I had to
laugh (to myself, of course :) -- of all the things I could have been
asked about! OTOH, I also had some fascinating discussions with some of
the PM's, who were quite sharp and knowledgeable, so it was not all
inappropriate.
 

Niall

There's not too much in here that I haven't already said before in this
thread, so I'll try to be brief (this time, for once :p)

Mark Pearce said:
Hi Niall,

performance is acceptable, then you can release it to the client without
ever needing to step through the code. <<

There are so many things wrong with this attitude that I don't know where to
start debugging your thought processes!

I'm surprised it took so long for someone to come out of the woodwork and
say something like this :p

Unit tests can only test your current thinking about what your code should
be doing. Most developers can virtually guarantee that their first ideas about
what their code should be doing are faulty. One of the major tools for
correcting your ideas is to walk through your code in a debugger to watch
the code flow and data flow. Relying on unit tests is just intellectual
laziness, a prop for faulty thinking.

Unit tests evolve as your requirements evolve. Just as unit tests
represent your current thinking, so does your step-through debugging. If you
don't know that your function should behave in X manner yet, you won't write
a unit test to ensure it does, and you won't realise it's not doing that
when you step through your code. In most cases, anything you are
subconsciously checking as you step through your code can and should become
an explicit test. If you do this, you make sure everyone has the same ideas
about what the code should be doing.

The only time I think this can be a problem is if you're dealing with API
stuff, it might be pretty difficult to test what's going on outside of your
control.

In addition, code coverage tools used to verify your unit tests are not very
useful. They can't identify what the code should be doing, or what it isn't
doing. But developers tend to use these tools to say "But there can't be a
bug: my unit tests are infallible and my code coverage tool verified that
all paths have been executed".

But the tools can be examined. If someone has some buggy code, and has a
unit test which should have picked that up, you can look at what happened
and work things out, improve the test. If someone has missed some buggy
functionality in a step-through debugging session, then by the time the bug is
discovered, no one has any idea why the bug wasn't originally caught. At
least with unit tests, you can improve their usage through examining
mistakes.

As you say, there's no silver bullet. But I still contend that it's a rare
case where you can pick something up with a debugger that you wouldn't have
picked up if you correctly used unit tests. And (as I've said heaps of times
before), unit tests have a lot of other advantages apart from just ensuring
that code is correct.

You say that relying on unit tests is laziness. At my company, not writing
the tests is laziness. It's easy to write some code, have a step through it,
bring up the form that uses it and toy around with it for a bit. But if you
don't stop to document what behaviour you want and how to push the code and
make sure it works properly, then only the original programmer benefits from
that testing, and they only benefit the once - when they do the test. After
that, there's no persisting benefit of the debug step through. I don't have
some kind of a rule saying I won't use the debugger, I just don't use it to
replace a unit test.

Man, if I could have written this much so easily at high school, I would
have scored much better in those English essays..

Niall
 

Jeff Ogata

Hi Niall,

I'm actually not going to get in between you and Mark here ;), I just had a
couple of quick questions on getting started with unit testing.

Can you point me to any resources that will give me quick overview of unit
testing? Are there any tools that you recommend? I've briefly looked at
NUnit -- is that the way to go?

Thanks,
Jeff
 

Niall

I don't know too many sites talking about unit testing, but it gets lots of
hits on Google.

There is: http://www.extremeprogramming.org/rules/unittests.html which is
fairly general, but it gives links to other things on the site related
to the unit tests and how they can be used.

There is also http://www.c2.com/cgi/wiki?UnitTest. On this site, they talk
about XP style unit tests being called "Programmer Tests", though I have
never heard them referred to as such before. There is heaps of information
on lots of things on this site, though I find I can wander for a long time
through all the links.

This is the first job I've had where unit tests are used, so I've only used
one framework, which is NUnit. We actually run on the older version of NUnit
because we've been using it for about 2 years, and we've modified the code
(it's open source, which is useful) significantly to meet our needs. As a
result, migrating to the newer NUnit would be a decent bit of work, and what
we have now does the trick. The new NUnit has a few features which are quite
handy, like the ability to unload the assembly while the test rig is
running, so you can run your test, go fix the code and run again without
having to restart the test app. So yeah, NUnit is pretty good. There may be
better ones out there, I'm not really sure.
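
To give you an idea of how little ceremony is involved, a fixture is just a
class with a couple of attributes on it - something along these lines (written
from memory of the NUnit 2.x syntax, so treat it as a sketch):

using System.Collections;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    private Stack stack;

    [SetUp]
    public void Init()
    {
        // runs before each [Test] method
        stack = new Stack();
    }

    [Test]
    public void NewStackIsEmpty()
    {
        Assert.AreEqual(0, stack.Count);
    }

    [Test]
    public void PushThenPopReturnsSameItem()
    {
        stack.Push("hello");
        Assert.AreEqual("hello", stack.Pop());
    }
}

You point the NUnit GUI or console runner at the compiled assembly and it
finds the fixtures through the attributes.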

Niall
 

Niall

Mark Pearce said:
This is perhaps the crux of where we differ - your attitude seems to be that
your main responsibility is to ensure that the code you're testing meets the
stated requirements. In my case, that's the very least of what I'm looking
for - indeed, I am usually surprised if my code has this type of bug, so I
don't tend to spend the majority of my time looking for it.

Instead, the majority of my time is spent looking for omissions or mistakes
in the requirements, and for design bugs and implementation mistakes.
Stepping though my code is one essential technique here, and so is the use
of code reviews. Studies show that each of these techniques finds a
different set of bugs.

What do you mean by implementation mistakes? Do you mean mistakes such that
the code doesn't do what it's supposed to do, or mistakes such as slower
code, messy code, etc? What I'm wondering is how you can be sure that your
code fully meets the requirements if you don't test it against the
requirements?

I agree that bad design, slower code isn't really the domain of unit
testing. With the performance thing, if a particular function has to run in
less than a certain amount of time, you can always write a test that fails
if it takes longer. I guess the attitude of the unit test is to give you the
most important thing - a program which does what the customer wants. At the
end of the day, the customer doesn't care if you have spaghetti code, as
long as the program works. On the other hand, if you have a great design,
but your program doesn't do what they want, then they won't be pleased.
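
Something like this is about as far as I'd take a timing test - a rough
sketch, where ReportBuilder and CalculateTotals are invented names and the
half-second limit is arbitrary:

using System;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
    [Test]
    public void CalculateTotalsFinishesWithinHalfASecond()
    {
        ReportBuilder builder = new ReportBuilder();   // invented name

        DateTime start = DateTime.UtcNow;
        builder.CalculateTotals();                     // invented name
        TimeSpan elapsed = DateTime.UtcNow - start;

        // fail the test outright if the call blows the time budget
        Assert.IsTrue(elapsed < TimeSpan.FromSeconds(0.5),
                      "CalculateTotals took " + elapsed.TotalMilliseconds + "ms");
    }
}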

I'm not advocating a mindset of "It works, meets the requirements fully, so
lock it away and it doesn't matter if it has a bad design." In fact, it's a
bit the opposite. Once you have the unit tests solidly down, then you know
your code meets the requirements, and you know that if something breaks, you
will find out about it. So at any time, anyone can come along and change the
code to what they think is a better design, faster, etc. So the unit test
doesn't test design, but it gives you a safeguard to facilitate design
changes.

Mark Pearce said:
To their surprise, the breakpoint was never hit and the test completed
successfully. A quick investigation with the debugger showed that a function
a few steps up the call chain had an optimisation that allowed it sometimes
to skip unnecessary work. In this case, it skipped the new code. Treating
the code as a black box in this case just didn't work - the developer put in
some inputs, got correct outputs and entirely missed that fact that his new
code wasn't being executed at all.

This isn't the way I write unit tests, and as far as I've seen, it's not the
way that they're supposed to be written, either. The unit test is supposed
to isolate and target specific areas of code. So there should be a test that
specifically targets the function in question, ignoring the optimisation. As
for the optimisation in the other method - there should be a unit test that
specifically targets that as well. Unit tests which test at a higher level
are ok too, and probably a good idea to watch the behaviour of the object at
a higher level as well, but like I said before, unit testing is under the
hood testing.

Mark Pearce said:
Here's another example, closer to home. The following code shows a nasty
bug:

bool AccessGranted = true;

try
{
    // See if we have access to c:\test.txt
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
}
catch (SecurityException x)
{
    // access denied
    AccessGranted = false;
}
catch (...)
{
    // something else happened
}

If the CLR grants access to the test file in this example, everything works
fine. If the CLR denies access, everything works fine as well because a
SecurityException is thrown. But what happens if the discretionary access
control list (DACL) on the file doesn't grant access? Then a different
exception is thrown, but AccessGranted will return true because of the
optimistic assumption made on the first line of code. The bug was really in
the requirements as well as in the code, because they didn't state what
should happen if the CLR granted access, but the file system didn't.
Stepping through this code would have shown that a completely different
exception was being thrown when the DACL denied access, and therefore would
have pointed to the omission in requirements as well as finding the bug.

I think this could have been coded better. In general, when you have code
that is trying to discover if you can do a certain thing, it should presume
the negative in the beginning. Especially for things like "Do I have the
security rights to do <xyz>" it should always be false in the beginning.
That way you ensure that the only time you ever think you have the right to
do the action is when the code that recognises the affirmative is run.

I'm not sure if you mean the base Exception type by the "..." in your code.
I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. If you
didn't mean to catch the base Exception type there, then the DACL exception
wouldn't be caught, and the unit test would show an error. This is the way
NUnit behaves - if an assertion fails, you get a test failure, if an
exception escapes from the test, you get a test error.

Personally, I would have written the function like:

bool AccessGranted = false;
try
{
    new FileStream(@"c:\test.txt",
                   FileMode.Open,
                   FileAccess.Read).Close();
    AccessGranted = true;
}
catch (SecurityException)
{
}
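
Just to make the point concrete, a test for the denied-by-DACL case would
flag the problem either way - with the original code the assert fails
(AccessGranted stays true), and with my version the unexpected exception
escapes and NUnit reports it as a test error. Something along these lines,
assuming the probe is wrapped in a little helper (FileAccessCheck.CanRead and
the file path are invented for the sketch):

using NUnit.Framework;

[TestFixture]
public class FileAccessCheckTests
{
    [Test]
    public void AccessIsNotGrantedWhenTheDaclDeniesRead()
    {
        // FileAccessCheck.CanRead is an invented helper wrapping the probe
        // above; c:\denied.txt stands for a file whose DACL denies us read.
        bool granted = FileAccessCheck.CanRead(@"c:\denied.txt");
        Assert.IsFalse(granted);
    }
}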

But, of course, an even more powerful tool than unit testing is hindsight...

Niall
 

Alvin Bruney

Niall said:
I guess the attitude of the unit test is to give you the most important
thing - a program which does what the customer wants. At the
end of the day, the customer doesn't care if you have spaghetti code, as
long as the program works.

that stopped being true years ago. customers now want their source code, or
even their in-house programmers to look at the code. they are getting wiser
about having it done properly in the first place so high-priced programmers
don't bleed them dry trying to make bread out of sphaghetti. spahgetti.
spahghetti. i know it doesn't look right, i just can't remember how to spell
it.
 

Mark Pearce

Hi Niall,

>> What I'm wondering is how you can be sure that your code fully meets the
requirements if you don't test it against the requirements? <<

Of course I test against requirements, but for me, that's just the first
step. After that, I start testing to find requirements bugs, design bugs,
implementation bugs, etc.

>> I guess the attitude of the unit test is to give you the most important
thing - a program which does what the customer wants. At the end of the day,
the customer doesn't care if you have spaghetti code, as long as the program
works. <<

If you're doing custom software development, your customer certainly cares
deeply about the internal quality of the code. The customer's staff will
have to debug, maintain and enhance the code for months and years to come.
If you're doing product development, the end-customers may not care about
the internal quality of the code, but the company for which you're
developing the product certainly cares. Once again, the company will have to
debug, maintain and enhance the product for a long time.

So somebody, somewhere, *always* cares about the code's internal quality.
>> So the unit test doesn't test design, but it gives you a safeguard to
facilitate design changes. <<

Here at least we agree on something!
>> The unit test is supposed to isolate and target specific areas of code. So
there should be a test that specifically targets the function in question,
ignoring the optimisation. <<

The developer in question thought that he *was* targeting a specific area of
code. He didn't know about the optimisation, and indeed had no way of
knowing about the optimisation without stepping through the code in a
source-level debugger.

Of course in hindsight, it should have been coded better. But you've just
been arguing that the internal code quality doesn't really matter, as long
as the unit tests are satisfied. You can't have it both ways.

>> I wouldn't write code that catches just a plain exception unless it was
planning to wrap the exception in a more meaningful one and re-throw. <<

There are various reasons to catch System.Exception, including the one you
mentioned. Another reason is to reverse a transaction after any exception.
Yet another reason is that many developers often put in a System.Exception
catch during testing, and forget to remove it. And yet another reason is
that developers sometimes don't realise that you shouldn't catch
System.Exception without rethrowing it. Finally, a few of the .NET base
class methods suppress all exceptions for "security" reasons, and just fail
silently.

Whatever the reason, you won't find this type of requirements/coding bug
unless you step through the code and find that an unexpected exception type
was being silently thrown and caught.

Regards,

Mark
----
Author of "Comprehensive VB .NET Debugging"
http://www.apress.com/book/bookDisplay.html?bID=128


 
