Is This Overkill?


Mike Hofer

I've been maintaining some source code, and I've noticed some radical
inconsistency in it, particularly in the way that parameters are
validated. This seems to have been pretty much the case throughout my
career.

For instance, consider the following block:

If siteId <= 0 Then
    Throw New ArgumentOutOfRangeException("siteId")
End If
If name Is Nothing Then
    Throw New ArgumentNullException("name")
End If
If name = String.Empty Then
    Throw New ArgumentException("Empty strings are not permitted.", "name")
End If
If buildingIdToIgnore < -1 Then
    Throw New ArgumentOutOfRangeException("buildingIdToIgnore")
End If

Sometimes, this code is collapsed as follows:

If siteId <= 0 Then Throw New ArgumentOutOfRangeException("siteId")
If name Is Nothing Then Throw New ArgumentNullException("name")
If name = String.Empty Then Throw New ArgumentException("Empty strings are not permitted.", "name")
If buildingIdToIgnore < -1 Then Throw New ArgumentOutOfRangeException("buildingIdToIgnore")

Sometimes, it's omitted altogether. Sometimes, not all of the
parameters to the procedure are validated. And in many cases, the
messages aren't consistent from one parameter validation to the next.
(Resource files would go a long way toward solving that, but that's a
whole different thread.)

What I've been pondering for some time is a way to make it easier to
validate method parameters. My *theory* is that
If...Then...Throw...End If is a lot of gangly code to write, which
leads to shortcuts and, eventually, outright omissions. So, a way to
abbreviate that and make the code clearer might help to encourage
consistent use of parameter validation, which should (in theory)
improve code quality throughout the system.

So I have this model I'm working on, and before I go completely off
the deep end with it, I thought I'd run it by you to see what you
thought of it.

Essentially, it's based on NUnit asserts to validate parameters
consistently and clearly. So, the code above is changed to this:

Validate.That(siteId, "siteId").IsNotNegative()
Validate.That(name, "name").IsNotNullOrEmpty()
Validate.That(buildingIdToIgnore, "buildingIdToIgnore").IsGreaterThan(-2)
Validate.That(connection).IsOpenAndIsNotNull()

My tests for the model are going very well. I think that once I get it
going, it could provide a very cool, extensible validation framework
for parameters. The "framework" is in a separate class library, and
the messages are in a string resource file, providing consistency.

The big caveat, of course, is that there's no guarantee that it solves
the *enforcement* problem. There's also the problem that FxCop thinks
I'm not validating parameters when I am.

So the question I have is, is this even worth doing? When an existing
code base is refactored to use it, it looks much cleaner, and appears
to be easier to read and maintain. But the question remains: is this a
real problem, or am I addressing an issue that doesn't really exist?

Your input is greatly appreciated.

Thanks!
 

PJ6

I will be very interested to see others' responses to this, but here's my
$.02.

While it is not always practicable or appropriate to do so, tasks of this
nature are best managed with a database.

Yes, I'm leaving a mountain unsaid with this statement, most notably
answering the question "how", but I would recommend this idea and
aspect-oriented programming to you for further study.

Paul
 

Ray Cassick

Mike Hofer said:
I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated.

<snip>

I like the idea and have thought about doing it myself at some point, just
never got around to it.



I might take it a bit further and move the validation to an
engine-style approach, where rules for validation could be stored,
perhaps in an XML file, and parameters would be passed in along with
their function names. This would give you a central location to store
rules about what is valid and what is not, as well as the ability to
alter validation schemes dynamically, without altering code, if needed
in the future.



It may also, depending on how it was architected, allow you to better
control rules for values that are dependent upon other values passed
into the function, without having to create a bunch of nested
statements in the code body itself.



No matter what you do, though, enforcement of its use is always going
to be an issue. Unless you stand over the shoulders of developers as
they code, the only real way to catch non-compliance would be code
reviews.
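For illustration, a bare-bones sketch of the engine idea Ray describes might look like the following. The rule format, class names, and the restriction to integer minimums are all hypothetical; a real rules engine would need richer types, operators, and caching.

```csharp
using System;
using System.Collections.Generic;
using System.Xml;

// Loads per-function parameter rules from XML and applies them.
class RuleEngine
{
    // key: "FunctionName.parameterName" -> minimum allowed value
    readonly Dictionary<string, int> minRules =
        new Dictionary<string, int>();

    public RuleEngine(string rulesXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(rulesXml);
        foreach (XmlNode rule in doc.SelectNodes("//rule"))
        {
            string key = rule.Attributes["function"].Value + "." +
                         rule.Attributes["param"].Value;
            minRules[key] = int.Parse(rule.Attributes["min"].Value);
        }
    }

    // Throws if a rule exists for this function/parameter and fails.
    public void Check(string function, string param, int value)
    {
        int min;
        if (minRules.TryGetValue(function + "." + param, out min)
            && value < min)
        {
            throw new ArgumentOutOfRangeException(param);
        }
    }
}

class Demo
{
    static void Main()
    {
        RuleEngine engine = new RuleEngine(
            "<rules><rule function='FindSite' param='siteId' min='1'/></rules>");
        engine.Check("FindSite", "siteId", 10);   // passes silently
        try { engine.Check("FindSite", "siteId", 0); }
        catch (ArgumentOutOfRangeException) { Console.WriteLine("caught"); }
    }
}
```

Note the trade-off Ray alludes to: the rules can change without recompiling, but every call now pays for a dictionary lookup, and the XML must be parsed (and ideally cached) up front.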
 

Jon Skeet [C# MVP]

Mike Hofer said:
I've been maintaining some source code, and I've noticed some radical
inconsistency in the code, particularly in the way that parameters are
validated. I've noticed that this seems to have been pretty much the
case throughout my career.

<snip>

You might want to look at Spec# for inspiration.
http://research.microsoft.com/specsharp/

If you don't mind the stack trace having one more line than is terribly
helpful, you can have a lot of useful exception-throwing validation
methods, e.g.

ValidateNonNull (name, "name");
ValidateRange (siteId > 0, "siteId");

etc
 

Mike Hofer

<snip>

You might want to look at Spec# for inspiration.
http://research.microsoft.com/specsharp/

If you don't mind the stack trace having one more line than is terribly
helpful, you can have a lot of useful exception-throwing validation
methods, e.g.

ValidateNonNull (name, "name");
ValidateRange (siteId > 0, "siteId");

etc

Spec# looks very interesting; my problem is that I'm dealing with
lots of VB.NET code, so I'm not sure how useful it will be. I'll keep
looking at it.

The extra items on the stack trace don't really bug me. I think that
the added value of clearer code offsets that tremendously (but that's
just my opinion, and is subject, as usual, to the rules governing
opinions).

The "framework" I'm developing does, in fact throw standard exceptions
if the tests don't pass. For example, the standard validator class for
a connection looks like this:

Option Explicit On
Option Strict On

Imports System.Data

Namespace Validation

    Public Class ConnectionValidator
        Inherits ParameterValidatorBase

        Friend Sub New(ByVal connection As IDbConnection, ByVal name As String)
            MyBase.New(connection, name)
        End Sub

        Public Sub IsNotNull()
            If Connection Is Nothing Then
                Throw New ArgumentNullException(Name)
            End If
        End Sub

        Public Sub IsOpen()
            If Not Connection Is Nothing Then
                If (Connection.State And ConnectionState.Open) = 0 Then
                    Throw New ArgumentException( _
                        "Operation requires an open connection.", Name)
                End If
            End If
        End Sub

        Public Sub IsOpenAndIsNotNull()
            IsNotNull()
            IsOpen()
        End Sub

        Private ReadOnly Property Connection() As IDbConnection
            Get
                Return DirectCast(InnerValue, IDbConnection)
            End Get
        End Property

    End Class

End Namespace

A helper class simplifies invocation as follows:

Option Explicit On
Option Strict On

Imports System.Data

Namespace Validation

    Public Class Validate

        Public Shared Function That(ByVal value As IDbConnection, _
                ByVal parameterName As String) As ConnectionValidator
            Return New ConnectionValidator(value, parameterName)
        End Function

    End Class

End Namespace

(I've snipped many of the other methods to simplify the sample.) While
this class isn't very extensible in its current incarnation, it's
just a proof of concept to verify that it works. (I'm going for The
Simplest Thing That Works at this point, just to verify that the
interface I want works.)

The end result is code that looks like this:

Validate.That(connection, "connection").IsOpenAndIsNotNull()

which tests that the connection is not null and is open. If either
rule fails, an appropriate exception is thrown, with minimal overhead
on the stack trace.

I've got different validator classes for each of the known core data
types (Boolean, Byte, Char, Date, Decimal, Double, Integer, Long,
etc.) and the database classes at this point. I also included one for
Enum and Object. These are the classes I use most commonly. The only
problem I'm running into is how the heck to make it extensible, so
that validators can be created in other libraries and invoked via the
Validate class at run time. I know that the current implementation
won't work that way. I'm thinking that I'll have to make it open
source, which would allow developers to extend it at their leisure,
but even that isn't the best solution.

It's been a long day, and I'm just having a tough time wrapping my
brain around the extensibility aspects of it. :)
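For what it's worth, one route to run-time extensibility without touching the Validate class is extension methods (new in C# 3.0 / VB 9). The sketch below is hypothetical and does not reflect Mike's actual classes; it assumes a generic validator that exposes its value and parameter name, which is all an external library would need to add its own checks:

```csharp
using System;

// Hypothetical generic validator, mirroring the Validate.That(...)
// pattern from the post (not the actual framework classes).
public class ParameterValidator<T>
{
    public ParameterValidator(T value, string name)
    {
        Value = value;
        Name = name;
    }

    public T Value { get; private set; }
    public string Name { get; private set; }
}

public static class Validate
{
    public static ParameterValidator<T> That<T>(T value, string name)
    {
        return new ParameterValidator<T>(value, name);
    }
}

// A third-party library can add new checks without modifying Validate:
public static class MyValidations
{
    public static ParameterValidator<int> IsNotNegative(
        this ParameterValidator<int> v)
    {
        if (v.Value < 0)
            throw new ArgumentOutOfRangeException(v.Name);
        return v;  // returning the validator allows chaining
    }
}

class Demo
{
    static void Main()
    {
        Validate.That(5, "siteId").IsNotNegative();   // passes
        try
        {
            Validate.That(-1, "siteId").IsNotNegative();
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("caught");
        }
    }
}
```

Because the generic `That` is strongly typed, value types aren't boxed, and any assembly that references the validator library can contribute new validations simply by defining extension methods.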

Overall, however, I'm *really* liking the effect it's having on the
refactored test code. Once the validators are debugged, they work
*exceptionally* well (pardon the pun).
 

Tom Leylan

Hi Mike: I like the basic idea. If I'm reading it right I have a couple of
ideas.

First, note that when you pass a connection you don't send a property. It
can be considered implied (or simply not needed) but it results in the
syntax being different. If there is an "open" property the syntax could be
changed to Validate.That( connection, "Open").IsTrue() to maintain an
"object, property" style. Without it do you introduce more situations where
the developer needs to stop and think "what is the test, what are the
parameters?"

Alternatively the siteId example could be: Validate.That(
siteId ).SiteIdIsValid()? This trades off having to send a second parameter
for more methods. Frankly I'd rather have more methods than have to send
hard to remember (and strings at that) property names. Your
IsOpenAndIsNotNull example demonstrates that a test doesn't need to be
applied to a particular property. I also would (probably) (as a developer)
not want to have to remember all the validation tests that apply to the
"siteId". I'd like to know if it was correct or not but perhaps not want to
have to remember to issue the IsNotNull() test in advance of some other
tests.

The buildingIdToIgnore example is suspect, as it places "business
logic" in the object user's hands. Should people testing whether the
buildingId is valid be required to know that it should be greater than
-2? If IsGreaterThan is a method that the buildingIdToIgnore class
should provide, why wouldn't it just report back "it isn't valid"?
Say it reports "yes," it is greater than -2, but the rule is that it
has to be less than 5 as well.

Finally... you may not want to pass a particular object (and a property) in
every case unless the only use is to check the current state of an active
object. It seems to me that testing values "prior to assignment" would be
helpful as well. So the UI developer could check if -5 is a valid
buildingIdToIgnore before actually assigning it. The rules apply to an
instance of an object but also to anybody who might like to know if a
particular value is "valid".
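Tom's "prior to assignment" point could be served by pairing a non-throwing predicate with the throwing guard, so the rule lives in one place. A hypothetical sketch (the class and rule here are invented for illustration):

```csharp
using System;

// Hypothetical rule holder: the UI asks IsValid before assigning,
// while method guards call Require, and both consult the same rule.
static class BuildingIdRule
{
    public static bool IsValid(int buildingIdToIgnore)
    {
        return buildingIdToIgnore > -2;  // single source of the rule
    }

    public static void Require(int buildingIdToIgnore, string name)
    {
        if (!IsValid(buildingIdToIgnore))
            throw new ArgumentOutOfRangeException(name);
    }
}

class Demo
{
    static void Main()
    {
        // The UI developer can check -5 before actually assigning it:
        Console.WriteLine(BuildingIdRule.IsValid(-5));  // False
        // A method guard throws on the same rule:
        BuildingIdRule.Require(3, "buildingIdToIgnore");
        Console.WriteLine("ok");
    }
}
```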

Does this help at all?

Tom
 

KKS

In my opinion, you have to think of performance as well. Calling a lot
of methods will kill the performance of your original method. So even
if these methods are simple and might be inlined, you are not
guaranteed that they all will be. Suggestions along the lines of using
a database or XML will be even worse performance-wise. My suggestion
to you is to use what you had originally. I don't think it looks ugly
at all.

regards
Kjetil Kristoffer Solberg
 

Nicole Calinoiu

When validating method arguments, it's important that any ArgumentException
(and subclass) instances appear to have been thrown directly from the target
method. If you defer validation to a helper method, the call stack will
make it look like the ArgumentException was thrown by the helper method in
response to a call from the original target method, and the calling
developer will assume the problem is your fault rather than his.

Obviously, it can be quite helpful to defer complex validation to helper
methods, but any resulting ArgumentException should still be thrown from the
target method. e.g.:

If Not Validate.That(connection).IsOpenAndIsNotNull() Then Throw New ArgumentException...
 

Jon Skeet [C# MVP]

In my opinion you have to think of performance as well. Calling a lot of
methods will kill the performance of your original method. So even if these
methods are simple and might be inlined you are not guaranteed that they all
will be. Suggestions along the lines as using a database or xml will be even
worse performance wise. My suggestion to you is to use what you had
originally. I don't think it looks ugly at all.

I disagree completely:

1) The most elegant and simple way should be used wherever performance
hasn't proved itself to be an issue, IMO. Calling a few methods is
unlikely to "kill" the performance of the original method, and if at
any point the method does any IO (file, socket etc) that's almost
certainly going to dwarf the validation costs.

2) Validation code should be robust but without getting in the way of
making it clear to the reader of the code what the "meat" of the method
is doing - what the real purpose is. Spending 12 lines doing validation
where 3 concise, readable lines will do is not a good idea.
 

Jon Skeet [C# MVP]

When validating method arguments, it's important that any ArgumentException
(and subclass) instances appear to have been thrown directly from the target
method.

I'm not sure that's particularly important. I'd say it's a "nice to
have" but I wouldn't make the validation code harder to ensure it. The
framework certainly doesn't. For instance:

using System;
using System.Collections.Generic;

class Test
{
    static void Main()
    {
        try
        {
            List<string> x = new List<string>();
            x[5] = null;
        }
        catch (Exception e)
        {
            Console.WriteLine (e);
        }
    }
}

displays (on my box):

System.ArgumentOutOfRangeException: Index was out of range. Must be
non-negative and less than the size of the collection.
Parameter name: index
   at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
   at System.ThrowHelper.ThrowArgumentOutOfRangeException()
   at System.Collections.Generic.List`1.set_Item(Int32 index, T value)
   at Test.Main()

There are two more levels than one might expect. I don't think that's
particularly confusing, so long as the message is very clear.
 

dbahooker

I don't think that we should ever use 'XML' and 'Architected' in the
same sentence

XML is for retards that don't have the capacity to learn SQL


it's SLOWER and LARGER with no tangible benefits
 

dbahooker

sorry kid you're full of shit

databases are the fastest things in the universe
maybe you need to stfu and learn Analysis Services
 

Mike Hofer

When validating method arguments, it's important that any ArgumentException
(and subclass) instances appear to have been thrown directly from the target
method. If you defer validation to a helper method, the call stack will
make it look like the ArgumentException was thrown by the helper method in
response to a call from the original target method, and the calling
developer will assume the problem is your fault rather than his.

Obviously, it can be quite helpful to defer complex validation to helper
methods, but any resulting ArgumentException should still be thrown from the
target method. e.g.:

If Not Validate.That(connection).IsOpenAndIsNotNull() Then Throw New ArgumentException...

I'm not sure I agree wholeheartedly with this one, though I understand
the concern. The question is one of risk mitigation, and whether or
not the trade-off in code clarity is worth the additional entries in
the stack trace.

I tend to agree with Jon. Other classes in the framework add
additional entries to the stack trace. However, the fact that the
validation methods in this case add only two entries, both of which
clearly indicate validation failures, makes it plain that we're
dealing with a parameter validation. The classes and methods are
*very* clearly named. I *think* (dangerous word, I know) that any
competent programmer would be able to discern that the problem in
question was that the validation test had failed.

Further, one of the points of the framework is to refactor complex
validation code so that it's *easier* to do, encouraging its
implementation. If I do it the way you've shown, it doesn't appear
that it would be easier to do. I might even argue that it would be
harder, and the additional overhead in code would buy me nothing. In
that event, I might as well just step away from the keyboard, and drop
the project.

My goals here are to provide a streamlined means of validating
parameters that results in a consistent set of error messages and
clear code, with a minimal impact on the stack trace. I don't think
any framework can provide that without either impacting the stack
trace or requiring lots of code at the point of call, which defeats
its purpose. I could be wrong, and am certainly willing to entertain
that notion.

I am definitely interested in a differing viewpoint. It may prove to
be true that the system would benefit from *both* ways of doing
things. So by all means, come back with a response, and help me to
build something great.

Thanks!
 

Mike Hofer

In my opinion you have to think of performance as well. Calling a lot of
methods will kill the performance of your original method. So even if these
methods are simple and might be inlined you are not guaranteed that they all
will be. Suggestions along the lines as using a database or xml will be even
worse performance wise. My suggestion to you is to use what you had
originally. I don't think it looks ugly at all.

regards
Kjetil Kristoffer Solberg

Performance is a concern; it's one of the reasons I ruled out
reflection as a way to do it, and also pitched XML templates (parsing
the DOM tends to be slow). I'm also not doing any file IO or database
access.

The method implementations are as small, compact, and simple as can
be. They perform a concrete test and, if the test fails, throw an
exception right away.

The first implementation actually used lots of individual objects for
each test. Each test was represented by a test class that implemented
an interface, and the test was passed to a method that implemented the
interface's sole method.

However, that resulted in a lot of objects being created. I didn't
want to negatively impact performance that way, so I simply settled
for a sealed class that has static methods that return the validator
classes. (It's essentially a factory; the That method is overloaded to
provide strongly typed versions that prevent boxing and unboxing of
value types).

The validator classes are designed around the base data types and the
interfaces I find I commonly use. (Wherever possible, I use interfaces
instead of concrete classes.) The methods on the validator classes are
very small, and simply perform the test, and throw an exception if it
fails, using standard error messages that are stored in a resource
file, providing a high degree of consistency.

If I need to perform multiple tests on a single parameter in VB, I can
now do this:

With Validate.That(myParameter, "myParameter")
    .IsNotNull()
    .IsInRange(minValue, maxValue)
End With

So yes, performance is a consideration. I haven't forgotten that. And
if I can find any other ways to improve it, I certainly will. I'm open
to suggestions, so fire away!
 

Jon Skeet [C# MVP]

Performance is a concern; it's one of the reasons I ruled out
reflection as a way to do it, and also pitched XML templates (parsing
the DOM tends to be slow). I'm also not doing any file IO or database
access.

Reflection I can see being a potentially significant performance hit -
but I can't see that a couple of extra method calls which don't need to
do a lot in the success case will cause much performance loss.

You could always profile it, but unless you're really doing *very*
little "real" work in the methods, *and* calling them very often
(really, really often) I doubt you'd see anything.

You could always make the methods compile-time conditional (using
ConditionalAttribute), so that it's easy to repeatedly do performance
tests with and without any validation. That should make it easy to
determine whether or not the impact is significant.
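One wrinkle worth noting with Jon's suggestion: ConditionalAttribute applies only to methods that return void (Sub in VB), so chained calls like Validate.That(...).IsNotNegative() can't be stripped directly. A void wrapper works, though. A hypothetical sketch:

```csharp
using System;
using System.Diagnostics;

static class Validation
{
    // Calls to this method are compiled out entirely unless the
    // VALIDATE symbol is defined (e.g. /define:VALIDATE), letting you
    // benchmark with and without validation.
    [Conditional("VALIDATE")]
    public static void Require(bool condition, string parameterName)
    {
        if (!condition)
            throw new ArgumentOutOfRangeException(parameterName);
    }
}

class Demo
{
    static void Process(int siteId)
    {
        Validation.Require(siteId > 0, "siteId");
        // ... real work ...
    }

    static void Main()
    {
        Process(42);  // valid either way, so this prints in both builds
        Console.WriteLine("done");
    }
}
```

Note that the condition expression (`siteId > 0`) is removed along with the call when the symbol is undefined, so even the cost of evaluating the test disappears from the release build.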
 

Mike Hofer

Reflection I can see being a potentially significant performance hit -
but I can't see that a couple of extra method calls which don't need to
do a lot in the success case will cause much performance loss.

You could always profile it, but unless you're really doing *very*
little "real" work in the methods, *and* calling them very often
(really, really often) I doubt you'd see anything.

You could always make the methods compile-time conditional (using
ConditionalAttribute), so that it's easy to repeatedly do performance
tests with and without any validation. That should make it easy to
determine whether or not the impact is significant.

Thanks for the advice, Jon. I haven't noticed any significant
performance hit so far. Like I said, the functions are lean and mean,
and don't do anything except test and throw on failure. Unless I'm
doing a lot of recursion (not the case yet!), I don't see a problem.

I will run it through a profiler once I get it all set just so I can
see the difference. I'm fairly certain that a lot of this stuff can be
inlined, so it shouldn't be all that costly.
 

Nicole Calinoiu

Jon Skeet said:
I'm not sure that's particularly important. I'd say it's a "nice to
have" but I wouldn't make the validation code harder to ensure it.

How important it might be really depends on the intended audience for
your API. If you're developing libraries for internal use within an
organization, and you have the opportunity to communicate directly
with the API users, then it may not be very important at all. However,
if you're producing an API for truly public use and can expect to have
very inexperienced developers as clients, then it may have a much
greater impact.

The framework certainly doesn't.

Actually, in most places in the framework, the developers have gone to the
trouble of throwing directly from the method whose arguments are being
validated (see, for example, ArrayList.Item). The List<T>.Item
counter-example should probably be considered a bug. In fact, it's even
worse than you might expect since it's not actually deferring validation to
the helper method, but only the exception creation
and throwing ("If (index >= Me._size) Then
ThrowHelper.ThrowArgumentOutOfRangeException"). I suspect that the
developers might have taken the "consider using exception builder methods"
design guideline (see
http://msdn2.microsoft.com/en-us/library/ms229030.aspx) a little too far in
this particular case. At any rate, whatever the reason, the fact that
someone working on the BCL made what almost certainly amounts to a mistake
shouldn't be justification for the rest of us to get lazy. ;)

There are two more levels than one might expect. I don't think that's
particularly confusing, so long as the message is very clear.

If you were the only client of my APIs, I wouldn't worry about where the
exceptions were thrown either. :)
 
 

Nicole Calinoiu

Mike Hofer said:
I'm not sure I agree wholeheartedly with this one, though I understand
the concern. The question is one of risk mitigation, and whether or
not the trade-off in code clarity is worth the additional entries in
the stack trace.

That's certainly true, and only you can decide which way the trade-off tips
for your particular API. However, the larger and more diverse the intended
developer audience for your API, the more emphasis you should be placing on
the interests of the API users versus those of the API developers.

I tend to agree with Jon. Other classes in the framework add
additional entries to the stack trace.

See my response to Jon. The cases where the framework does this are
probably not desirable.

However, the fact that the
validation methods in this case only add two entries, and both of them
indicate that they are Validation failures, makes it clear that we're
dealing with a parameter validation. The classes and methods are
*very* clearly named. I *think* (dangerous word, I know) that any
competent programmer would be able to discern that the problem in
question was that the validation test had failed.

Are you sure that all the developers in your intended API audience are all
that competent, experienced, etc.? If so, then you have no worries.

Further, one of the points of the framework is to refactor complex
validation code so that it's *easier* to do, encouraging its
implementation.

Where do you get that idea? If encouraging argument validation was a
serious goal for the folks on the CLR or language teams at Microsoft,
declarative validation would have made its way out of the Spec# research pit
quite some time ago, and they would never have permitted the introduction of
automatic properties without support for declarative validation. Heck, they
could have done as little as salvaging the XC# post-compiler
(http://www.resolvecorp.com), which seems to have suffered a largely ignored
demise sometime before Visual Studio 2005 was released.

Personally, I would argue that there is next to no framework support for
doing the right thing with respect to argument validation, but then I might
just be a bit bitter... ;)

If I do it the way you've shown, it doesn't appear
that it would be easier to do. I might even argue that it would be
harder, and the additional overhead in code would buy me nothing.

Again, if it actually buys you nothing because your intended audience are
sufficiently sophisticated, then it's probably not worthwhile. However,
it's not actually all that much more work to put the throws in the target
methods. Granted, it's irritating (there is a reason I'm bitter <g>), but
it's not exactly expensive.

My goals here are to provide a streamlined means of validating
parameters that results in a consistent set of error messages, clear
code, with a minimal impact on the stack trace. I don't think that any
framework is going to be able to provide that without impacting the
stack trace, without resulting in lots of code at the point of call,
which defeats its purpose. I could be wrong, and am certainly willing
to entertain that notion.

There's a way to avoid the impact on the stack trace, but it's terribly
inelegant and would require adding more code to the target method. However,
just for the sake of completeness, here it is...

Since the call stack for an exception is generated when it is thrown, your
target method would wrap its validation method calls in a try catch that
explicitly rethrows any caught argument exceptions. e.g.:

Try
    Validate.That(siteId, "siteId").IsNotNegative()
    Validate.That(name, "name").IsNotNullOrEmpty()
    Validate.That(buildingIdToIgnore, "buildingIdToIgnore").IsGreaterThan(-2)
    Validate.That(connection).IsOpenAndIsNotNull()
Catch ex As ArgumentException
    Throw ex
End Try

Personally, I think it's far less palatable than the alternatives, but
ymmv...
 

Jon Skeet [C# MVP]

<"Nicole Calinoiu" <calinoiu REMOVETHIS AT gmail DOT com>> wrote:

Personally, I would argue that there is next to no framework support for
doing the right thing with respect to argument validation, but then I might
just be a bit bitter... ;)

I agree wholeheartedly. Apart from the sad lack of Spec# progression,
it's a shame there's no way of applying an attribute to a method to say
"any exceptions thrown by this method should appear as if they're
thrown by the caller" which would make things a lot simpler.

I think it's just a balance thing - I think it's likely that if we ask
developers to do validation "the long way", they may well not bother,
and that's worse than exceptions having a couple of levels of stack too
many.
 
