You can do that, and you can also use separate databases for separate
groups of tests. We, for example, use one database for fetch-testing
our O/R layer, another for save-testing, and a third for combinations
of the two. This way you don't lose a lot of time cleaning up
databases, nor do you have to pre-define data to insert, as it's
already there (in the fetch database, for example).
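The split described above can be sketched roughly as follows. This is a minimal illustration in Python with SQLite (not the author's .NET setup); the table, function names, and seed data are all hypothetical:

```python
import sqlite3

# Hypothetical sketch: one pre-seeded database for fetch tests (read-only,
# so no cleanup needed) and a fresh, empty database for save tests.

def make_fetch_db():
    # Seeded once; fetch tests only read from it.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
    db.executemany("INSERT INTO customer (name) VALUES (?)",
                   [("Alice",), ("Bob",)])
    return db

def make_save_db():
    # Schema only; save tests insert and verify their own rows.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
    return db

# Fetch test: the data is already there, nothing to set up or tear down.
fetch_db = make_fetch_db()
rows = fetch_db.execute("SELECT name FROM customer ORDER BY name").fetchall()
assert [r[0] for r in rows] == ["Alice", "Bob"]

# Save test: starts empty, inserts, verifies.
save_db = make_save_db()
save_db.execute("INSERT INTO customer (name) VALUES (?)", ("Carol",))
count = save_db.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
assert count == 1
```

The point is that neither group of tests pays the cleanup or setup cost of the other.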
However, the biggest part of testing your DAL isn't about TDD; it's
about testing the expected edge cases of your design. I can't stress
enough how dumb it is to design a DAL using TDD, unless you want a DAL
with a gazillion methods scattered all over the place and no uniform
approach to data flow, graph management, etc.
For example, given a graph of customer, order, order detail, product
and employee objects: how is that graph persisted? In what order? How
do you calculate that order? Are newly DB-generated PK values synced
at once, or delayed? If an entity isn't 'dirty' (changed) but has an
association with a new entity which will get its PK set in the DB when
it's saved, is the non-dirty entity expected to be saved as well?
(Yes.) When will that fail?
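The "in what order?" question is essentially a topological sort over the FK dependencies in the graph. A minimal sketch, assuming a hypothetical dependency map (the entity names are the ones from the example above, but the structure is made up for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical FK dependencies: an order detail references its order and
# product; an order references its customer and employee. A DAL has to
# persist parents before children, so DB-generated PK values exist by
# the time the referencing rows are saved.
depends_on = {
    "order":        {"customer", "employee"},
    "order_detail": {"order", "product"},
    "customer":     set(),
    "employee":     set(),
    "product":      set(),
}

save_order = list(TopologicalSorter(depends_on).static_order())

# Every entity must come after everything it depends on. Note this is
# also why a non-dirty entity referencing a NEW parent still has to be
# saved: its FK value only becomes known during this traversal.
for entity, deps in depends_on.items():
    for dep in deps:
        assert save_order.index(dep) < save_order.index(entity)
```

A cycle in the graph (e.g. mutual FKs) is exactly the kind of case where this fails, which is the sort of edge case the tests should target.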
You won't run into those problems when using TDD to access your data,
as you will only program out what you need for the features USING the
DAL. That's OK if your DAL is very small and your database has perhaps
3 tables, but if your database has 200 tables, you're not going to
succeed when you start with TDD and program your way down to the
database. You'll soon enter a complex area where things have to be
solved in a generic way, and at that point you have to abandon TDD
altogether (read: you're forced to think up-front and make design
decisions up-front, oh my) to get things set up generically.
That's not to say TDD doesn't work in this area at all. You can
perfectly well design your API accessing the DAL, i.e. your query
interface, using TDD. However, don't fall for the assumption that when
you write, say, 1000 tests, your DAL works, nor that by writing 1000
tests and the DAL code necessary to pass them, your DAL is finished
and suitable for your application.
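To make the distinction concrete: a query interface is the kind of thing TDD can drive, because each test pins down one case of SQL generation. A toy sketch (hypothetical `Query` class, nothing to do with any real O/R mapper's API):

```python
# Hypothetical minimal query interface of the kind you could design
# test-first: each test pins down the SQL for a case you thought of,
# but says nothing about the cases you didn't think of.

class Query:
    def __init__(self, table):
        self.table = table
        self.predicates = []
        self.sorts = []

    def where(self, clause):
        self.predicates.append(clause)
        return self

    def order_by(self, column):
        self.sorts.append(column)
        return self

    def to_sql(self):
        sql = f"SELECT * FROM {self.table}"
        if self.predicates:
            sql += " WHERE " + " AND ".join(self.predicates)
        if self.sorts:
            sql += " ORDER BY " + ", ".join(self.sorts)
        return sql

# A TDD-style test: one known input, one expected output.
q = Query("customer").where("country = 'NL'").order_by("name")
assert q.to_sql() == "SELECT * FROM customer WHERE country = 'NL' ORDER BY name"
```

A thousand tests like that one verify a thousand specific inputs; they don't verify the generic machinery behind them.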
TDD people bash the crap out of me when I tell stories like the one
above. It's up to you what to believe, but I can assure you: I know
what it takes to write a big, feature-rich data access framework, and
it's not doable with TDD.
That's not to say unit tests suck; on the contrary. They shine as
automated integration tests: does our code also work on Oracle? When I
specify 2, 3 or 4 predicates, 0, 1 or 3 sort operators, and project
from a projection of a projection (or not), does it still work?
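That kind of matrix is easy to enumerate mechanically. A sketch of the idea, with the dimension names and values taken from the example counts above (the harness itself is hypothetical):

```python
from itertools import product

# Sketch: enumerate the combination matrix described above -- number of
# predicates, number of sort operators, projection nesting depth -- so
# one parameterized integration test run covers every combination.
predicate_counts = [2, 3, 4]
sort_counts = [0, 1, 3]
projection_depths = [0, 1, 2]   # 2 = projection of a projection

cases = list(product(predicate_counts, sort_counts, projection_depths))
assert len(cases) == 27

for n_pred, n_sort, depth in cases:
    # In a real suite, build the query for this combination here and run
    # it against each target database (e.g. SQL Server and Oracle),
    # comparing the results.
    pass
```

Running the same generated matrix against each supported database is what turns "does it also work on Oracle?" from a hope into a checked fact.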
But what to test? A DAL has unlimited use cases, and writing a test
for every one of them isn't doable. So you have to look at the design
of your DAL, reason about where things might go wrong, and write tests
for THAT.
FB
--
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website:
http://www.llblgen.com
My .NET blog:
http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------