Now I understand. We went the purchase route on that one:
We are just now beginning to implement Rational XDE. I haven't used it
enough to give a good review, but it does a lot of what you are talking
about.
It's a UML modeling tool, and (unlike Visio) it supports round-tripping.
XDE actually runs inside VS.NET, so the diagram surface is just another
tab in the code editor.
I can, for example, sketch out a class in a class diagram, and instantly see
the code declarations. If I put text in the documentation window for the
UML diagram, it inserts that text as a <summary> XML comment in the code.
If I modify the code directly (example: I add a helper method that doesn't
really affect the architecture of the class), the diagram is instantly
updated to show the new method.
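To make the round trip concrete, here is roughly what the generated side might look like. XDE emits C# with <summary> XML doc comments; this sketch uses Java with a Javadoc comment standing in for the <summary> tag, and the class and method names are invented purely for illustration.

```java
/**
 * Tracks a customer's open orders.  In XDE, this text would come
 * straight from the documentation window on the UML diagram (in the
 * real C# output it would appear as a <summary> XML comment).
 */
public class OrderTracker {

    // Declared on the class diagram; appears in code immediately.
    public int countOpenOrders() {
        return 0; // body stubbed out by the generator
    }

    // Helper added by hand in the editor; the diagram picks it up
    // instantly, since it is just another operation on the class.
    boolean isOpen(String status) {
        return "OPEN".equals(status);
    }
}
```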
Looks pretty cool so far -- it had better be; it was not cheap.
On a more general note, the problem always seems to be maintenance. For
some types of classes (especially those with no custom behavior), you can
always just change the document and regenerate the code wholesale. Often,
though, I run into situations where some behavior in the generated code
needs to be tweaked or customized. Unless the document is rich enough to
capture all of the important nuances, you can't overwrite the existing code
with newly generated code without fear of losing manually applied changes
to the method bodies. On the other hand, if the document format is rich
enough to capture all of the important details, at what point does it cease
to be a document and become the programming language? That is: when does
the level of detail in the document become so rich that it drowns out the
clarity you were trying to create in the first place?
My colleagues and I have been debating this a lot lately, as we try to
figure out exactly what role our UML models will have in the organization.
The two extremes seem to be:
(super weak): the drawings are really just reference material, created
after the fact
(super strong): all code is generated from the models, with no manually
entered code.
Now obviously, those are at the very edges of the spectrum, and the right
balance for us lies somewhere in the middle. We've been trying to figure
out exactly where that point is.
FWIW: for our data classes (which we generate with an in-house generator),
we found that we do have to modify behavior in some cases. Our solution was
to generate two layers of code. Layer 1 is an abstract class that is
"generate only": it is always overwritten on regeneration, and hand-coded
sections are not allowed (all the strongly typed field definitions, etc.,
go here). Layer 2 is the implementation, which is generated once; after
that, developers are allowed to override methods to modify behavior. There
are some inherent problems with that approach, because the data model can
change over time (incremental and major upgrades), so we had to write some
custom merge code to handle changes caused by modifications to the database
structure. It was kind of a bummer, but we've got it working pretty well
now. Of course, Whidbey, generic types, and partial classes would have
helped us out a lot with some of this stuff.
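For what that two-layer split might look like, here is a minimal sketch in Java (the original shop was on .NET, and every name below is invented for illustration). The abstract base is the "generate only" layer that the generator always overwrites; the subclass is the once-generated layer that developers own and may customize by overriding.

```java
// Layer 1: "generate only" -- regenerated from the data model on every
// run, so hand edits are not allowed here.  The strongly typed field
// definitions and accessors live in this class.
abstract class CustomerDataBase {
    protected int id;
    protected String name;

    public int getId() { return id; }

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }
}

// Layer 2: generated once as an empty shell, then owned by developers.
// The generator never touches this file again, so overrides here
// survive regeneration of the base class.
class CustomerData extends CustomerDataBase {
    @Override
    public void setName(String name) {
        // Hand-written tweak: normalize input before storing it.
        super.setName(name == null ? "" : name.trim());
    }
}
```

Partial classes in Whidbey (C# 2.0) let the generated and hand-written halves of one class live in separate files, which removes much of the need for this inheritance trick.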