Why doesn't C# allow incremental compilation like Java?


Andy

I suspect I'm in the minority because most people don't practise TDD,
and most people don't realise how lovely it is not to have to wait for
a compiler.

Well, true, but I do TDD and haven't had much issue recompiling. As I
said, it always takes me longer to code the change I wish to test than
to compile.
You've misunderstood what Eclipse does. When the save has finished, all
the classes have been compiled. However, they're still individual class
files (which makes them easy to write individually, of course). They
can still be run in that form - that's the difference between .NET and
Java. In Java although you *can* bundle classes together, you don't
have to.

It's been a while since I've done what little Java I have; are .class
files the unit of deployment, then?
I don't think I ever suggested that I'd recompile after changes quite
that minor (although as it's so cheap, you might as well save, and that
happens to compile). I still recompile very often (in Eclipse) though -
partly because of TDD, and partly because it just gives a warm fuzzy
feeling to *know* that you've got no compilation errors, because
everything *has* been compiled.

There are times I recompile frequently, if I'm making minor fixes to
code. It just doesn't seem to take that long for me. Perhaps this is
because I have a lot of focused assemblies though.
 

Barry Kelly

Andy said:
It's been a while since I've done what little Java I have; are .class
files the unit of deployment, then?

Jar files (Java archives) are the usual unit of deployment. Jar files are
zips of the directory structure and a manifest file, with one directory
for each element on the namespace path, so e.g. the class
java.lang.Class is in java/lang/Class.class of rt.jar (rt = runtime
jar).
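A minimal sketch of that layout using the standard `java.util.jar` API; the class name and jar path here are invented for illustration, but the entry-path convention is the real one:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class JarLayoutDemo {
    // A class's entry path mirrors its package name, just as
    // java.lang.Class lives at java/lang/Class.class inside rt.jar.
    public static String entryFor(String className) {
        return className.replace('.', '/') + ".class";
    }

    public static void main(String[] args) throws Exception {
        File jar = File.createTempFile("demo", ".jar");
        jar.deleteOnExit();

        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), mf)) {
            // One entry per class, one directory per package element.
            out.putNextEntry(new JarEntry(entryFor("com.example.Foo")));
            out.write(new byte[0]); // placeholder; a real jar holds compiled bytecode here
            out.closeEntry();
        }

        try (JarFile jf = new JarFile(jar)) {
            System.out.println(jf.getJarEntry("com/example/Foo.class") != null); // prints "true"
        }
    }
}
```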

-- Barry
 

Jon Skeet [C# MVP]

Andy said:
Ok, but I haven't had .Net project compiles take a huge amount of
time.

It depends on what you mean by "a huge amount of time". To me, 10
seconds is enough to be a distraction.
Are you talking about IntelliSense or the VB.NET background compiler?
The former seems to eliminate 99% of compile errors for me. I haven't
used the latter, but I understand it DOES compile using vbc as you
type.

Right. Shame it's still VB.NET ;)
I see. I'm not sure that running a program even with compile errors
is useful though.

Rarely, indeed.
As do I. I typically have to wait longer for NUnit to reload the
assemblies than I do for the compiler though.

Well, that's effectively part of the cycle, possibly again due to the
way that .NET assemblies work (as well as due to the lack of tight
integration between VS and NUnit).
Actually I do like TDD; it's just that VS lacking background /
incremental compilation hasn't made an impact on how often I can test.

I really don't mean to be patronising (and I know perfectly well how
patronising this sounds) but it's easy to say that something doesn't
have an effect if you've never been without it. I didn't realise quite
how useful closures could be until I started working in a language
which supported them really well.
VB.NET users complain that things like invalid casts aren't caught in
C# until they compile, but VB.NET catches all these errors almost
immediately.

That doesn't mean it's actually being compiled - just error checked.
The important thing is whether or not it's constantly updating the
assemblies. What does it still need to do when you hit build?
I can't fault you for preferring C# over VB though. :)
FWIW, I have heard that Orcas will support C# background
compilation à la VB.NET.

I'll have a look at what beta 1 does on my virtual PC.
Understood, but again, I haven't seen many places where compiling
takes all that long at all. 1 - 2 seconds is typical, except for my
forms application, but that's also adding in resources and such.

As I say, it's about 10 seconds on the solutions I'm working on at the
moment, and that's a lot longer than with a much bigger project in
Eclipse.
A forms application or straight class library? My forms application
takes a bit of time, mostly because of all the resources which get
embedded. Of course I'm planning on moving most of the forms to
library assemblies though, both to speed project compile time and
speed the loading of my application.

Class libraries and forms, or class libraries and web services,
depending on the solution.

The fact that you're thinking of changing the design of your
application even *partly* due to compilation time means that it *is*
significant to you though.
I haven't noticed a problem in the teams I've been in (solo now
though). I haven't used Eclipse, but I have used IDEs which do
incremental compiling. If Orcas has background compilation
for C# as I've heard, I guess I'll see then if it makes a difference.

Possibly - depending on what it really does.
I haven't used ReSharper yet, although I understand how much of a
benefit that functionality will provide. I'm hoping to purchase
ReSharper soon.

It's lovely from what I've experienced - although I think it does make
the editor slightly less responsive. The trade-off is worth it for me.
 

Jon Skeet [C# MVP]

Andy said:
Well, true, but I do TDD and haven't had much issue recompiling. As I
said, it always takes me longer to code the change I wish to test than
to compile.

Yes, it still takes me longer to write the code too - but the build
cycle is enough to be a distraction and an irritant.
It's been a while since I've done what little Java I have; are .class
files the unit of deployment, then?

They can be. You *can* bundle them into jar files (and they're almost
always deployed that way) but you can equally not bother. Java can load
the classes just as well when they're each in a separate file as when
they're bundled together. This, unfortunately, means that there's no
equivalent of "internal" in Java.
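A sketch of that point (assuming a JDK is present, since `ToolProvider` needs a compiler; the class `Hello` is invented): each compiled class is a standalone file that a class loader can pick up straight from a directory, with no bundling step in between.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class LooseClassDemo {
    // Compiles one source file to a plain directory, then loads the
    // resulting loose .class file directly - no jar involved.
    public static String compileAndGreet() throws Exception {
        Path dir = Files.createTempDirectory("classes");
        Path src = dir.resolve("Hello.java");
        Files.write(src,
            "public class Hello { public static String greet() { return \"hi\"; } }"
                .getBytes("UTF-8"));

        // Needs a JDK: on a bare JRE this returns null.
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        if (javac.run(null, null, null, "-d", dir.toString(), src.toString()) != 0) {
            throw new IllegalStateException("compilation failed");
        }

        // The individual Hello.class in the directory is directly loadable.
        try (URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            Class<?> hello = loader.loadClass("Hello");
            return (String) hello.getMethod("greet").invoke(null);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compileAndGreet()); // prints "hi"
    }
}
```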
There are times I recompile frequently, if I'm making minor fixes to
code. It just doesn't seem to take that long for me. Perhaps this is
because I have a lot of focused assemblies though.

I think it depends on quite a few factors. Even just cycling through
the projects which have no changes seems to take longer than I'd expect
it to, to be honest.
 

Jon Skeet [C# MVP]

Barry Kelly said:
Jar files (Java archives) are the usual unit of deployment. Jar files are
zips of the directory structure and a manifest file, with one directory
for each element on the namespace path, so e.g. the class
java.lang.Class is in java/lang/Class.class of rt.jar (rt = runtime
jar).

They're the *usual* unit of deployment - but you don't need to perform
that step just to run the application or the unit tests. Being able to
only rewrite the few small files which represent classes which have
changed is (I suspect - I haven't actually checked any of this) part of
what makes incremental compilation work so well with Java.
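The timestamp check at the heart of that idea might look like the sketch below - my illustration, not Eclipse's actual algorithm (its builder also tracks inter-class dependencies); all file names are invented:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class StaleCheck {
    // A source file needs recompiling when its output is missing or older.
    // With one small .class file per source file, only the stale ones get
    // rewritten, which keeps the incremental step cheap.
    public static boolean needsRebuild(Path source, Path output) throws Exception {
        if (!Files.exists(output)) return true;
        return Files.getLastModifiedTime(source)
                    .compareTo(Files.getLastModifiedTime(output)) > 0;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("build");
        Path src = dir.resolve("Foo.java");
        Path cls = dir.resolve("Foo.class");
        Files.write(src, new byte[0]);

        System.out.println(needsRebuild(src, cls)); // prints "true": no .class yet

        Files.write(cls, new byte[0]);
        // Force the output to be newer than the source.
        Files.setLastModifiedTime(cls, FileTime.fromMillis(System.currentTimeMillis() + 60_000));
        System.out.println(needsRebuild(src, cls)); // prints "false": output is newer
    }
}
```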
 

Barry Kelly

Jon said:
They're the *usual* unit of deployment

Sure - that's why I used the word 'usual', especially in the context of
'deployment' (as opposed to development). :)

I was just supplying information.

-- Barry
 

Jon Skeet [C# MVP]

Barry Kelly said:
Sure - that's why I used the word 'usual', especially in the context of
'deployment' (as opposed to development). :)

I was just supplying information.

Sure - I was just trying to make it crystal clear that it wasn't the
only way that code could be run.
 

tjmadden1128

That's a slightly different feature - edit and continue - which I'm
actually *not* so keen on. (I believe it encourages a "tinker until it
looks like it works" approach, which doesn't prove that it would have
worked if you'd started from scratch. Unit testing gives a better
solution to this IMO. Edit and continue is useful for getting UIs to
look right though, I'll readily admit that.)

Edit and continue is a very different feature to incremental
compilation - it's perfectly possible to have incremental compilation
without E&C, although I guess the other way round is pretty tricky.

Right, it's not quite the same, but the concept of changing something
and running a test without building and deploying is the same. I
didn't use it much because it would pop up a dialog telling me the
replacement failed.

Can you elaborate on 'slightly different' in the first paragraph, and
'very different' in the second? The underlying workings are not even
close, but the concept seems similar to me.

Tim
 

Jon Skeet [C# MVP]

Right, it's not quite the same, but the concept of changing something
and running a test without building and deploying is the same.

The difference is that running unit tests after incremental
compilation is basically running the tests from a clean start.
Changing the code while the app is still running and assuming that
your change wouldn't have had any consequences earlier in the app's
run is a lot riskier to me.
I didn't use it much because it would pop up a dialog telling me the
replacement failed.

That's the other problem, of course - not all changes can be done "on
the fly".
Can you elaborate on 'slightly different' in the first paragraph, and
'very different' in the second? The underlying workings are not even
close, but the concept seems similar to me.

To me it's a matter of repeatability and confidence. If I change
something based on a manual test run, I'll want to rerun the *whole*
test again after fixing it anyway, to make sure I haven't screwed
anything else up - so I might as well stop the application. The same
is true for unit tests, except they're designed to run so quickly that
you usually don't bother stopping a test run - just let it run and
fail, fix the bug, then rerun.

And as you say, the implementation of the two features is vastly
different - incremental compilation doesn't require any support from
the runtime itself, for starters.

Jon
 

Andy

It depends on what you mean by "a huge amount of time". To me, 10
seconds is enough to be a distraction.

My builds typically take one second or less.
Well, that's effectively part of the cycle, possibly again due to the
way that .NET assemblies work (as well as due to the lack of tight
integration between VS and NUnit).

I'm not sure it has anything to do with lack of integration,
especially considering that after upgrading to NUnit 2.4, assembly
reload DOES take significantly longer (i.e., more than 15 seconds). I
think it's more a function of how NUnit is built.
I really don't mean to be patronising (and I know perfectly well how
patronising this sounds) but it's easy to say that something doesn't
have an effect if you've never been without it. I didn't realise quite
how useful closures could be until I started working in a language
which supported them really well.

I did have it before though. I've been VS-only for a while now, but I
have done testing and used compilers that did incremental compilation.
That doesn't mean it's actually being compiled - just error checked.
The important thing is whether or not it's constantly updating the
assemblies. What does it still need to do when you hit build?

It would seem redundant to build error-checking rules into the IDE
that the compiler already has. I must admit I don't know enough about
it to answer your question because I'm not a VB.NET guy, but from what
I've read it does seem to compile it (because you can change code
while the program is running and let it continue). From what I can
gather, it would seem pretty much the same as what Eclipse would do.
I'll have a look at what beta 1 does on my virtual PC.

As will I... if I ever get time. I'm not 100% sure if it's a planned
feature or not; it's something I've seen around on newsgroups though.
As I say, it's about 10 seconds on the solutions I'm working on at the
moment, and that's a lot longer than with a much bigger project in
Eclipse.

Fair enough... but by bigger do you mean more classes, more KLOC, etc?
Class libraries and forms, or class libraries and web services,
depending on the solution.

I typically keep solutions to the main project and its test project.
The fact that you're thinking of changing the design of your
application even *partly* due to compilation time means that it *is*
significant to you though.

No, it's separated due to functional area. I have two (and more to
come) applications which do have overlap in use cases. I'm trying to
keep things separate so that each application only gets the
functionality it needs and doesn't reference an assembly only to
leave 90% of it unused. I also keep my logical tiers as separate
physical tiers as well.
It's lovely from what I've experienced - although I think it does make
the editor slightly less responsive. The trade-off is worth it for me.

I'll have to see if there's a trial version then. The designer can be
slower than I'd like, but the code editor is pretty snappy.
 

Jon Skeet [C# MVP]

Andy said:
My builds typically take one second or less.

That would certainly explain why you don't see it as a problem - but
please believe that it *is* an issue for those of us who *don't* have a
one second build time.

Now, there's an open question as to quite *why* my builds are taking so
long. If it's that rare, it sounds like it's worth investigating a bit
further. Unfortunately I don't have any projects of any size at home
(although even my small MiscUtil project takes more than a second, if
I've made any changes).

My suspicion is that one of our pre-build steps is triggering a rebuild
of all the projects in the solution every time, which would account for
it, but I'll have to see.
I'm not sure it has anything to do with lack of integration,
especially considering that after upgrading to NUnit 2.4, assembly
reload DOES take significantly longer (i.e., more than 15 seconds). I
think it's more a function of how NUnit is built.

Wow - I don't think reload takes nearly that long for me, despite the
longer build time. Of course, that's another benefit of Java's "one
file per class" system - there's not a lot to load if you only use a
few classes.
I did have it before though. I've been VS-only for a while now, but I
have done testing and used compilers that did incremental compilation.

If you've got a subsecond build already, then I agree there's no
significant problem (except for reloading, by the sounds of it!).
It would seem redundant to build error-checking rules into the IDE
that the compiler already has. I must admit I don't know enough about
it to answer your question because I'm not a VB.NET guy, but from what
I've read it does seem to compile it (because you can change code
while the program is running and let it continue). From what I can
gather, it would seem pretty much the same as what Eclipse would do.

No - Eclipse doesn't do actual compilation until you save. It spots
some errors, but not all. For instance, if I delete a method that other
classes depend on, it won't produce any errors until I save.

In VS2005, the C# part of the IDE does some error checking without full
compilation, too. It's less work to parse the code, work out what
members are in which types, etc, than it is to do that *and* generate
the actual IL.
As will I... if I ever get time. I'm not 100% sure if it's a planned
feature or not; it's something I've seen around on newsgroups though.

Right. Oh the things we could do if we had more time.
Fair enough... but by bigger do you mean more classes, more KLOC, etc?

Bigger in just about every way. More classes, more KLOC, more projects
in the workspace (equivalent to the solution).
I typically keep solutions to the main project and its test project.

That would certainly help to keep your build time down - but it does
have other disadvantages. It sounds pretty inconvenient if you have
several projects, and have to open up that many copies of Visual Studio
if you end up changing lots of them. (Does "Go to definition" still let
you bring up source code in your situation, btw? Not having that would
be a major inconvenience for me.)
No, it's separated due to functional area. I have two (and more to
come) applications which do have overlap in use cases. I'm trying to
keep things separate so that each application only gets the
functionality it needs and doesn't reference an assembly only to
leave 90% of it unused. I also keep my logical tiers as separate
physical tiers as well.

That all sounds good - but you did say that improving compile time was
one of the reasons for changing.
I'll have to see if there's a trial version then. The designer can be
slower than I'd like, but the code editor is pretty snappy.

Yup, you can download a trial for free:
http://www.jetbrains.com/resharper/
 

Arne Vajhøj

Jon said:
Are you suggesting you never find yourself waiting for a build?

Oh yes. For a 2- or 8-hour build.

But not for a JBuilder or VS single project build.
Even 15
seconds? If a build takes 15 seconds and you want to build 20 times in
an hour (which I certainly do with TDD) that means I'm wasting 5
minutes. Taking a 5 minute break is one thing - that's a good way of
relaxing etc - but being held up for 15 seconds 20 times can be very
frustrating. It's not long enough to do anything useful (other than
maybe sip a drink) but it's long enough to irritate.

I think the key point is the irritation.

If you do get irritated, then it is a problem.

But I just relax or look at something else.
I suspect two factors contribute to the level of annoyance I feel which
others apparently don't:
1) TDD really relies on a faster turnaround
2) When working with Java in Eclipse, such delays are very rare

If you're not working in a test-driven way, but instead developing
large portions of code, then building, then developing the next large
portion of code, you'll probably spend a smaller proportion of your
time building. (I'd argue you'll spend a higher proportion of your time
debugging, but there we go...)

In my experience the time spent typing in code, building, running unit
tests etc. is a very small part of creating the software.
Likewise without having experienced a really good incremental build
system, it's hard to appreciate the pain of not having one.

I have spent more time in Eclipse than in VS so ...

Arne
 

Andy

That would certainly explain why you don't see it as a problem - but
please believe that it *is* an issue for those of us who *don't* have a
one second build time.

Understandable, and that would probably lead me to test a bit less
often, depending on how long the build takes.
Now, there's an open question as to quite *why* my builds are taking so
long. If it's that rare, it sounds like it's worth investigating a bit
further. Unfortunately I don't have any projects of any size at home
(although even my small MiscUtil project takes more than a second, if
I've made any changes).

It could be as simple as a slow HD.
My suspicion is that one of our pre-build steps is triggering a rebuild
of all the projects in the solution every time, which would account for
it, but I'll have to see.

That very well could be the issue. As I said, I know my Forms
projects take quite a bit longer, because of all the embedded
resources being compiled in.
Wow - I don't think reload takes nearly that long for me, despite the
longer build time. Of course, that's another benefit of Java's "one
file per class" system - there's not a lot to load if you only use a
few classes.

It never used to; a second or two at most. As soon as I upgraded to
2.4, reload time increased. On the NUnit list, it seems others are
experiencing some performance issues as well. So I'm not sure that
one file per class would help as much. I'm also not sure it would
work in .NET anyway; doesn't NUnit load the tests and tested assemblies
in another AppDomain? Once loaded, you'd have to tear down and
recreate the entire AppDomain anyway.
If you've got a subsecond build already, then I agree there's no
significant problem (except for reloading, by the sounds of it!).

And that seems to be strictly an NUnit issue, as 2.2.9 was fine.
No - Eclipse doesn't do actual compilation until you save. It spots
some errors, but not all. For instance, if I delete a method that other
classes depend on, it won't produce any errors until I save.

Got ya; saving is actually something that could take some time,
because VS will do a checkout if the file isn't already checked out by
me... but that's just the first time. I think the VB.NET IDE will
catch your mentioned error though. From what I've heard, anyway ;-)
That would certainly help to keep your build time down - but it does
have other disadvantages. It sounds pretty inconvenient if you have
several projects, and have to open up that many copies of Visual Studio
if you end up changing lots of them. (Does "Go to definition" still let
you bring up source code in your situation, btw? Not having that would
be a major inconvenience for me.)

I usually work on one layer at a time anyway, so I don't leave my
business layer until tests and coding are done. Then I do the UI. I
don't spend much time in my data layer because I generate most of it
off my table definitions. Go to definition works the same as if you
had done it to a framework class; you get the definition based on the
metadata. When debugging, though, I can still step into the referenced
assemblies, once I tell VS to load the module and point it to a PDB
and source code location (which it remembers from then on). It can be
an inconvenience, but at the same time I try to use my classes as black
boxes (until I try debugging when they aren't working well together)...
build to an interface, after all. ;-)
That all sounds good - but you did say that improving compile time was
one of the reasons for changing.

I don't recall where I said that; if you could point me to a post
where I said that, I'll check it out. But to be clear here and now, I
never separated anything to improve compile time; I only did it to
separate tiers or so that a useful class can be shared among
applications (or other assemblies, if we're talking the lower level
ones). I started out knowing I'd have multiple solutions and projects
and had an idea of what I wanted where.
Yup, you can download a trial for free: http://www.jetbrains.com/resharper/

I'll go do that, thanks!
 

Jon Skeet [C# MVP]

Andy said:
Understandable, and that would probably lead me to test a bit less
often, depending on how long the build takes.

Fortunately, I've now made it significantly faster by fixing an issue
where a dependency kept getting rebuilt. It still takes longer than I'd
like to go through 8 projects spotting that all but one of them are up
to date, but it's a *lot* better than before.

It never used to; a second or two at most. As soon as I upgraded to
2.4, reload time increased. On the NUnit list, it seems others are
experiencing some performance issues as well. So I'm not sure that
one file per class would help as much. I'm also not sure it would
work in .NET anyway; doesn't NUnit load the tests and tested assemblies
in another AppDomain? Once loaded, you'd have to tear down and
recreate the entire AppDomain anyway.

The point is that in a world without "large" assemblies, there's less
to load each time. It's not something that's likely to be changed, of
course...
And that seems to be strictly an NUnit issue, as 2.2.9 was fine.

Have you tried Resharper UnitRun? I only found out about it today - a
better way of hosting unit tests than NUnit GUI, but free :)
http://www.jetbrains.com/unitrun/

It *may* be faster at reloading things - I don't know.
Got ya; saving is actually something that could take some time,
because VS will do a checkout if the file isn't already checked out by
me... but that's just the first time. I think the VB.NET IDE will
catch your mentioned error though. From what I've heard, anyway ;-)
:)


I usually work on one layer at a time anyway, so I don't leave my
business layer until tests and coding are done. Then I do the UI. I
don't spend much time in my data layer because I generate most of it
off my table definitions. Go to definition works the same as if you
had done it to a framework class; you get the definition based on the
metadata. When debugging, though, I can still step into the referenced
assemblies, once I tell VS to load the module and point it to a PDB
and source code location (which it remembers from then on). It can be
an inconvenience, but at the same time I try to use my classes as black
boxes (until I try debugging when they aren't working well together)...
build to an interface, after all. ;-)

That makes sense when you only have one assembly per layer, but I tend
to have common assemblies, and occasionally thin "mini-layers" which
get their own assemblies but don't make sense to develop in isolation.
Also, making a change through all the layers is a lot easier to do when
everything's in one solution :)
I don't recall where I said that, if you could point me to a post
where I said that I'll check it out.

<quote>
Of course I'm planning on moving most of the forms to
library assemblies though, both to speed project compile time and
speed the loading of my application.
</quote>

<snip rest>
 

Andy

Fortunately, I've now made it significantly faster by fixing an issue
where a dependency kept getting rebuilt. It still takes longer than I'd
like to go through 8 projects spotting that all but one of them are up
to date, but it's a *lot* better than before.

Excellent, at least you've improved things for yourself.
The point is that in a world without "large" assemblies, there's less
to load each time. It's not something that's likely to be changed, of
course...

Hmm... possibly. There are so many ways to 'reload' though. I wouldn't
be surprised if many smaller files loaded more slowly, as opening a
file is typically expensive, and loading the same amount of data from
one file vs. more smaller files may actually result in the one large
file being 'faster.' Of course my information could be outdated... I
remember that from the days of DOS.
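That trade-off can be sketched as below - a toy measurement, not a proper benchmark (results depend heavily on the OS and cache state, and the file names and sizes are invented):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class LoadCostSketch {
    // Reads every file and reports total bytes plus elapsed time, so the
    // per-file open/close overhead of many small files can be compared
    // against one large file of the same total size.
    public static long readAll(Path... files) throws Exception {
        long start = System.nanoTime();
        long bytes = 0;
        for (Path p : files) {
            bytes += Files.readAllBytes(p).length;
        }
        System.out.println(files.length + " file(s), " + bytes + " bytes, "
            + (System.nanoTime() - start) / 1000 + " us");
        return bytes;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("io");
        byte[] chunk = new byte[4096];

        // 100 small "class files" vs one blob of the same total size.
        Path[] small = new Path[100];
        for (int i = 0; i < small.length; i++) {
            small[i] = dir.resolve("c" + i + ".class");
            Files.write(small[i], chunk);
        }
        Path big = dir.resolve("all.jar");
        Files.write(big, new byte[chunk.length * small.length]);

        readAll(small);              // 100 opens
        readAll(new Path[] { big }); // 1 open, same total bytes
    }
}
```

Jon's counterpoint still applies: if a test run only touches a few classes, the many-small-files layout reads less data overall, which can outweigh the extra opens.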
Have you tried ReSharper UnitRun? I only found out about it today - a
better way of hosting unit tests than NUnit GUI, but free :)
http://www.jetbrains.com/unitrun/
It *may* be faster at reloading things - I don't know.

I haven't, but I did notice some unit testing commands in the menus
now.
That makes sense when you only have one assembly per layer, but I tend
to have common assemblies, and occasionally thin "mini-layers" which
get their own assemblies but don't make sense to develop in isolation.

I have multiple assemblies per layer, all separated by functional area
(with some interfaces to allow some basic communication between the
two).
Also, making a change through all the layers is a lot easier to do when
everything's in one solution :)

Yes, that I won't be disputing.
<quote>
Of course I'm planning on moving most of the forms to
library assemblies though, both to speed project compile time and
speed the loading of my application.
</quote>

Oops... that's not quite what I meant to say. Replace "both to" with
"which would." Application startup time is something that concerns me,
and there's some reuse I would like to get (there are a few use cases
shared in both applications). Compile time, though, has not been a
factor for me in deciding what goes where.
 

Jon Skeet [C# MVP]

Andy said:
Excellent, at least you've improved things for yourself.

Indeed. Life became significantly more pleasant today :)
Hmm... possibly. There are so many ways to 'reload' though. I wouldn't
be surprised if many smaller files loaded more slowly, as opening a
file is typically expensive, and loading the same amount of data from
one file vs. more smaller files may actually result in the one large
file being 'faster.' Of course my information could be outdated... I
remember that from the days of DOS.

That would make sense if *all* of the files eventually needed to be
loaded. With unit tests, however, you tend (at least attempt!) to only
load a few classes.

I have multiple assemblies per layer, all separated by functional area
(with some interfaces to allow some basic communication between the
two).

I guess it depends on exactly what the problem is, but I've certainly
often been in situations where it *definitely* helps to have 5-10
projects in a solution.

Oops... that's not quite what I meant to say. Replace "both to" with
"which would." Application startup time is something that concerns me,
and there's some reuse I would like to get (there are a few use cases
shared in both applications). Compile time, though, has not been a
factor for me in deciding what goes where.

Fair enough :)
 
