Brett.Shearer.AUS
I work on a reasonably large project (if yours is bigger, please tell
me just how big!) and am wondering how other people partition their
assemblies and deal with slow build times.
Our masterfiles assembly (which contains essentially just reference-file
business objects) is currently 6 MB in size, and changing one line
of code causes a complete recompile.
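One common workaround (a minimal sketch, not necessarily a fit for this project) is to split the large assembly into several smaller projects joined by project-to-project references, so that MSBuild only rebuilds the project whose sources actually changed. The project names MasterFiles.Core and MasterFiles.Customers below are hypothetical, purely for illustration:

```
<!-- MasterFiles.Customers.csproj (hypothetical name): a small project that
     references the stable core assembly. A one-line change to Customer.cs
     recompiles only this project; MasterFiles.Core is left alone because
     its inputs are up to date. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <!-- Project-to-project reference: MSBuild checks timestamps and skips
         rebuilding MasterFiles.Core when nothing in it has changed. -->
    <ProjectReference Include="..\MasterFiles.Core\MasterFiles.Core.csproj" />
    <Compile Include="Customer.cs" />
  </ItemGroup>
</Project>
```

The trade-off is more assemblies to load and version, so the split only pays off if the boundaries follow lines that rarely change together.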
I noticed that older versions of csc.exe supported the /incremental
build option. Why was it removed? It seems like the perfect solution
to really slow builds.
Is there also any reason why the compiler is not multithreaded? I have
a quad-core machine that is currently sitting at a whopping 25% CPU
utilisation...
I watched a video the other day espousing Vista's new features, in
which a Microsoft engineer said Intel would be releasing a 60-core
machine within the next five years.
Does this mean I will get 1.5% CPU utilisation when compiling in five
years' time? Or will Orcas come to the rescue?